Follow the on-screen instructions to install the app. Congratulations, you have successfully installed AetherSX2 APK version 6.0 on your Android device.
-How to use AetherSX2 APK version 6.0
-How to load PS2 games in AetherSX2
-To load PS2 games in AetherSX2, you need the PS2 BIOS file and a PS2 game ISO file in your device's storage. You can copy them from your PC using a USB cable or a cloud service. Alternatively, you can download them from the Internet, but make sure they are legal and safe.
-
-Once you have the files, follow these steps:
-
-Launch the AetherSX2 app and grant the necessary permissions.
-Tap the "Settings" icon in the top-right corner of the screen.
-Tap the "BIOS" option and select the PS2 BIOS file from your device's storage.
-Tap the "Back" button to return to the main menu.
-Tap the "Games" icon in the bottom-left corner of the screen.
-
-Tap the game's cover art to start playing.
-
-How to configure settings and controls in AetherSX2
-To configure settings and controls in AetherSX2, you can access the "Settings" menu from the main menu or by tapping the "Menu" button while playing a game. From there, you can adjust various options, such as:
-
-Graphics: You can change the resolution, aspect ratio, frame rate, anti-aliasing, texture filtering, and more.
-Sound: You can enable or disable sound effects, music, and voice, as well as adjust the volume and latency.
-Controls: You can customize the layout, size, opacity, and vibration of the virtual buttons, as well as use a physical controller or keyboard if you have one connected to your device.
-Cheats: You can enable or disable cheats for your games, such as infinite health, money, ammo, and more.
-Advanced: You can tweak some advanced settings that may improve or worsen your gaming experience, such as speed hacks, patches, plugins, and more.
-
-Note that some settings may require restarting the app or the game to take effect. Also, some settings may not work for all games or devices, so experiment with them at your own risk.
-Pros and cons of AetherSX2 APK version 6.0
-Pros
-Some of the pros of AetherSX2 APK version 6.0 are:
-
- It is free and open source, which means you don't have to pay anything or worry about malware or ads.
- It has high compatibility and performance, which means it can run most PS2 games smoothly.
- It offers various settings and options that let you customize your gaming experience, such as graphics, sound, controls, cheats, and more.
- It has online multiplayer support, which means you can play with your friends over the Internet using a Wi-Fi connection.
-
-
-Cons
-Some of the cons of AetherSX2 APK version 6.0 are:
-
- It requires a powerful device to run properly, which means it may not perform well on low-end or older devices.
-It requires a PS2 BIOS file and PS2 game ISO files to play, which means you need access to a PS2 console or a PC to obtain them legally.
-It may not be compatible with all PS2 games or devices, which means some games may not work properly or at all.
- It may have some bugs or glitches that can affect your gaming experience, such as crashes, freezes, graphical issues, etc.
-
- Conclusion
- AetherSX2 APK version 6.0 is a new and improved PS2 emulator for Android devices that lets you play PS2 games on your smartphone or tablet. It is based on the popular PCSX2 emulator for PC, but optimized for mobile devices. It supports most PS2 games, including popular titles such as Final Fantasy X, Kingdom Hearts, God of War, and more. It has various features and benefits, such as high compatibility and performance, a user-friendly interface, online multiplayer support, cloud saving support, and more. It also has some drawbacks, such as requiring a powerful device, a PS2 BIOS file, and PS2 game ISO files, and having some bugs and glitches. However, if you are a fan of PS2 games and want to play them on your Android device, AetherSX2 APK version 6.0 is worth a try.
- FAQs
-Here are some frequently asked questions about AetherSX2 APK version 6.0:
-
- Is AetherSX2 APK version 6.0 safe and legal?
-AetherSX2 APK version 6.0 is safe and legal, as long as you download it from the official website and use your own PS2 BIOS file and PS2 game ISO files. However, downloading the PS2 BIOS file and PS2 game ISO files from the Internet may be illegal in some countries, so do so at your own risk.
- How can I improve the performance of AetherSX2 APK version 6.0?
-
- How can I play online multiplayer games in AetherSX2 APK version 6.0?
-You can play online multiplayer games in AetherSX2 APK version 6.0 by using a Wi-Fi connection and enabling the "Online Multiplayer" option in the app. You can then join or create a room with other players who are using the same app and game.
- How can I save and load my progress in AetherSX2 APK version 6.0?
-You can save and load your progress in AetherSX2 APK version 6.0 by using the "Save State" and "Load State" options in the app. You can also use the "Cloud Saving" option to save your progress online and access it from any device.
- Where can I get more information and support for AetherSX2 APK version 6.0?
-You can get more information and support for AetherSX2 APK version 6.0 by visiting the official website, the official Discord server, or the official Reddit community. You can also contact the developers by email at aethersx2@gmail.com.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fifa 4 En Lnea.md b/spaces/Benson/text-generation/Examples/Descargar Fifa 4 En Lnea.md
deleted file mode 100644
index 6a999385af8797899402170f639400d3968f5ad9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fifa 4 En Lnea.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-Download FIFA 4 Online: A Beginner's Guide
-If you are a fan of soccer games, you may have heard of FIFA 4 Online, the latest installment in the popular EA Sports series. FIFA 4 Online is a free-to-play online soccer game that lets you create your own team, compete with other players, and enjoy the thrill of the beautiful game. In this article, we will show you how to download FIFA 4 Online, how to play it, and why you should give it a try.
-What is FIFA 4 Online?
-A brief introduction to the game and its features
-FIFA 4 Online is an online multiplayer soccer game developed by EA Spearhead and published by Nexon. It is based on the FIFA series, but with some unique features and improvements. Some of the features of FIFA 4 Online are:
-
-Realistic graphics and animations that capture the essence of soccer
-Over 15,000 players from more than 40 leagues and national teams
-A variety of game modes, such as season mode, tournament mode, match mode, and practice mode
-A custom team builder that lets you create your own team and choose your formation, tactics, and kits
-A player development system that lets you improve your players' skills, abilities, and attributes
-A market system that lets you buy and sell players, items, and coins
-A ranking system that measures your performance and matches you with players of a similar skill level
-A social system that lets you chat, interact, and cooperate with other players
-
-How to download and install FIFA 4 Online
-To play FIFA 4 Online, you need to download and install the game client on your PC. Here are the steps to do so:
-
-Go to the official FIFA 4 Online website and choose your region.
-Click the "Download" button and follow the instructions to download the game installer.
-
-Wait for the installation to finish and launch the game.
-
-How to create an account and log in
-To play FIFA 4 Online, you need to create an account and log in with your credentials. Here are the steps to do so:
-
-In the game launcher, click the "Sign Up" button and fill in your email address, password, nickname, and security question.
-Verify your email address by clicking the link sent to your inbox.
-Log in with your email address and password in the game launcher.
-Choose a server and a channel to enter the game.
-
- How to play FIFA 4 Online
- Game modes and options
- FIFA 4 Online offers a variety of game modes and options for different preferences and play styles. Some of the game modes and options are:
-
- Season mode: This is the main mode of the game, where you can play through a full season with your team. You can choose from different leagues, such as the Premier League, Bundesliga, La Liga, Serie A, Ligue 1, K League, CSL, etc. You can also take part in cup competitions, such as the FA Cup, Champions League, Europa League, etc. You can earn coins, items, players, and trophies by completing matches and achievements.
-Tournament mode: This is a mode where you can join or create a tournament with other players. You can choose from different formats, such as knockout, round-robin, league, etc. You can also set the rules, such as match duration, difficulty, team rating, etc. You can win prizes and rewards by advancing in the tournament.
-Match mode: This is a mode where you can play a single match against another player or the computer. You can choose from different options, such as friendly match, ranked match, custom match, etc. You can also select the stadium, weather, time, etc. You can earn coins and experience by playing matches.
-
-
- Controls and interface
- FIFA 4 Online has a simple, intuitive control system and an interface that makes the game easy to play. Some of the controls and interface elements are:
-
- Keyboard and mouse: You can use the keyboard and mouse to control your players and navigate the menus. The default keys are W, A, S, and D for movement, Q and E to switch players, Space to sprint, left click to pass and shoot, right click to tackle and slide, etc. You can also customize the keys in the settings menu.
-Gamepad: You can use a gamepad to control your players and navigate the menus. The default buttons are the left stick for movement, the right stick for skill moves, L1 and R1 to switch players, L2 to sprint, X to pass and shoot, O to tackle and slide, etc. You can also customize the buttons in the settings menu.
-HUD: The HUD (heads-up display) shows you the information and options you need during the game. The HUD elements are the score, time, stamina, radar, player names, player ratings, etc. You can also access the pause menu, chat window, quick commands, etc. from the HUD.
-
- Tips and tricks for beginners
- FIFA 4 Online is a fun and challenging game that takes skill and strategy to master. Here are some tips and tricks for beginners that can help you improve your game:
-
- Choose your team wisely: Your team is your most important asset in FIFA 4 Online. You should pick a team that suits your play style and preferences. You can also customize your team by buying and selling players, changing formations and tactics, upgrading skills and attributes, etc.
-
-Play smart: FIFA 4 Online is not just about scoring goals and winning matches. It is also about playing smart and using your brain. You should analyze your opponent's strengths and weaknesses, adapt to different situations and scenarios, use different strategies and formations, etc.
-Have fun: The most important tip for beginners is to have fun playing FIFA 4 Online. Don't get frustrated or angry if you lose or make mistakes. Instead, learn from your mistakes and improve your game. Enjoy the thrill of soccer and have fun with other players.
-
- Why you should play FIFA 4 Online
- The benefits of playing online soccer games
- Playing online soccer games like FIFA 4 Online has many benefits that can enrich your life in various ways. Some of the benefits are:
-
- Entertainment: Playing online soccer games is a great way to entertain yourself and have fun. You can enjoy the thrill of soccer without leaving your home or spending money on tickets or equipment.
-Education: Playing online soccer games can also teach you about soccer and other aspects of life. You can learn about different teams, players, leagues, cultures, history, geography, etc. You can also improve your cognitive skills, such as memory, concentration, problem solving, etc.
-Exercise: Playing online soccer games can also help you exercise your body and mind. You can burn calories, strengthen your muscles, improve your coordination, etc. by moving your fingers, hands, arms, and so on. You can also stimulate your brain, relieve stress, boost your mood, etc. by playing online soccer games.
-
-
- The FIFA 4 Online community and events
- FIFA 4 Online has a large, active community of players who share a passion for soccer and gaming. You can join the community and enjoy the various events and activities that FIFA 4 Online offers. Some of the FIFA 4 Online community spaces and events are:
-
- Forum: The forum is where you can communicate with other players and the FIFA 4 Online developers. You can post your questions, suggestions, feedback, bug reports, etc. You can also read the latest news, announcements, guides, tips, etc. from the official staff.
-Blog: The blog is where you can read stories and experiences from other players and the FIFA 4 Online developers. You can also share your own stories and experiences by writing a blog post, and you can comment on other posts and interact with other bloggers.
-Facebook: The Facebook page is where you can follow FIFA 4 Online updates and events. You can also like, share, and comment on the posts and photos, and take part in contests and giveaways to win prizes and rewards.
-YouTube: The YouTube channel is where you can watch FIFA 4 Online videos and live streams. You can also subscribe and comment on the videos, and take part in chats and polls to interact with other viewers.
-Discord: The Discord server is where you can join FIFA 4 Online voice and text chats. You can also create or join rooms and channels for different topics and purposes, and use bots and commands to enhance your experience.
-
- The rewards and achievements of FIFA 4 Online
-
-
- Coins: Coins are the currency of FIFA 4 Online, which you can use to buy players, items, and so on. You can earn coins by playing matches, completing achievements, taking part in events, etc.
-Items: Items are the consumables of FIFA 4 Online, which you can use to improve your team or players. You can earn items by playing matches, completing achievements, taking part in events, etc.
-Players: Players are the core of FIFA 4 Online; you can use them to build your team or sell them for coins. You can earn players by playing matches, completing achievements, taking part in events, etc.
-Trophies: Trophies are the symbols of your achievements and progress in FIFA 4 Online. You can earn trophies by playing matches, completing achievements, taking part in events, etc.
-Achievements: Achievements are the challenges and goals you can complete in FIFA 4 Online. You can earn achievements by performing various tasks and actions in the game, such as scoring goals, winning matches, building teams, etc.
-
- Conclusion
- FIFA 4 Online is a free-to-play online soccer game that offers a realistic, immersive experience of the beautiful game. You can download FIFA 4 Online, create your own team, play with other players, and enjoy the game's various features and modes. You can also join the FIFA 4 Online community and events and earn rewards and achievements for your performance and progress. FIFA 4 Online is a game you should not miss if you are a fan of soccer and gaming.
-
- So, what are you waiting for? Download FIFA 4 Online today and start your soccer journey!
- FAQs
- Q: What are the system requirements for FIFA 4 Online?
-A: The minimum system requirements for FIFA 4 Online are:
-
-OS: Windows 7 or higher
-CPU: Intel Core i3 or higher
-RAM: 4 GB or more
-GPU: NVIDIA GeForce GT 630 or higher
-
-Internet: Broadband connection or better
-
- Q: How can I contact FIFA 4 Online customer support?
-A: You can contact FIFA 4 Online customer support using the following methods:
-
-Email: support@fifa4online.com
-Phone: +82-2-1234-5678
-Live chat: Available on the official FIFA 4 Online website
-
- Q: How can I report a bug or a hacker in FIFA 4 Online?
-A: You can report a bug or a hacker in FIFA 4 Online using the following methods:
-
-In-game report: You can use the report button in the HUD or the pause menu to report a bug or a hacker during a match.
-Forum report: You can use the report section of the forum to report a bug or a hacker, with screenshots or videos as evidence.
-Email report: You can use the email address report@fifa4online.com to report a bug or a hacker, with screenshots or videos as evidence.
-
- Q: How can I get more coins and items in FIFA 4 Online?
-A: You can get more coins and items in FIFA 4 Online using the following methods:
-
-Play matches: You can earn coins and items by playing matches in different modes and difficulties.
-Complete achievements: You can earn coins and items by completing achievements in different categories and tiers.
-Take part in events: You can earn coins and items by taking part in events that are held regularly or occasionally.
-Buy coins and items: You can buy coins and items with real money using the market system or the official FIFA 4 Online website.
-
- Q: How can I improve my skills and tactics in FIFA 4 Online?
-A: You can improve your skills and tactics in FIFA 4 Online using the following methods:
-
-Practice skills: You can practice your skills and moves in practice mode or against easy opponents.
-
-Watch replays: You can watch your own replays or other players' replays to analyze your mistakes and improve your game.
-Ask for advice: You can ask other players or experts for advice in the chat, the forum, the blog, etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/densepose.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/densepose.py
deleted file mode 100644
index ba561cac55691c887f186194d859b03354a755a8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/densepose.py
+++ /dev/null
@@ -1,581 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import numpy as np
-from typing import Iterable, Optional, Tuple
-import cv2
-
-from ..structures import DensePoseDataRelative, DensePoseOutput, DensePoseResult
-from .base import Boxes, Image, MatrixVisualizer, PointsVisualizer
-
-
-class DensePoseResultsVisualizer(object):
- def visualize(self, image_bgr: Image, densepose_result: Optional[DensePoseResult]) -> Image:
- if densepose_result is None:
- return image_bgr
- context = self.create_visualization_context(image_bgr)
- for i, result_encoded_w_shape in enumerate(densepose_result.results):
- iuv_arr = DensePoseResult.decode_png_data(*result_encoded_w_shape)
- bbox_xywh = densepose_result.boxes_xywh[i]
- self.visualize_iuv_arr(context, iuv_arr, bbox_xywh)
- image_bgr = self.context_to_image_bgr(context)
- return image_bgr
-
-
-class DensePoseMaskedColormapResultsVisualizer(DensePoseResultsVisualizer):
- def __init__(
- self,
- data_extractor,
- segm_extractor,
- inplace=True,
- cmap=cv2.COLORMAP_PARULA,
- alpha=0.7,
- val_scale=1.0,
- ):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=val_scale, alpha=alpha
- )
- self.data_extractor = data_extractor
- self.segm_extractor = segm_extractor
-
- def create_visualization_context(self, image_bgr: Image):
- return image_bgr
-
- def context_to_image_bgr(self, context):
- return context
-
- def get_image_bgr_from_context(self, context):
- return context
-
- def visualize_iuv_arr(self, context, iuv_arr, bbox_xywh):
- image_bgr = self.get_image_bgr_from_context(context)
- matrix = self.data_extractor(iuv_arr)
- segm = self.segm_extractor(iuv_arr)
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[segm > 0] = 1
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh)
- return image_bgr
-
-
-def _extract_i_from_iuvarr(iuv_arr):
- return iuv_arr[0, :, :]
-
-
-def _extract_u_from_iuvarr(iuv_arr):
- return iuv_arr[1, :, :]
-
-
-def _extract_v_from_iuvarr(iuv_arr):
- return iuv_arr[2, :, :]
-
-
-class DensePoseResultsMplContourVisualizer(DensePoseResultsVisualizer):
- def __init__(self, levels=10, **kwargs):
- self.levels = levels
- self.plot_args = kwargs
-
- def create_visualization_context(self, image_bgr: Image):
- import matplotlib.pyplot as plt
- from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
-
- context = {}
- context["image_bgr"] = image_bgr
- dpi = 100
- height_inches = float(image_bgr.shape[0]) / dpi
- width_inches = float(image_bgr.shape[1]) / dpi
- fig = plt.figure(figsize=(width_inches, height_inches), dpi=dpi)
- plt.axes([0, 0, 1, 1])
- plt.axis("off")
- context["fig"] = fig
- canvas = FigureCanvas(fig)
- context["canvas"] = canvas
- extent = (0, image_bgr.shape[1], image_bgr.shape[0], 0)
- plt.imshow(image_bgr[:, :, ::-1], extent=extent)
- return context
-
- def context_to_image_bgr(self, context):
- fig = context["fig"]
- w, h = map(int, fig.get_size_inches() * fig.get_dpi())
- canvas = context["canvas"]
- canvas.draw()
-        image_1d = np.frombuffer(canvas.tostring_rgb(), dtype="uint8")  # np.fromstring is deprecated for binary data
- image_rgb = image_1d.reshape(h, w, 3)
- image_bgr = image_rgb[:, :, ::-1].copy()
- return image_bgr
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> Image:
- import matplotlib.pyplot as plt
-
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- extent = (
- bbox_xywh[0],
- bbox_xywh[0] + bbox_xywh[2],
- bbox_xywh[1],
- bbox_xywh[1] + bbox_xywh[3],
- )
- plt.contour(u, self.levels, extent=extent, **self.plot_args)
- plt.contour(v, self.levels, extent=extent, **self.plot_args)
-
-
-class DensePoseResultsCustomContourVisualizer(DensePoseResultsVisualizer):
- """
- Contour visualization using marching squares
- """
-
- def __init__(self, levels=10, **kwargs):
- # TODO: colormap is hardcoded
- cmap = cv2.COLORMAP_PARULA
- if isinstance(levels, int):
- self.levels = np.linspace(0, 1, levels)
- else:
- self.levels = levels
- if "linewidths" in kwargs:
- self.linewidths = kwargs["linewidths"]
- else:
- self.linewidths = [1] * len(self.levels)
- self.plot_args = kwargs
- img_colors_bgr = cv2.applyColorMap((self.levels * 255).astype(np.uint8), cmap)
- self.level_colors_bgr = [
- [int(v) for v in img_color_bgr.ravel()] for img_color_bgr in img_colors_bgr
- ]
-
- def create_visualization_context(self, image_bgr: Image):
- return image_bgr
-
- def context_to_image_bgr(self, context):
- return context
-
- def get_image_bgr_from_context(self, context):
- return context
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> Image:
- image_bgr = self.get_image_bgr_from_context(context)
- segm = _extract_i_from_iuvarr(iuv_arr)
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- self._contours(image_bgr, u, segm, bbox_xywh)
- self._contours(image_bgr, v, segm, bbox_xywh)
-
- def _contours(self, image_bgr, arr, segm, bbox_xywh):
- for part_idx in range(1, DensePoseDataRelative.N_PART_LABELS + 1):
- mask = segm == part_idx
- if not np.any(mask):
- continue
- arr_min = np.amin(arr[mask])
- arr_max = np.amax(arr[mask])
- I, J = np.nonzero(mask)
- i0 = np.amin(I)
- i1 = np.amax(I) + 1
- j0 = np.amin(J)
- j1 = np.amax(J) + 1
- if (j1 == j0 + 1) or (i1 == i0 + 1):
- continue
- Nw = arr.shape[1] - 1
- Nh = arr.shape[0] - 1
- for level_idx, level in enumerate(self.levels):
- if (level < arr_min) or (level > arr_max):
- continue
- vp = arr[i0:i1, j0:j1] >= level
- bin_codes = vp[:-1, :-1] + vp[1:, :-1] * 2 + vp[1:, 1:] * 4 + vp[:-1, 1:] * 8
- mp = mask[i0:i1, j0:j1]
- bin_mask_codes = mp[:-1, :-1] + mp[1:, :-1] * 2 + mp[1:, 1:] * 4 + mp[:-1, 1:] * 8
- it = np.nditer(bin_codes, flags=["multi_index"])
- color_bgr = self.level_colors_bgr[level_idx]
- linewidth = self.linewidths[level_idx]
- while not it.finished:
- if (it[0] != 0) and (it[0] != 15):
- i, j = it.multi_index
- if bin_mask_codes[i, j] != 0:
- self._draw_line(
- image_bgr,
- arr,
- mask,
- level,
- color_bgr,
- linewidth,
- it[0],
- it.multi_index,
- bbox_xywh,
- Nw,
- Nh,
- (i0, j0),
- )
- it.iternext()
-
- def _draw_line(
- self,
- image_bgr,
- arr,
- mask,
- v,
- color_bgr,
- linewidth,
- bin_code,
- multi_idx,
- bbox_xywh,
- Nw,
- Nh,
- offset,
- ):
- lines = self._bin_code_2_lines(arr, v, bin_code, multi_idx, Nw, Nh, offset)
- x0, y0, w, h = bbox_xywh
- x1 = x0 + w
- y1 = y0 + h
- for line in lines:
- x0r, y0r = line[0]
- x1r, y1r = line[1]
- pt0 = (int(x0 + x0r * (x1 - x0)), int(y0 + y0r * (y1 - y0)))
- pt1 = (int(x0 + x1r * (x1 - x0)), int(y0 + y1r * (y1 - y0)))
- cv2.line(image_bgr, pt0, pt1, color_bgr, linewidth)
-
- def _bin_code_2_lines(self, arr, v, bin_code, multi_idx, Nw, Nh, offset):
- i0, j0 = offset
- i, j = multi_idx
- i += i0
- j += j0
- v0, v1, v2, v3 = arr[i, j], arr[i + 1, j], arr[i + 1, j + 1], arr[i, j + 1]
- x0i = float(j) / Nw
- y0j = float(i) / Nh
- He = 1.0 / Nh
- We = 1.0 / Nw
- if (bin_code == 1) or (bin_code == 14):
- a = (v - v0) / (v1 - v0)
- b = (v - v0) / (v3 - v0)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j)
- return [(pt1, pt2)]
- elif (bin_code == 2) or (bin_code == 13):
- a = (v - v0) / (v1 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 3) or (bin_code == 12):
- a = (v - v0) / (v3 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 4) or (bin_code == 11):
- a = (v - v1) / (v2 - v1)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j + He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 6) or (bin_code == 9):
- a = (v - v0) / (v1 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 7) or (bin_code == 8):
- a = (v - v0) / (v3 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif bin_code == 5:
- a1 = (v - v0) / (v1 - v0)
- b1 = (v - v1) / (v2 - v1)
- pt11 = (x0i, y0j + a1 * He)
- pt12 = (x0i + b1 * We, y0j + He)
- a2 = (v - v0) / (v3 - v0)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- elif bin_code == 10:
- a1 = (v - v0) / (v3 - v0)
- b1 = (v - v0) / (v1 - v0)
- pt11 = (x0i + a1 * We, y0j)
- pt12 = (x0i, y0j + b1 * He)
- a2 = (v - v1) / (v2 - v1)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j + He)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- return []
-
-
-try:
- import matplotlib
-
- matplotlib.use("Agg")
- DensePoseResultsContourVisualizer = DensePoseResultsMplContourVisualizer
-except ModuleNotFoundError:
- logger = logging.getLogger(__name__)
- logger.warning("Could not import matplotlib, using custom contour visualizer")
- DensePoseResultsContourVisualizer = DensePoseResultsCustomContourVisualizer
-
-
-class DensePoseResultsFineSegmentationVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsFineSegmentationVisualizer, self).__init__(
- _extract_i_from_iuvarr,
- _extract_i_from_iuvarr,
- inplace,
- cmap,
- alpha,
- val_scale=255.0 / DensePoseDataRelative.N_PART_LABELS,
- )
-
-
-class DensePoseResultsUVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsUVisualizer, self).__init__(
- _extract_u_from_iuvarr, _extract_i_from_iuvarr, inplace, cmap, alpha, val_scale=1.0
- )
-
-
-class DensePoseResultsVVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsVVisualizer, self).__init__(
- _extract_v_from_iuvarr, _extract_i_from_iuvarr, inplace, cmap, alpha, val_scale=1.0
- )
-
-
-class DensePoseOutputsFineSegmentationVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace,
- cmap=cmap,
- val_scale=255.0 / DensePoseDataRelative.N_PART_LABELS,
- alpha=alpha,
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(0), (
- "densepose outputs S {} and I {}"
- " should have equal first dim size".format(S.size(), I.size())
- )
- assert N == U.size(0), (
- "densepose outputs S {} and U {}"
- " should have equal first dim size".format(S.size(), U.size())
- )
- assert N == V.size(0), (
- "densepose outputs S {} and V {}"
- " should have equal first dim size".format(S.size(), V.size())
- )
- assert N == len(bboxes_xywh), (
- "number of bounding boxes {}"
- " should be equal to first dim size of outputs {}".format(len(bboxes_xywh), N)
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- matrix = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[matrix > 0] = 1
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh)
- return image_bgr
-
-
-class DensePoseOutputsUVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=1.0, alpha=alpha
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- assert isinstance(
- densepose_output, DensePoseOutput
- ), "DensePoseOutput expected, {} encountered".format(type(densepose_output))
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(0), (
- "densepose outputs S {} and I {}"
- " should have equal first dim size".format(S.size(), I.size())
- )
- assert N == U.size(0), (
- "densepose outputs S {} and U {}"
- " should have equal first dim size".format(S.size(), U.size())
- )
- assert N == V.size(0), (
- "densepose outputs S {} and V {}"
- " should have equal first dim size".format(S.size(), V.size())
- )
- assert N == len(bboxes_xywh), (
- "number of bounding boxes {}"
- " should be equal to first dim size of outputs {}".format(len(bboxes_xywh), N)
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- segmentation = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(segmentation.shape, dtype=np.uint8)
- mask[segmentation > 0] = 1
- Un = U[n].cpu().numpy().astype(np.float32)
- Uvis = np.zeros(segmentation.shape, dtype=np.float32)
- for partId in range(Un.shape[0]):
- Uvis[segmentation == partId] = Un[partId][segmentation == partId].clip(0, 1) * 255
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, Uvis, bbox_xywh)
- return image_bgr
-
-
-class DensePoseOutputsVVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=1.0, alpha=alpha
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- assert isinstance(
- densepose_output, DensePoseOutput
- ), "DensePoseOutput expected, {} encountered".format(type(densepose_output))
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(0), (
- "densepose outputs S {} and I {}"
- " should have equal first dim size".format(S.size(), I.size())
- )
- assert N == U.size(0), (
- "densepose outputs S {} and U {}"
- " should have equal first dim size".format(S.size(), U.size())
- )
- assert N == V.size(0), (
- "densepose outputs S {} and V {}"
- " should have equal first dim size".format(S.size(), V.size())
- )
- assert N == len(bboxes_xywh), (
- "number of bounding boxes {}"
- " should be equal to first dim size of outputs {}".format(len(bboxes_xywh), N)
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- segmentation = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(segmentation.shape, dtype=np.uint8)
- mask[segmentation > 0] = 1
- Vn = V[n].cpu().numpy().astype(np.float32)
- Vvis = np.zeros(segmentation.shape, dtype=np.float32)
- for partId in range(Vn.shape[0]):
- Vvis[segmentation == partId] = Vn[partId][segmentation == partId].clip(0, 1) * 255
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, Vvis, bbox_xywh)
- return image_bgr
-
-
-class DensePoseDataCoarseSegmentationVisualizer(object):
- """
- Visualizer for ground truth segmentation
- """
-
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace,
- cmap=cmap,
- val_scale=255.0 / DensePoseDataRelative.N_BODY_PARTS,
- alpha=alpha,
- )
-
- def visualize(
- self,
- image_bgr: Image,
- bbox_densepose_datas: Optional[Tuple[Iterable[Boxes], Iterable[DensePoseDataRelative]]],
- ) -> Image:
- if bbox_densepose_datas is None:
- return image_bgr
- for bbox_xywh, densepose_data in zip(*bbox_densepose_datas):
- matrix = densepose_data.segm.numpy()
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[matrix > 0] = 1
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh.numpy())
- return image_bgr
-
-
-class DensePoseDataPointsVisualizer(object):
- def __init__(self, densepose_data_to_value_fn=None, cmap=cv2.COLORMAP_PARULA):
- self.points_visualizer = PointsVisualizer()
- self.densepose_data_to_value_fn = densepose_data_to_value_fn
- self.cmap = cmap
-
- def visualize(
- self,
- image_bgr: Image,
- bbox_densepose_datas: Optional[Tuple[Iterable[Boxes], Iterable[DensePoseDataRelative]]],
- ) -> Image:
- if bbox_densepose_datas is None:
- return image_bgr
- for bbox_xywh, densepose_data in zip(*bbox_densepose_datas):
- x0, y0, w, h = bbox_xywh.numpy()
- x = densepose_data.x.numpy() * w / 255.0 + x0
- y = densepose_data.y.numpy() * h / 255.0 + y0
- pts_xy = zip(x, y)
- if self.densepose_data_to_value_fn is None:
- image_bgr = self.points_visualizer.visualize(image_bgr, pts_xy)
- else:
- v = self.densepose_data_to_value_fn(densepose_data)
- img_colors_bgr = cv2.applyColorMap(v, self.cmap)
- colors_bgr = [
- [int(v) for v in img_color_bgr.ravel()] for img_color_bgr in img_colors_bgr
- ]
- image_bgr = self.points_visualizer.visualize(image_bgr, pts_xy, colors_bgr)
- return image_bgr
-
-
-def _densepose_data_u_for_cmap(densepose_data):
- u = np.clip(densepose_data.u.numpy(), 0, 1) * 255.0
- return u.astype(np.uint8)
-
-
-def _densepose_data_v_for_cmap(densepose_data):
- v = np.clip(densepose_data.v.numpy(), 0, 1) * 255.0
- return v.astype(np.uint8)
-
-
-def _densepose_data_i_for_cmap(densepose_data):
- i = (
- np.clip(densepose_data.i.numpy(), 0.0, DensePoseDataRelative.N_PART_LABELS)
- * 255.0
- / DensePoseDataRelative.N_PART_LABELS
- )
- return i.astype(np.uint8)
-
-
-class DensePoseDataPointsUVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsUVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_u_for_cmap
- )
-
-
-class DensePoseDataPointsVVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsVVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_v_for_cmap
- )
-
-
-class DensePoseDataPointsIVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsIVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_i_for_cmap
- )
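The `MatrixVisualizer` these classes delegate to is defined elsewhere in the file; conceptually it colorizes a per-pixel value matrix and alpha-blends it into the bounding-box region of the image, only where the mask is set. A minimal NumPy-only sketch of that blend (the function name and the grayscale stand-in for `cv2.applyColorMap` are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def blend_matrix(image_bgr, mask, matrix, bbox_xywh, val_scale=1.0, alpha=0.7):
    """Alpha-blend a scaled value matrix into image_bgr inside bbox, where mask > 0.

    Stand-in for a colormap: the scaled value is broadcast to all 3 channels.
    """
    x, y, w, h = (int(v) for v in bbox_xywh)
    region = image_bgr[y:y + h, x:x + w].astype(np.float32)
    vals = np.clip(matrix.astype(np.float32) * val_scale, 0, 255)
    colored = np.repeat(vals[..., None], 3, axis=2)   # grayscale "colormap"
    m = (mask > 0)[..., None]                         # blend only where masked
    blended = np.where(m, (1 - alpha) * region + alpha * colored, region)
    image_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return image_bgr
```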
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/par.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/par.h
deleted file mode 100644
index fa88b2ccd80dbcc445d28786837dd616261dd413..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/par.h
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/omp/detail/execution_policy.h>
-#include <thrust/detail/allocator_aware_execution_policy.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace omp
-{
-namespace detail
-{
-
-
-struct par_t : thrust::system::omp::detail::execution_policy<par_t>,
- thrust::detail::allocator_aware_execution_policy<
- thrust::system::omp::detail::execution_policy>
-{
- __host__ __device__
- THRUST_CONSTEXPR par_t() : thrust::system::omp::detail::execution_policy<par_t>() {}
-};
-
-
-} // end detail
-
-
-static const detail::par_t par;
-
-
-} // end omp
-} // end system
-
-
-// alias par here
-namespace omp
-{
-
-
-using thrust::system::omp::par;
-
-
-} // end omp
-} // end thrust
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/postprocessing.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/postprocessing.py
deleted file mode 100644
index 1a3d287eeb6c2cb3070f1aa7157b006e9aa857f5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/postprocessing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-from torch.nn import functional as F
-
-from detectron2.structures import Instances, ROIMasks
-
-
-# perhaps should rename to "resize_instance"
-def detector_postprocess(
- results: Instances, output_height: int, output_width: int, mask_threshold: float = 0.5
-):
- """
- Resize the output instances.
- The input images are often resized when entering an object detector.
- As a result, we often need the outputs of the detector in a different
- resolution from its inputs.
-
- This function will resize the raw outputs of an R-CNN detector
- to produce outputs according to the desired output resolution.
-
- Args:
- results (Instances): the raw outputs from the detector.
- `results.image_size` contains the input image resolution the detector sees.
- This object might be modified in-place.
- output_height, output_width: the desired output resolution.
-
- Returns:
- Instances: the resized output from the model, based on the output resolution
- """
- # Change to 'if is_tracing' after PT1.7
- if isinstance(output_height, torch.Tensor):
- # Converts integer tensors to float temporaries to ensure true
- # division is performed when computing scale_x and scale_y.
- output_width_tmp = output_width.float()
- output_height_tmp = output_height.float()
- new_size = torch.stack([output_height, output_width])
- else:
- new_size = (output_height, output_width)
- output_width_tmp = output_width
- output_height_tmp = output_height
-
- scale_x, scale_y = (
- output_width_tmp / results.image_size[1],
- output_height_tmp / results.image_size[0],
- )
- results = Instances(new_size, **results.get_fields())
-
- if results.has("pred_boxes"):
- output_boxes = results.pred_boxes
- elif results.has("proposal_boxes"):
- output_boxes = results.proposal_boxes
- else:
- output_boxes = None
- assert output_boxes is not None, "Predictions must contain boxes!"
-
- output_boxes.scale(scale_x, scale_y)
- output_boxes.clip(results.image_size)
-
- results = results[output_boxes.nonempty()]
-
- if results.has("pred_masks"):
- if isinstance(results.pred_masks, ROIMasks):
- roi_masks = results.pred_masks
- else:
- # pred_masks is a tensor of shape (N, 1, M, M)
- roi_masks = ROIMasks(results.pred_masks[:, 0, :, :])
- results.pred_masks = roi_masks.to_bitmasks(
- results.pred_boxes, output_height, output_width, mask_threshold
- ).tensor # TODO return ROIMasks/BitMask object in the future
-
- if results.has("pred_keypoints"):
- results.pred_keypoints[:, :, 0] *= scale_x
- results.pred_keypoints[:, :, 1] *= scale_y
-
- return results
-
-
-def sem_seg_postprocess(result, img_size, output_height, output_width):
- """
- Return semantic segmentation predictions in the original resolution.
-
- The input images are often resized when entering the semantic segmentor. Moreover, in some
- cases, they are also padded inside the segmentor to be divisible by the maximum network stride.
- As a result, we often need the predictions of the segmentor in a different
- resolution from its inputs.
-
- Args:
- result (Tensor): semantic segmentation prediction logits. A tensor of shape (C, H, W),
- where C is the number of classes, and H, W are the height and width of the prediction.
- img_size (tuple): image size that segmentor is taking as input.
- output_height, output_width: the desired output resolution.
-
- Returns:
- semantic segmentation prediction (Tensor): A tensor of the shape
- (C, output_height, output_width) that contains per-pixel soft predictions.
- """
- result = result[:, : img_size[0], : img_size[1]].expand(1, -1, -1, -1)
- result = F.interpolate(
- result, size=(output_height, output_width), mode="bilinear", align_corners=False
- )[0]
- return result
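`detector_postprocess` above rescales boxes by `output_size / input_size` per axis and then clips them to the output image bounds. A library-free sketch of that arithmetic on plain xyxy tuples (the function name is illustrative; the real code operates on detectron2 `Boxes` in place):

```python
def rescale_and_clip_boxes(boxes_xyxy, input_hw, output_hw):
    """Scale xyxy boxes from the detector's input resolution to the
    desired output resolution, then clip to the output image bounds."""
    in_h, in_w = input_hw
    out_h, out_w = output_hw
    scale_x, scale_y = out_w / in_w, out_h / in_h
    out = []
    for x0, y0, x1, y1 in boxes_xyxy:
        x0, x1 = x0 * scale_x, x1 * scale_x
        y0, y1 = y0 * scale_y, y1 * scale_y
        out.append((
            min(max(x0, 0.0), out_w), min(max(y0, 0.0), out_h),
            min(max(x1, 0.0), out_w), min(max(y1, 0.0), out_h),
        ))
    return out
```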
diff --git a/spaces/Chomkwoy/Nilkessye/cpool_new/src/left_pool.cpp b/spaces/Chomkwoy/Nilkessye/cpool_new/src/left_pool.cpp
deleted file mode 100644
index 7b4dc1d2d57436ccf64be1467f0ec3defccececc..0000000000000000000000000000000000000000
--- a/spaces/Chomkwoy/Nilkessye/cpool_new/src/left_pool.cpp
+++ /dev/null
@@ -1,91 +0,0 @@
-// #include <torch/torch.h>
-#include <torch/extension.h>
-
-#include <vector>
-
-std::vector<torch::Tensor> pool_forward(
- torch::Tensor input
-) {
- // Initialize output
- torch::Tensor output = torch::zeros_like(input);
-
- // Get width
- int64_t width = input.size(3);
-
- // Copy the last column
- torch::Tensor input_temp = input.select(3, width - 1);
- torch::Tensor output_temp = output.select(3, width - 1);
- output_temp.copy_(input_temp);
-
- torch::Tensor max_temp;
- for (int64_t ind = 1; ind < width; ++ind) {
- input_temp = input.select(3, width - ind - 1);
- output_temp = output.select(3, width - ind);
- max_temp = output.select(3, width - ind - 1);
-
- torch::max_out(max_temp, input_temp, output_temp);
- }
-
- return {
- output
- };
-}
-
-std::vector<torch::Tensor> pool_backward(
- torch::Tensor input,
- torch::Tensor grad_output
-) {
- auto output = torch::zeros_like(input);
-
- int32_t batch = input.size(0);
- int32_t channel = input.size(1);
- int32_t height = input.size(2);
- int32_t width = input.size(3);
-
- // auto max_val = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, height});
- // auto max_ind = torch::zeros(torch::CUDA(torch::kLong), {batch, channel, height});
- auto max_val = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA));
- auto max_ind = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kLong).device(torch::kCUDA));
-
- auto input_temp = input.select(3, width - 1);
- max_val.copy_(input_temp);
-
- max_ind.fill_(width - 1);
-
- auto output_temp = output.select(3, width - 1);
- auto grad_output_temp = grad_output.select(3, width - 1);
- output_temp.copy_(grad_output_temp);
-
- auto un_max_ind = max_ind.unsqueeze(3);
- // auto gt_mask = torch::zeros(torch::CUDA(torch::kByte), {batch, channel, height});
- // auto max_temp = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, height});
- auto gt_mask = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kByte).device(torch::kCUDA));
- auto max_temp = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA));
-
- for (int32_t ind = 1; ind < width; ++ind) {
- input_temp = input.select(3, width - ind - 1);
- torch::gt_out(gt_mask, input_temp, max_val);
-
- torch::masked_select_out(max_temp, input_temp, gt_mask);
- max_val.masked_scatter_(gt_mask, max_temp);
- max_ind.masked_fill_(gt_mask, width - ind - 1);
-
- grad_output_temp = grad_output.select(3, width - ind - 1).unsqueeze(3);
- output.scatter_add_(3, un_max_ind, grad_output_temp);
- }
-
- return {
- output
- };
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def(
- "forward", &pool_forward, "Left Pool Forward",
- py::call_guard<py::gil_scoped_release>()
- );
- m.def(
- "backward", &pool_backward, "Left Pool Backward",
- py::call_guard<py::gil_scoped_release>()
- );
-}
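The forward pass above is a running maximum swept right-to-left: the last column is copied, then each column takes the elementwise max of itself and the already-pooled column to its right. The same recurrence on a single H×W plane, sketched in NumPy:

```python
import numpy as np

def left_pool(x):
    """Running max from right to left: out[:, j] == max over x[:, j:]."""
    out = x.copy()
    for j in range(x.shape[1] - 2, -1, -1):
        out[:, j] = np.maximum(x[:, j], out[:, j + 1])
    return out
```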
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Dfehub.py b/spaces/CofAI/chat/g4f/Provider/Providers/Dfehub.py
deleted file mode 100644
index 2f66f19b50b6b4ab79c012f123c47241141942eb..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Dfehub.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = "https://chat.dfehub.com"
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- headers = {
- 'Authority': 'chat.dfehub.com',
- 'Content-Type': 'application/json',
- 'Method': 'POST',
- 'Path': '/api/openai/v1/chat/completions',
- 'Scheme': 'https',
- 'Accept': 'text/event-stream',
- 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5',
- 'Origin': 'https://chat.dfehub.com',
- 'Referer': 'https://chat.dfehub.com/',
- 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'Sec-Ch-Ua-Mobile': '?0',
- 'Sec-Ch-Ua-Platform': '"Windows"',
- 'Sec-Fetch-Dest': 'empty',
- 'Sec-Fetch-Mode': 'cors',
- 'Sec-Fetch-Site': 'same-origin',
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- 'X-Requested-With': 'XMLHttpRequest',
- }
-
- data = {
- 'model': model,
- 'temperature': 0.7,
- 'max_tokens': '8000',
- 'presence_penalty': 0,
- 'messages': messages,
- }
-
- response = requests.post(url + '/api/openai/v1/chat/completions',
- headers=headers, json=data, stream=stream)
-
- yield response.json()['choices'][0]['message']['content']
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_routedef.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_routedef.py
deleted file mode 100644
index a1eb0a76549fbde5aa0c81f02b041b77bd91e0ad..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_routedef.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import abc
-import os # noqa
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Dict,
- Iterator,
- List,
- Optional,
- Sequence,
- Type,
- Union,
- overload,
-)
-
-import attr
-
-from . import hdrs
-from .abc import AbstractView
-from .typedefs import Handler, PathLike
-
-if TYPE_CHECKING: # pragma: no cover
- from .web_request import Request
- from .web_response import StreamResponse
- from .web_urldispatcher import AbstractRoute, UrlDispatcher
-else:
- Request = StreamResponse = UrlDispatcher = AbstractRoute = None
-
-
-__all__ = (
- "AbstractRouteDef",
- "RouteDef",
- "StaticDef",
- "RouteTableDef",
- "head",
- "options",
- "get",
- "post",
- "patch",
- "put",
- "delete",
- "route",
- "view",
- "static",
-)
-
-
-class AbstractRouteDef(abc.ABC):
- @abc.abstractmethod
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- pass # pragma: no cover
-
-
-_HandlerType = Union[Type[AbstractView], Handler]
-
-
-@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True)
-class RouteDef(AbstractRouteDef):
- method: str
- path: str
- handler: _HandlerType
- kwargs: Dict[str, Any]
-
- def __repr__(self) -> str:
- info = []
- for name, value in sorted(self.kwargs.items()):
- info.append(f", {name}={value!r}")
- return "<RouteDef {method} {path} -> {handler.__name__!r}" "{info}>".format(
- method=self.method, path=self.path, handler=self.handler, info="".join(info)
- )
-
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- if self.method in hdrs.METH_ALL:
- reg = getattr(router, "add_" + self.method.lower())
- return [reg(self.path, self.handler, **self.kwargs)]
- else:
- return [
- router.add_route(self.method, self.path, self.handler, **self.kwargs)
- ]
-
-
-@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True)
-class StaticDef(AbstractRouteDef):
- prefix: str
- path: PathLike
- kwargs: Dict[str, Any]
-
- def __repr__(self) -> str:
- info = []
- for name, value in sorted(self.kwargs.items()):
- info.append(f", {name}={value!r}")
- return "<StaticDef {prefix} -> {path}" "{info}>".format(
- prefix=self.prefix, path=self.path, info="".join(info)
- )
-
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- resource = router.add_static(self.prefix, self.path, **self.kwargs)
- routes = resource.get_info().get("routes", {})
- return list(routes.values())
-
-
-def route(method: str, path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return RouteDef(method, path, handler, kwargs)
-
-
-def head(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_HEAD, path, handler, **kwargs)
-
-
-def options(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_OPTIONS, path, handler, **kwargs)
-
-
-def get(
- path: str,
- handler: _HandlerType,
- *,
- name: Optional[str] = None,
- allow_head: bool = True,
- **kwargs: Any,
-) -> RouteDef:
- return route(
- hdrs.METH_GET, path, handler, name=name, allow_head=allow_head, **kwargs
- )
-
-
-def post(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_POST, path, handler, **kwargs)
-
-
-def put(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_PUT, path, handler, **kwargs)
-
-
-def patch(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_PATCH, path, handler, **kwargs)
-
-
-def delete(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_DELETE, path, handler, **kwargs)
-
-
-def view(path: str, handler: Type[AbstractView], **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_ANY, path, handler, **kwargs)
-
-
-def static(prefix: str, path: PathLike, **kwargs: Any) -> StaticDef:
- return StaticDef(prefix, path, kwargs)
-
-
-_Deco = Callable[[_HandlerType], _HandlerType]
-
-
-class RouteTableDef(Sequence[AbstractRouteDef]):
- """Route definition table"""
-
- def __init__(self) -> None:
- self._items: List[AbstractRouteDef] = []
-
- def __repr__(self) -> str:
- return f"<RouteTableDef count={len(self._items)}>"
-
- @overload
- def __getitem__(self, index: int) -> AbstractRouteDef:
- ...
-
- @overload
- def __getitem__(self, index: slice) -> List[AbstractRouteDef]:
- ...
-
- def __getitem__(self, index): # type: ignore[no-untyped-def]
- return self._items[index]
-
- def __iter__(self) -> Iterator[AbstractRouteDef]:
- return iter(self._items)
-
- def __len__(self) -> int:
- return len(self._items)
-
- def __contains__(self, item: object) -> bool:
- return item in self._items
-
- def route(self, method: str, path: str, **kwargs: Any) -> _Deco:
- def inner(handler: _HandlerType) -> _HandlerType:
- self._items.append(RouteDef(method, path, handler, kwargs))
- return handler
-
- return inner
-
- def head(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_HEAD, path, **kwargs)
-
- def get(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_GET, path, **kwargs)
-
- def post(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_POST, path, **kwargs)
-
- def put(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_PUT, path, **kwargs)
-
- def patch(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_PATCH, path, **kwargs)
-
- def delete(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_DELETE, path, **kwargs)
-
- def options(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_OPTIONS, path, **kwargs)
-
- def view(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_ANY, path, **kwargs)
-
- def static(self, prefix: str, path: PathLike, **kwargs: Any) -> None:
- self._items.append(StaticDef(prefix, path, kwargs))
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_compat.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_compat.py
deleted file mode 100644
index 9153d150ce67a708f920fcf9c606970fc061f816..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_compat.py
+++ /dev/null
@@ -1,623 +0,0 @@
-import codecs
-import io
-import os
-import re
-import sys
-import typing as t
-from weakref import WeakKeyDictionary
-
-CYGWIN = sys.platform.startswith("cygwin")
-WIN = sys.platform.startswith("win")
-auto_wrap_for_ansi: t.Optional[t.Callable[[t.TextIO], t.TextIO]] = None
-_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]")
-
-
-def _make_text_stream(
- stream: t.BinaryIO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
- force_writable: bool = False,
-) -> t.TextIO:
- if encoding is None:
- encoding = get_best_encoding(stream)
- if errors is None:
- errors = "replace"
- return _NonClosingTextIOWrapper(
- stream,
- encoding,
- errors,
- line_buffering=True,
- force_readable=force_readable,
- force_writable=force_writable,
- )
-
-
-def is_ascii_encoding(encoding: str) -> bool:
- """Checks if a given encoding is ascii."""
- try:
- return codecs.lookup(encoding).name == "ascii"
- except LookupError:
- return False
-
-
-def get_best_encoding(stream: t.IO[t.Any]) -> str:
- """Returns the default stream encoding if not found."""
- rv = getattr(stream, "encoding", None) or sys.getdefaultencoding()
- if is_ascii_encoding(rv):
- return "utf-8"
- return rv
-
-
-class _NonClosingTextIOWrapper(io.TextIOWrapper):
- def __init__(
- self,
- stream: t.BinaryIO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
- force_writable: bool = False,
- **extra: t.Any,
- ) -> None:
- self._stream = stream = t.cast(
- t.BinaryIO, _FixupStream(stream, force_readable, force_writable)
- )
- super().__init__(stream, encoding, errors, **extra)
-
- def __del__(self) -> None:
- try:
- self.detach()
- except Exception:
- pass
-
- def isatty(self) -> bool:
- # https://bitbucket.org/pypy/pypy/issue/1803
- return self._stream.isatty()
-
-
-class _FixupStream:
- """The new io interface needs more from streams than streams
- traditionally implement. As such, this fix-up code is necessary in
- some circumstances.
-
- The forcing of readable and writable flags are there because some tools
- put badly patched objects on sys (one such offender are certain version
- of jupyter notebook).
- """
-
- def __init__(
- self,
- stream: t.BinaryIO,
- force_readable: bool = False,
- force_writable: bool = False,
- ):
- self._stream = stream
- self._force_readable = force_readable
- self._force_writable = force_writable
-
- def __getattr__(self, name: str) -> t.Any:
- return getattr(self._stream, name)
-
- def read1(self, size: int) -> bytes:
- f = getattr(self._stream, "read1", None)
-
- if f is not None:
- return t.cast(bytes, f(size))
-
- return self._stream.read(size)
-
- def readable(self) -> bool:
- if self._force_readable:
- return True
- x = getattr(self._stream, "readable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.read(0)
- except Exception:
- return False
- return True
-
- def writable(self) -> bool:
- if self._force_writable:
- return True
- x = getattr(self._stream, "writable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.write("") # type: ignore
- except Exception:
- try:
- self._stream.write(b"")
- except Exception:
- return False
- return True
-
- def seekable(self) -> bool:
- x = getattr(self._stream, "seekable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.seek(self._stream.tell())
- except Exception:
- return False
- return True
-
-
-def _is_binary_reader(stream: t.IO[t.Any], default: bool = False) -> bool:
- try:
- return isinstance(stream.read(0), bytes)
- except Exception:
- return default
- # This happens in some cases where the stream was already
- # closed. In this case, we assume the default.
-
-
-def _is_binary_writer(stream: t.IO[t.Any], default: bool = False) -> bool:
- try:
- stream.write(b"")
- except Exception:
- try:
- stream.write("")
- return False
- except Exception:
- pass
- return default
- return True
-
-
-def _find_binary_reader(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]:
- # We need to figure out if the given stream is already binary.
- # This can happen because the official docs recommend detaching
- # the streams to get binary streams. Some code might do this, so
- # we need to deal with this case explicitly.
- if _is_binary_reader(stream, False):
- return t.cast(t.BinaryIO, stream)
-
- buf = getattr(stream, "buffer", None)
-
- # Same situation here; this time we assume that the buffer is
- # actually binary in case it's closed.
- if buf is not None and _is_binary_reader(buf, True):
- return t.cast(t.BinaryIO, buf)
-
- return None
-
-
-def _find_binary_writer(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]:
- # We need to figure out if the given stream is already binary.
- # This can happen because the official docs recommend detaching
- # the streams to get binary streams. Some code might do this, so
- # we need to deal with this case explicitly.
- if _is_binary_writer(stream, False):
- return t.cast(t.BinaryIO, stream)
-
- buf = getattr(stream, "buffer", None)
-
- # Same situation here; this time we assume that the buffer is
- # actually binary in case it's closed.
- if buf is not None and _is_binary_writer(buf, True):
- return t.cast(t.BinaryIO, buf)
-
- return None
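`_is_binary_reader`/`_is_binary_writer` probe a stream by performing a zero-length read or write and checking the result type, and the `_find_binary_*` helpers fall back to the stream's `.buffer` attribute. The read-side probe works the same way on stdlib streams:

```python
import io

def is_binary_reader(stream) -> bool:
    """A binary stream yields bytes from a zero-length read."""
    try:
        return isinstance(stream.read(0), bytes)
    except Exception:
        # The stream may already be closed; treat it as non-binary here.
        return False

def find_binary_reader(stream):
    """Return a binary view of `stream`: the stream itself if it is
    already binary, else its `.buffer` attribute if that is binary."""
    if is_binary_reader(stream):
        return stream
    buf = getattr(stream, "buffer", None)
    if buf is not None and is_binary_reader(buf):
        return buf
    return None
```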
-
-
-def _stream_is_misconfigured(stream: t.TextIO) -> bool:
- """A stream is misconfigured if its encoding is ASCII."""
- # If the stream does not have an encoding set, we assume it's set
- # to ASCII. This appears to happen in certain unittest
- # environments. It's not quite clear what the correct behavior is
- # but this at least will force Click to recover somehow.
- return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii")
-
-
-def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: t.Optional[str]) -> bool:
- """A stream attribute is compatible if it is equal to the
- desired value or the desired value is unset and the attribute
- has a value.
- """
- stream_value = getattr(stream, attr, None)
- return stream_value == value or (value is None and stream_value is not None)
-
-
-def _is_compatible_text_stream(
- stream: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
-) -> bool:
- """Check if a stream's encoding and errors attributes are
- compatible with the desired values.
- """
- return _is_compat_stream_attr(
- stream, "encoding", encoding
- ) and _is_compat_stream_attr(stream, "errors", errors)
-
-
-def _force_correct_text_stream(
- text_stream: t.IO[t.Any],
- encoding: t.Optional[str],
- errors: t.Optional[str],
- is_binary: t.Callable[[t.IO[t.Any], bool], bool],
- find_binary: t.Callable[[t.IO[t.Any]], t.Optional[t.BinaryIO]],
- force_readable: bool = False,
- force_writable: bool = False,
-) -> t.TextIO:
- if is_binary(text_stream, False):
- binary_reader = t.cast(t.BinaryIO, text_stream)
- else:
- text_stream = t.cast(t.TextIO, text_stream)
- # If the stream looks compatible, and won't default to a
- # misconfigured ascii encoding, return it as-is.
- if _is_compatible_text_stream(text_stream, encoding, errors) and not (
- encoding is None and _stream_is_misconfigured(text_stream)
- ):
- return text_stream
-
- # Otherwise, get the underlying binary reader.
- possible_binary_reader = find_binary(text_stream)
-
- # If that's not possible, silently use the original reader
- # and get mojibake instead of exceptions.
- if possible_binary_reader is None:
- return text_stream
-
- binary_reader = possible_binary_reader
-
- # Default errors to replace instead of strict in order to get
- # something that works.
- if errors is None:
- errors = "replace"
-
- # Wrap the binary stream in a text stream with the correct
- # encoding parameters.
- return _make_text_stream(
- binary_reader,
- encoding,
- errors,
- force_readable=force_readable,
- force_writable=force_writable,
- )
-
-
-def _force_correct_text_reader(
- text_reader: t.IO[t.Any],
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
-) -> t.TextIO:
- return _force_correct_text_stream(
- text_reader,
- encoding,
- errors,
- _is_binary_reader,
- _find_binary_reader,
- force_readable=force_readable,
- )
-
-
-def _force_correct_text_writer(
- text_writer: t.IO[t.Any],
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_writable: bool = False,
-) -> t.TextIO:
- return _force_correct_text_stream(
- text_writer,
- encoding,
- errors,
- _is_binary_writer,
- _find_binary_writer,
- force_writable=force_writable,
- )
-
-
-def get_binary_stdin() -> t.BinaryIO:
- reader = _find_binary_reader(sys.stdin)
- if reader is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stdin.")
- return reader
-
-
-def get_binary_stdout() -> t.BinaryIO:
- writer = _find_binary_writer(sys.stdout)
- if writer is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stdout.")
- return writer
-
-
-def get_binary_stderr() -> t.BinaryIO:
- writer = _find_binary_writer(sys.stderr)
- if writer is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stderr.")
- return writer
-
-
-def get_text_stdin(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stdin, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True)
-
-
-def get_text_stdout(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stdout, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True)
-
-
-def get_text_stderr(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stderr, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True)
-
-
-def _wrap_io_open(
- file: t.Union[str, "os.PathLike[str]", int],
- mode: str,
- encoding: t.Optional[str],
- errors: t.Optional[str],
-) -> t.IO[t.Any]:
- """Handles not passing ``encoding`` and ``errors`` in binary mode."""
- if "b" in mode:
- return open(file, mode)
-
- return open(file, mode, encoding=encoding, errors=errors)
-
-
-def open_stream(
- filename: "t.Union[str, os.PathLike[str]]",
- mode: str = "r",
- encoding: t.Optional[str] = None,
- errors: t.Optional[str] = "strict",
- atomic: bool = False,
-) -> t.Tuple[t.IO[t.Any], bool]:
- binary = "b" in mode
- filename = os.fspath(filename)
-
- # Standard streams first. These are simple because they ignore the
- # atomic flag. Use fsdecode to handle Path("-").
- if os.fsdecode(filename) == "-":
- if any(m in mode for m in ["w", "a", "x"]):
- if binary:
- return get_binary_stdout(), False
- return get_text_stdout(encoding=encoding, errors=errors), False
- if binary:
- return get_binary_stdin(), False
- return get_text_stdin(encoding=encoding, errors=errors), False
-
- # Non-atomic writes directly go out through the regular open functions.
- if not atomic:
- return _wrap_io_open(filename, mode, encoding, errors), True
-
- # Some usability stuff for atomic writes
- if "a" in mode:
- raise ValueError(
- "Appending to an existing file is not supported, because that"
- " would involve an expensive `copy`-operation to a temporary"
- " file. Open the file in normal `w`-mode and copy explicitly"
- " if that's what you're after."
- )
- if "x" in mode:
- raise ValueError("Use the `overwrite`-parameter instead.")
- if "w" not in mode:
- raise ValueError("Atomic writes only make sense with `w`-mode.")
-
- # Atomic writes are more complicated. They work by opening a file
- # as a proxy in the same folder and then using the fdopen
- # functionality to wrap it in a Python file. Then we wrap it in an
- # atomic file that moves the file over on close.
- import errno
- import random
-
- try:
- perm: t.Optional[int] = os.stat(filename).st_mode
- except OSError:
- perm = None
-
- flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
-
- if binary:
- flags |= getattr(os, "O_BINARY", 0)
-
- while True:
- tmp_filename = os.path.join(
- os.path.dirname(filename),
- f".__atomic-write{random.randrange(1 << 32):08x}",
- )
- try:
- fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm)
- break
- except OSError as e:
- if e.errno == errno.EEXIST or (
- os.name == "nt"
- and e.errno == errno.EACCES
- and os.path.isdir(e.filename)
- and os.access(e.filename, os.W_OK)
- ):
- continue
- raise
-
- if perm is not None:
- os.chmod(tmp_filename, perm) # in case perm includes bits in umask
-
- f = _wrap_io_open(fd, mode, encoding, errors)
- af = _AtomicFile(f, tmp_filename, os.path.realpath(filename))
- return t.cast(t.IO[t.Any], af), True
-
-
-class _AtomicFile:
- def __init__(self, f: t.IO[t.Any], tmp_filename: str, real_filename: str) -> None:
- self._f = f
- self._tmp_filename = tmp_filename
- self._real_filename = real_filename
- self.closed = False
-
- @property
- def name(self) -> str:
- return self._real_filename
-
- def close(self, delete: bool = False) -> None:
- if self.closed:
- return
- self._f.close()
- os.replace(self._tmp_filename, self._real_filename)
- self.closed = True
-
- def __getattr__(self, name: str) -> t.Any:
- return getattr(self._f, name)
-
- def __enter__(self) -> "_AtomicFile":
- return self
-
- def __exit__(self, exc_type: t.Optional[t.Type[BaseException]], *_: t.Any) -> None:
- self.close(delete=exc_type is not None)
-
- def __repr__(self) -> str:
- return repr(self._f)
-
-
-def strip_ansi(value: str) -> str:
- return _ansi_re.sub("", value)
-
-
-def _is_jupyter_kernel_output(stream: t.IO[t.Any]) -> bool:
- while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)):
- stream = stream._stream
-
- return stream.__class__.__module__.startswith("ipykernel.")
-
-
-def should_strip_ansi(
- stream: t.Optional[t.IO[t.Any]] = None, color: t.Optional[bool] = None
-) -> bool:
- if color is None:
- if stream is None:
- stream = sys.stdin
- return not isatty(stream) and not _is_jupyter_kernel_output(stream)
- return not color
-
-
-# On Windows, wrap the output streams with colorama to support ANSI
-# color codes.
-# NOTE: double check is needed so mypy does not analyze this on Linux
-if sys.platform.startswith("win") and WIN:
- from ._winconsole import _get_windows_console_stream
-
- def _get_argv_encoding() -> str:
- import locale
-
- return locale.getpreferredencoding()
-
- _ansi_stream_wrappers: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
-
- def auto_wrap_for_ansi(
- stream: t.TextIO, color: t.Optional[bool] = None
- ) -> t.TextIO:
- """Support ANSI color and style codes on Windows by wrapping a
- stream with colorama.
- """
- try:
- cached = _ansi_stream_wrappers.get(stream)
- except Exception:
- cached = None
-
- if cached is not None:
- return cached
-
- import colorama
-
- strip = should_strip_ansi(stream, color)
- ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip)
- rv = t.cast(t.TextIO, ansi_wrapper.stream)
- _write = rv.write
-
- def _safe_write(s):
- try:
- return _write(s)
- except BaseException:
- ansi_wrapper.reset_all()
- raise
-
- rv.write = _safe_write
-
- try:
- _ansi_stream_wrappers[stream] = rv
- except Exception:
- pass
-
- return rv
-
-else:
-
- def _get_argv_encoding() -> str:
- return getattr(sys.stdin, "encoding", None) or sys.getfilesystemencoding()
-
- def _get_windows_console_stream(
- f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
- ) -> t.Optional[t.TextIO]:
- return None
-
-
-def term_len(x: str) -> int:
- return len(strip_ansi(x))
-
-
-def isatty(stream: t.IO[t.Any]) -> bool:
- try:
- return stream.isatty()
- except Exception:
- return False
-
-
-def _make_cached_stream_func(
- src_func: t.Callable[[], t.Optional[t.TextIO]],
- wrapper_func: t.Callable[[], t.TextIO],
-) -> t.Callable[[], t.Optional[t.TextIO]]:
- cache: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
-
- def func() -> t.Optional[t.TextIO]:
- stream = src_func()
-
- if stream is None:
- return None
-
- try:
- rv = cache.get(stream)
- except Exception:
- rv = None
- if rv is not None:
- return rv
- rv = wrapper_func()
- try:
- cache[stream] = rv
- except Exception:
- pass
- return rv
-
- return func
-
-
-_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin)
-_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout)
-_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr)
-
-
-binary_streams: t.Mapping[str, t.Callable[[], t.BinaryIO]] = {
- "stdin": get_binary_stdin,
- "stdout": get_binary_stdout,
- "stderr": get_binary_stderr,
-}
-
-text_streams: t.Mapping[
- str, t.Callable[[t.Optional[str], t.Optional[str]], t.TextIO]
-] = {
- "stdin": get_text_stdin,
- "stdout": get_text_stdout,
- "stderr": get_text_stderr,
-}
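The atomic-write path in `open_stream` above (create a hidden temp file in the same directory, then let `_AtomicFile` move it over the target on close) can be sketched in isolation. `atomic_write_text` below is a hypothetical stdlib-only illustration of the same pattern, not part of Click:

```python
import os
import tempfile

def atomic_write_text(path: str, data: str) -> None:
    """Write *data* to *path* so readers never observe a partial file."""
    # Create the temp file in the same directory so os.replace stays on
    # one filesystem and is therefore an atomic rename.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".__atomic-write")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(data)
        os.replace(tmp_path, path)  # the "move over on close" step
    except BaseException:
        os.remove(tmp_path)
        raise

target = os.path.join(tempfile.gettempdir(), "atomic-demo.txt")
atomic_write_text(target, "hello")
with open(target, encoding="utf-8") as f:
    print(f.read())  # hello
```

Because the rename is atomic, a concurrent reader sees either the old file or the complete new one, never a truncated intermediate state.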
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9da94804.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9da94804.css
deleted file mode 100644
index 79d901421a55ea578fdaf2c50c84e8fafcea8c41..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9da94804.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-1gww5xe{display:flex;position:absolute;justify-content:center;align-items:center;border-radius:var(--radius-sm);background-color:#000c;padding:var(--size-1) .4rem;color:#fff;font-size:var(--text-sm)}span.svelte-1gww5xe{display:inline-block;margin-right:var(--size-1);border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}.wrap.svelte-1mjxput{margin-top:var(--size-3)}.legend.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.legend-item.svelte-1mjxput{display:flex;align-items:center;gap:var(--spacing-sm);margin-right:var(--size-2);margin-left:var(--size-2)}.legend-box.svelte-1mjxput{display:inline-block;border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}svg.svelte-1mjxput{width:var(--size-full)}.label-text.svelte-1mjxput{fill:var(--body-text-color);font-size:var(--text-sm);font-family:var(--font-mono)}.main-label.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.chart.svelte-etmurc{display:flex;display:relative;justify-content:center;align-items:center;background:var(--background-fill-primary);width:var(--size-full);height:var(--size-64)}
diff --git a/spaces/DYSHITELGOOGLA/app/README.md b/spaces/DYSHITELGOOGLA/app/README.md
deleted file mode 100644
index 9d1ed22e3d6c7c9ca24370eca52fc70b49a8acbd..0000000000000000000000000000000000000000
--- a/spaces/DYSHITELGOOGLA/app/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: App
-emoji: 📚
-colorFrom: red
-colorTo: green
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DaleChen/AutoGPT/autogpt/speech/brian.py b/spaces/DaleChen/AutoGPT/autogpt/speech/brian.py
deleted file mode 100644
index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/speech/brian.py
+++ /dev/null
@@ -1,40 +0,0 @@
-""" Brian speech module for autogpt """
-import os
-
-import requests
-from playsound import playsound
-
-from autogpt.speech.base import VoiceBase
-
-
-class BrianSpeech(VoiceBase):
- """Brian speech module for autogpt"""
-
- def _setup(self) -> None:
- """Set up the voices, API key, etc."""
- pass
-
- def _speech(self, text: str, _: int = 0) -> bool:
- """Speak text using Brian with the streamelements API
-
- Args:
- text (str): The text to speak
-
- Returns:
- bool: True if the request was successful, False otherwise
- """
- tts_url = (
- f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}"
- )
- response = requests.get(tts_url)
-
- if response.status_code == 200:
- with open("speech.mp3", "wb") as f:
- f.write(response.content)
- playsound("speech.mp3")
- os.remove("speech.mp3")
- return True
- else:
- print("Request failed with status code:", response.status_code)
- print("Response content:", response.content)
- return False
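One fragility in `_speech` above is that `text` is interpolated into the query string unescaped, so spaces, `&`, or `?` in the spoken text would corrupt the request. A hedged sketch of building the same URL safely with the stdlib (the endpoint and parameter names are taken from the code above, not verified against the API):

```python
from urllib.parse import urlencode

STREAMELEMENTS_TTS = "https://api.streamelements.com/kappa/v2/speech"

def build_tts_url(text: str, voice: str = "Brian") -> str:
    # urlencode percent-escapes spaces, '&', '?', and non-ASCII characters,
    # which the raw f-string interpolation above does not.
    return f"{STREAMELEMENTS_TTS}?{urlencode({'voice': voice, 'text': text})}"

print(build_tts_url("Hello, world & friends"))
# → https://api.streamelements.com/kappa/v2/speech?voice=Brian&text=Hello%2C+world+%26+friends
```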
diff --git a/spaces/DaweiZ/toy-gpt/app.py b/spaces/DaweiZ/toy-gpt/app.py
deleted file mode 100644
index 5a4cd94c7f6407eaeb80d6dd05711fbe48df40da..0000000000000000000000000000000000000000
--- a/spaces/DaweiZ/toy-gpt/app.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import os
-import chainlit as cl
-from langchain.llms import OpenAI
-
-# OPENAI_API_KEY is stored as a secret in the Hugging Face settings.
-# This is the way to retrieve it at runtime.
-OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
-
-# when the user starts a chat, this will be called
-@cl.on_chat_start
-async def start():
- # Your logic will be here
- # content = "The function start() is called when the user starts a chat because of the decorator @cl.on_chat_start"
- # await cl.Message(content=content).send()
-
- # ask the user for their OpenAI API key
- # OPENAI_API_KEY = await cl.AskUserMessage(
- # content="Please enter your OpenAI API key", timeout=100
- # ).send()['content']
-
- # Chainlit will automatically load environment variables from a .env file in the root of the project
- # so you can just get the API key using cl.user_session.get("OPENAI_API_KEY")
- # OPENAI_API_KEY = cl.user_session.get("OPENAI_API_KEY")
-
-
- # define the model and save it in the user session so that it can be used later
- llm = OpenAI(
- model_name="gpt-3.5-turbo",
- temperature=0,
- max_tokens=2000,
- openai_api_key=OPENAI_API_KEY,
- )
- cl.user_session.set(key="llm", value=llm)
-
-
-# runs continuously in a loop
-# The @on_message decorator tells Chainlit to run the main function each time a user sends a message. Then, we send the answer back to the UI with the Message class.
-@cl.on_message
-async def main(message: str):
- # Your logic will be here
- llm = cl.user_session.get("llm")
- result = llm(message)
- # send a response back to the user all the time
- await cl.Message(content=f"The answer from gpt-3.5-turbo: \n{result}").send()
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/azure.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/azure.py
deleted file mode 100644
index 42cddfbda8cc74e40e114ee4bed46a2f9ff74ce9..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/azure.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from langchain.chat_models import AzureChatOpenAI
-import os
-
-from .base_model import Base_Chat_Langchain_Client
-
-# load_config_to_environ(["azure_openai_api_key", "azure_api_base_url", "azure_openai_api_version", "azure_deployment_name"])
-
-class Azure_OpenAI_Client(Base_Chat_Langchain_Client):
- def setup_model(self):
- # implement this to set up the model, then return it
- return AzureChatOpenAI(
- openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"],
- openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
- deployment_name=os.environ["AZURE_DEPLOYMENT_NAME"],
- openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- openai_api_type="azure",
- )
\ No newline at end of file
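`setup_model` above reads four `os.environ[...]` keys directly, so a missing variable surfaces as a bare `KeyError` naming only the first absent key. A small, hypothetical validation helper (stdlib only, not part of the module above) that reports every missing variable at once:

```python
import os

REQUIRED_KEYS = [
    "AZURE_OPENAI_API_BASE_URL",
    "AZURE_OPENAI_API_VERSION",
    "AZURE_DEPLOYMENT_NAME",
    "AZURE_OPENAI_API_KEY",
]

def read_azure_config(environ=os.environ) -> dict:
    # Collect all missing keys first so the error names every one of them,
    # instead of failing on the first os.environ[...] lookup.
    missing = [k for k in REQUIRED_KEYS if k not in environ]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: environ[k] for k in REQUIRED_KEYS}

fake_env = {k: "example" for k in REQUIRED_KEYS}
print(read_azure_config(fake_env)["AZURE_DEPLOYMENT_NAME"])  # example
```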
diff --git a/spaces/ECCV2022/bytetrack/yolox/core/launch.py b/spaces/ECCV2022/bytetrack/yolox/core/launch.py
deleted file mode 100644
index 2fd5eaa765d7da2193f16f0fc463d001f6c4d5c5..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/core/launch.py
+++ /dev/null
@@ -1,219 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Code are based on
-# https://github.com/facebookresearch/detectron2/blob/master/detectron2/engine/launch.py
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-from loguru import logger
-
-import torch
-import torch.distributed as dist
-import torch.multiprocessing as mp
-
-import yolox.utils.dist as comm
-from yolox.utils import configure_nccl
-
-import os
-import subprocess
-import sys
-import time
-
-__all__ = ["launch"]
-
-
-def _find_free_port():
- """
- Find an available port on the current machine / node.
- """
- import socket
-
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- # Binding to port 0 will cause the OS to find an available port for us
- sock.bind(("", 0))
- port = sock.getsockname()[1]
- sock.close()
- # NOTE: there is still a chance the port could be taken by other processes.
- return port
-
-
-def launch(
- main_func,
- num_gpus_per_machine,
- num_machines=1,
- machine_rank=0,
- backend="nccl",
- dist_url=None,
- args=(),
-):
- """
- Args:
- main_func: a function that will be called by `main_func(*args)`
- num_machines (int): the total number of machines
- machine_rank (int): the rank of this machine (one per machine)
- dist_url (str): url to connect to for distributed training, including protocol
- e.g. "tcp://127.0.0.1:8686".
- Can be set to auto to automatically select a free port on localhost
- args (tuple): arguments passed to main_func
- """
- world_size = num_machines * num_gpus_per_machine
- if world_size > 1:
- if int(os.environ.get("WORLD_SIZE", "1")) > 1:
- dist_url = "{}:{}".format(
- os.environ.get("MASTER_ADDR", None),
- os.environ.get("MASTER_PORT", "None"),
- )
- local_rank = int(os.environ.get("LOCAL_RANK", "0"))
- world_size = int(os.environ.get("WORLD_SIZE", "1"))
- _distributed_worker(
- local_rank,
- main_func,
- world_size,
- num_gpus_per_machine,
- num_machines,
- machine_rank,
- backend,
- dist_url,
- args,
- )
- exit()
- launch_by_subprocess(
- sys.argv,
- world_size,
- num_machines,
- machine_rank,
- num_gpus_per_machine,
- dist_url,
- args,
- )
- else:
- main_func(*args)
-
-
-def launch_by_subprocess(
- raw_argv,
- world_size,
- num_machines,
- machine_rank,
- num_gpus_per_machine,
- dist_url,
- args,
-):
- assert (
- world_size > 1
- ), "subprocess mode doesn't support single GPU, use spawn mode instead"
-
- if dist_url is None:
- # ------------------------hack for multi-machine training -------------------- #
- if num_machines > 1:
- master_ip = subprocess.check_output(["hostname", "--fqdn"]).decode("utf-8")
- master_ip = str(master_ip).strip()
- dist_url = "tcp://{}".format(master_ip)
- ip_add_file = "./" + args[1].experiment_name + "_ip_add.txt"
- if machine_rank == 0:
- port = _find_free_port()
- with open(ip_add_file, "w") as ip_add:
- ip_add.write(dist_url+'\n')
- ip_add.write(str(port))
- else:
- while not os.path.exists(ip_add_file):
- time.sleep(0.5)
-
- with open(ip_add_file, "r") as ip_add:
- dist_url = ip_add.readline().strip()
- port = ip_add.readline()
- else:
- dist_url = "tcp://127.0.0.1"
- port = _find_free_port()
-
- # set PyTorch distributed related environmental variables
- current_env = os.environ.copy()
- current_env["MASTER_ADDR"] = dist_url
- current_env["MASTER_PORT"] = str(port)
- current_env["WORLD_SIZE"] = str(world_size)
- assert num_gpus_per_machine <= torch.cuda.device_count()
-
- if "OMP_NUM_THREADS" not in os.environ and num_gpus_per_machine > 1:
- current_env["OMP_NUM_THREADS"] = str(1)
- logger.info(
- "\n*****************************************\n"
- "Setting OMP_NUM_THREADS environment variable for each process "
- "to {} by default, to avoid overloading your system; "
- "please tune the variable further for optimal performance "
- "in your application as needed. \n"
- "*****************************************".format(
- current_env["OMP_NUM_THREADS"]
- )
- )
-
- processes = []
- for local_rank in range(0, num_gpus_per_machine):
- # each process's rank
- dist_rank = machine_rank * num_gpus_per_machine + local_rank
- current_env["RANK"] = str(dist_rank)
- current_env["LOCAL_RANK"] = str(local_rank)
-
- # spawn the processes
- cmd = ["python3", *raw_argv]
-
- process = subprocess.Popen(cmd, env=current_env)
- processes.append(process)
-
- for process in processes:
- process.wait()
- if process.returncode != 0:
- raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
-
-
-def _distributed_worker(
- local_rank,
- main_func,
- world_size,
- num_gpus_per_machine,
- num_machines,
- machine_rank,
- backend,
- dist_url,
- args,
-):
- assert (
- torch.cuda.is_available()
- ), "cuda is not available. Please check your installation."
- configure_nccl()
- global_rank = machine_rank * num_gpus_per_machine + local_rank
- logger.info("Rank {} initialization finished.".format(global_rank))
- try:
- dist.init_process_group(
- backend=backend,
- init_method=dist_url,
- world_size=world_size,
- rank=global_rank,
- )
- except Exception:
- logger.error("Process group URL: {}".format(dist_url))
- raise
- # synchronize is needed here to prevent a possible timeout after calling init_process_group
- # See: https://github.com/facebookresearch/maskrcnn-benchmark/issues/172
- comm.synchronize()
-
- if global_rank == 0 and os.path.exists(
- "./" + args[1].experiment_name + "_ip_add.txt"
- ):
- os.remove("./" + args[1].experiment_name + "_ip_add.txt")
-
- assert num_gpus_per_machine <= torch.cuda.device_count()
- torch.cuda.set_device(local_rank)
-
- args[1].local_rank = local_rank
- args[1].num_machines = num_machines
-
- # Setup the local process group (which contains ranks within the same machine)
- # assert comm._LOCAL_PROCESS_GROUP is None
- # num_machines = world_size // num_gpus_per_machine
- # for i in range(num_machines):
- # ranks_on_i = list(range(i * num_gpus_per_machine, (i + 1) * num_gpus_per_machine))
- # pg = dist.new_group(ranks_on_i)
- # if i == machine_rank:
- # comm._LOCAL_PROCESS_GROUP = pg
-
- main_func(*args)
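The rank bookkeeping above rests on one formula, `dist_rank = machine_rank * num_gpus_per_machine + local_rank`, which must assign every process a unique global rank. A quick standalone check of that mapping:

```python
def global_rank(machine_rank: int, num_gpus_per_machine: int, local_rank: int) -> int:
    # Same arithmetic as in launch_by_subprocess / _distributed_worker above.
    return machine_rank * num_gpus_per_machine + local_rank

# 2 machines x 4 GPUs per machine -> world_size 8, ranks 0..7, no gaps, no overlaps.
ranks = [global_rank(m, 4, l) for m in range(2) for l in range(4)]
print(ranks)  # [0, 1, 2, 3, 4, 5, 6, 7]
```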
diff --git a/spaces/Epitech/Scarecrow/original_app/README.md b/spaces/Epitech/Scarecrow/original_app/README.md
deleted file mode 100644
index 4861b4d1d183821f5f6d0d5cd44003f475cf9728..0000000000000000000000000000000000000000
--- a/spaces/Epitech/Scarecrow/original_app/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-This is the base scarecrow application, with the back-end and the scarecrow communication set up.
-
-Inside the Hugging Face application, as a demo, there is only the back-end part with some visualisation set up.
-
-
-- a.mp3 -> predator sound for human
-- b.mp3 -> predator sound for cell_phone
-- coco.names -> labels for yolo to use
- scarecrow.py -> the application that collects video and sends the stream to the back-end
- backend.py -> the application which runs the model to detect animals
- yolov3.cfg & yolov3.weights -> can't be included inside Hugging Face as binaries
\ No newline at end of file
diff --git a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_jittorllms.py b/spaces/Fengbinbin/gpt-academic/request_llm/bridge_jittorllms.py
deleted file mode 100644
index 28d0a7aab745cca4a1cdaded3c4803319000b5f0..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_jittorllms.py
+++ /dev/null
@@ -1,153 +0,0 @@
-
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf
-from multiprocessing import Process, Pipe
-
-load_message = "jittorllms has not been loaded yet; loading takes a while. Note that, depending on the `config.py` configuration, jittorllms consumes a lot of memory (CPU) or VRAM (GPU), which may freeze a low-end machine ..."
-
-#################################################################################
-class GetGLMHandle(Process):
- def __init__(self):
- super().__init__(daemon=True)
- self.parent, self.child = Pipe()
- self.jittorllms_model = None
- self.info = ""
- self.success = True
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def check_dependency(self):
- try:
- import jittor
- from .jittorllms.models import get_model
- self.info = "Dependency check passed"
- self.success = True
- except:
- self.info = r"Missing jittorllms dependencies. Besides the basic pip requirements, to use jittorllms you also need to run `pip install -r request_llm/requirements_jittorllms.txt`"+\
- r" and `git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms` (run both commands from the project root) to install its dependencies."
- self.success = False
-
- def ready(self):
- return self.jittorllms_model is not None
-
- def run(self):
- # executed in the child process
- # first run: load the model parameters
- def load_model():
- import types
- try:
- if self.jittorllms_model is None:
- device, = get_conf('LOCAL_MODEL_DEVICE')
- from .jittorllms.models import get_model
- # available_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
- args_dict = {'model': 'chatglm', 'RUN_DEVICE':'cpu'}
- self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))
- except:
- self.child.send('[Local Message] Call jittorllms fail: could not load the jittorllms parameters.')
- raise RuntimeError("Could not load the jittorllms parameters!")
-
- load_model()
-
- # enter the task-waiting state
- while True:
- # wait for the next task
- kwargs = self.child.recv()
- # message received, start the request
- try:
- for response, history in self.jittorllms_model.run_web_demo(kwargs['query'], kwargs['history']):
- self.child.send(response)
- except:
- self.child.send('[Local Message] Call jittorllms fail.')
- # request finished, start the next loop iteration
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
- # executed in the main process
- self.threadLock.acquire()
- self.parent.send(kwargs)
- while True:
- res = self.parent.recv()
- if res != '[Finish]':
- yield res
- else:
- break
- self.threadLock.release()
-
-global glm_handle
-glm_handle = None
-#################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
- """
- Multi-threaded method.
- See request_llm/bridge_all.py for this function's documentation.
- """
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info
- if not glm_handle.success:
- error = glm_handle.info
- glm_handle = None
- raise RuntimeError(error)
-
- # jittorllms has no sys_prompt interface, so the prompt is appended to the history
- history_feedin = []
- history_feedin.append(["What can I do?", sys_prompt])
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- response = ""
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- if len(observe_window) >= 1: observe_window[0] = response
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("Program terminated.")
- return response
-
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- Single-threaded method.
- See request_llm/bridge_all.py for this function's documentation.
- """
- chatbot.append((inputs, ""))
-
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not glm_handle.success:
- glm_handle = None
- return
-
- if additional_fn is not None:
- import core_functional
- importlib.reload(core_functional) # hot-reload the prompt
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # apply the preprocessing function (if any)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- # process the chat history
- history_feedin = []
- history_feedin.append(["What can I do?", system_prompt] )
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- # start receiving the jittorllms reply
- response = "[Local Message]: Waiting for the jittorllms response ..."
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, response)
- yield from update_ui(chatbot=chatbot, history=history)
-
- # finalize the output
- if response == "[Local Message]: Waiting for the jittorllms response ...":
- response = "[Local Message]: jittorllms did not respond normally ..."
- history.extend([inputs, response])
- yield from update_ui(chatbot=chatbot, history=history)
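`GetGLMHandle` above streams partial replies over a `Pipe` and terminates each request with a `'[Finish]'` sentinel. That protocol can be sketched without the model; in this minimal stand-in a thread plays the role of the child process:

```python
import threading
from multiprocessing import Pipe

def worker(conn, chunks):
    # Mirrors GetGLMHandle.run: send partial responses, then the sentinel.
    for chunk in chunks:
        conn.send(chunk)
    conn.send("[Finish]")

def stream_chat(conn):
    # Mirrors GetGLMHandle.stream_chat: yield until the sentinel arrives.
    while True:
        res = conn.recv()
        if res == "[Finish]":
            break
        yield res

parent, child = Pipe()
t = threading.Thread(target=worker, args=(child, ["Hel", "Hello", "Hello!"]))
t.start()
print(list(stream_chat(parent)))  # ['Hel', 'Hello', 'Hello!']
t.join()
```

Each yielded chunk is a progressively longer partial answer, which is why the UI loop above simply overwrites `chatbot[-1]` with the latest one.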
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/inference/infer_tool_grad.py b/spaces/FrankZxShen/so-vits-svc-models-ba/inference/infer_tool_grad.py
deleted file mode 100644
index f2587d98209abcbdd6d199ca3142dcbc69a87428..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-# from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def resize2d_f0(x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
- source)
- res = np.nan_to_num(target)
- return res
-
-
-def get_f0(x, p_len, f0_up_key=0):
-
- time_step = 160 / 16000 * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size = (p_len - len(f0) + 1) // 2
- if(pad_size > 0 or p_len - len(f0) - pad_size > 0):
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode='constant')
-
- f0 *= pow(2, f0_up_key / 12)
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * \
- 254 / (f0_mel_max - f0_mel_min) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(int)  # np.int was removed in NumPy 1.24; use the builtin int
- return f0_coarse, f0
-
-
-def clean_pitch(input_pitch):
- num_nan = np.sum(input_pitch == 1)
- if num_nan / len(input_pitch) > 0.9:
- input_pitch[input_pitch != 1] = 1
- return input_pitch
-
-
-def plt_pitch(input_pitch):
- input_pitch = input_pitch.astype(float)
- input_pitch[input_pitch == 1] = np.nan
- return input_pitch
-
-
-def f0_to_pitch(ff):
- f0_pitch = 69 + 12 * np.log2(ff / 440)
- return f0_pitch
-
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-class VitsSvc(object):
- def __init__(self):
- self.device = torch.device(
- "cuda" if torch.cuda.is_available() else "cpu")
- self.SVCVITS = None
- self.hps = None
- self.speakers = None
- self.hubert_soft = utils.get_hubert_model()
-
- def set_device(self, device):
- self.device = torch.device(device)
- self.hubert_soft.to(self.device)
- if self.SVCVITS is not None:
- self.SVCVITS.to(self.device)
-
- def loadCheckpoint(self, path):
- self.hps = utils.get_hparams_from_file(
- f"checkpoints/{path}/config.json")
- self.SVCVITS = SynthesizerTrn(
- self.hps.data.filter_length // 2 + 1,
- self.hps.train.segment_size // self.hps.data.hop_length,
- **self.hps.model)
- _ = utils.load_checkpoint(
- f"checkpoints/{path}/model.pth", self.SVCVITS, None)
- _ = self.SVCVITS.eval().to(self.device)
- self.speakers = self.hps.spk
-
- def get_units(self, source, sr):
- source = source.unsqueeze(0).to(self.device)
- with torch.inference_mode():
- units = self.hubert_soft.units(source)
- return units
-
- def get_unit_pitch(self, in_path, tran):
- source, sr = torchaudio.load(in_path)
- source = torchaudio.functional.resample(source, sr, 16000)
- if len(source.shape) == 2 and source.shape[1] >= 2:
- source = torch.mean(source, dim=0).unsqueeze(0)
- soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
- f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
- return soft, f0
-
- def infer(self, speaker_id, tran, raw_path):
- speaker_id = self.speakers[speaker_id]
- sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
- soft, pitch = self.get_unit_pitch(raw_path, tran)
- f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
- stn_tst = torch.FloatTensor(soft)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0).to(self.device)
- x_tst = torch.repeat_interleave(
- x_tst, repeats=2, dim=1).transpose(1, 2)
- audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0][0, 0].data.float()
- return audio, audio.shape[-1]
-
- def inference(self, srcaudio, chara, tran, slice_db):
- sampling_rate, audio = srcaudio
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(
- audio, orig_sr=sampling_rate, target_sr=16000)
- soundfile.write("tmpwav.wav", audio, 16000, format="wav")
- chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
- audio = []
- for (slice_tag, data) in audio_data:
- length = int(np.ceil(len(data) / audio_sr *
- self.hps.data.sampling_rate))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = self.infer(chara, tran, raw_path)
- _audio = out_audio.cpu().numpy()
- audio.extend(list(_audio))
- audio = (np.array(audio) * 32768.0).astype('int16')
- return (self.hps.data.sampling_rate, audio)
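As a standalone illustration of the pitch helpers deleted above, this sketch re-implements `f0_to_pitch` (the standard frequency-to-MIDI conversion, with A4 = 440 Hz mapping to note 69) and `clean_pitch` (which discards a contour when more than 90% of its frames are unvoiced); the function names are taken from the deleted module, the behavior shown is inferred from its code:

```python
import numpy as np

def f0_to_pitch(ff):
    # Standard frequency-to-MIDI mapping: A4 (440 Hz) -> note 69,
    # and each octave adds 12 semitones.
    return 69 + 12 * np.log2(ff / 440)

def clean_pitch(input_pitch):
    # Unvoiced frames are encoded as 1; if more than 90% of the
    # contour is unvoiced, flatten the whole contour to 1.
    num_unvoiced = np.sum(input_pitch == 1)
    if num_unvoiced / len(input_pitch) > 0.9:
        input_pitch[input_pitch != 1] = 1
    return input_pitch

print(f0_to_pitch(440.0))  # 69.0
print(f0_to_pitch(880.0))  # 81.0 (one octave up)
contour = np.array([1.0] * 95 + [220.0] * 5)
print(clean_pitch(contour).max())  # 1.0 (contour dropped as unreliable)
```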
diff --git a/spaces/Frederick/Clause_Segmentation_and_Classification/README.md b/spaces/Frederick/Clause_Segmentation_and_Classification/README.md
deleted file mode 100644
index c5d29e479f280c070b485e6f1fc4f4b26dbe36de..0000000000000000000000000000000000000000
--- a/spaces/Frederick/Clause_Segmentation_and_Classification/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Clause Segmentation And Classification
-emoji: 🌍
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Froleptan/lambdalabs-dreambooth-avatar/README.md b/spaces/Froleptan/lambdalabs-dreambooth-avatar/README.md
deleted file mode 100644
index e5c7cea6cb0a6ec41de3aeb19b819d589960d457..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/lambdalabs-dreambooth-avatar/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Lambdalabs Dreambooth Avatar
-emoji: 🚀
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GIZ/SDSN-demo/utils/uploadAndExample.py b/spaces/GIZ/SDSN-demo/utils/uploadAndExample.py
deleted file mode 100644
index a185e6fc4c32c3dd564341df1037512e2f173df1..0000000000000000000000000000000000000000
--- a/spaces/GIZ/SDSN-demo/utils/uploadAndExample.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import streamlit as st
-import tempfile
-import json
-
-def add_upload(choice):
- """
- Provides the user with the choice to either 'Upload Document' or 'Try Example'.
- Based on the user's choice, runs the Streamlit widgets and saves the path and
- name of the file to Streamlit session_state, from which they can be fetched later.
-
- """
-
- if choice == 'Upload Document':
- uploaded_file = st.sidebar.file_uploader('Upload the File',
- type=['pdf', 'docx', 'txt'])
- if uploaded_file is not None:
- with tempfile.NamedTemporaryFile(mode="wb", delete=False) as temp:
- bytes_data = uploaded_file.getvalue()
- temp.write(bytes_data)
- st.session_state['filename'] = uploaded_file.name
- st.session_state['filepath'] = temp.name
-
-
- else:
- # listing the options
- with open('docStore/sample/files.json','r') as json_file:
- files = json.load(json_file)
-
- option = st.sidebar.selectbox('Select the example document',
- list(files.keys()))
- file_name = file_path = files[option]
- st.session_state['filename'] = file_name
- st.session_state['filepath'] = file_path
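The `add_upload` helper above relies on one detail that is easy to miss: `NamedTemporaryFile(delete=False)` keeps the file on disk after the handle closes, so the stored `temp.name` remains usable later in the session. A minimal sketch of that pattern without Streamlit (the helper name is illustrative):

```python
import os
import tempfile

def save_bytes_to_temp(data: bytes) -> str:
    # delete=False keeps the file on disk after the context manager
    # closes the handle, so the returned path can be reopened later.
    with tempfile.NamedTemporaryFile(mode="wb", delete=False) as temp:
        temp.write(data)
        return temp.name

path = save_bytes_to_temp(b"hello")
with open(path, "rb") as f:
    print(f.read())  # b'hello'
os.remove(path)  # with delete=False, cleanup is the caller's job
```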
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py
deleted file mode 100644
index 905651d1f1d7cd956147111bba6d427e59ce1895..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py
+++ /dev/null
@@ -1,19 +0,0 @@
-_base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py']
-model = dict(
- pretrained='torchvision://resnet34',
- backbone=dict(
- type='ResNet',
- depth=34,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[64, 128, 256, 512],
- out_channels=256,
- start_level=1,
- add_extra_convs='on_output',
- num_outs=5))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/README.md
deleted file mode 100644
index be46e329b6b602f2f6fe77eb1af161b072c92534..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/README.md
+++ /dev/null
@@ -1,75 +0,0 @@
-# Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
-
-## Introduction
-
-
-
-```latex
-@inproceedings{deeplabv3plus2018,
- title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
- author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
- booktitle={ECCV},
- year={2018}
-}
-```
-
-## Results and models
-
-Note:
-`D-8`/`D-16` here correspond to the output stride 8/16 setting for the DeepLab series.
-`MG-124` stands for multi-grid dilation in the last stage of ResNet.
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | --------------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| DeepLabV3+ | R-50-D8 | 512x1024 | 40000 | 7.5 | 3.94 | 79.61 | 81.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes/deeplabv3plus_r50-d8_512x1024_40k_cityscapes_20200605_094610-d222ffcd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes/deeplabv3plus_r50-d8_512x1024_40k_cityscapes_20200605_094610.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x1024 | 40000 | 11 | 2.60 | 80.21 | 81.82 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes/deeplabv3plus_r101-d8_512x1024_40k_cityscapes_20200605_094614-3769eecf.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes/deeplabv3plus_r101-d8_512x1024_40k_cityscapes_20200605_094614.log.json) |
-| DeepLabV3+ | R-50-D8 | 769x769 | 40000 | 8.5 | 1.72 | 78.97 | 80.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes/deeplabv3plus_r50-d8_769x769_40k_cityscapes_20200606_114143-1dcb0e3c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes/deeplabv3plus_r50-d8_769x769_40k_cityscapes_20200606_114143.log.json) |
-| DeepLabV3+ | R-101-D8 | 769x769 | 40000 | 12.5 | 1.15 | 79.46 | 80.50 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes/deeplabv3plus_r101-d8_769x769_40k_cityscapes_20200606_114304-ff414b9e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes/deeplabv3plus_r101-d8_769x769_40k_cityscapes_20200606_114304.log.json) |
-| DeepLabV3+ | R-18-D8 | 512x1024 | 80000 | 2.2 | 14.27 | 76.89 | 78.76 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes/deeplabv3plus_r18-d8_512x1024_80k_cityscapes_20201226_080942-cff257fe.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes/deeplabv3plus_r18-d8_512x1024_80k_cityscapes-20201226_080942.log.json) |
-| DeepLabV3+ | R-50-D8 | 512x1024 | 80000 | - | - | 80.09 | 81.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes/deeplabv3plus_r50-d8_512x1024_80k_cityscapes_20200606_114049-f9fb496d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes/deeplabv3plus_r50-d8_512x1024_80k_cityscapes_20200606_114049.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x1024 | 80000 | - | - | 80.97 | 82.03 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes/deeplabv3plus_r101-d8_512x1024_80k_cityscapes_20200606_114143-068fcfe9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes/deeplabv3plus_r101-d8_512x1024_80k_cityscapes_20200606_114143.log.json) |
-| DeepLabV3+ | R-18-D8 | 769x769 | 80000 | 2.5 | 5.74 | 76.26 | 77.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes/deeplabv3plus_r18-d8_769x769_80k_cityscapes_20201226_083346-f326e06a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes/deeplabv3plus_r18-d8_769x769_80k_cityscapes-20201226_083346.log.json) |
-| DeepLabV3+ | R-50-D8 | 769x769 | 80000 | - | - | 79.83 | 81.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes/deeplabv3plus_r50-d8_769x769_80k_cityscapes_20200606_210233-0e9dfdc4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes/deeplabv3plus_r50-d8_769x769_80k_cityscapes_20200606_210233.log.json) |
-| DeepLabV3+ | R-101-D8 | 769x769 | 80000 | - | - | 80.98 | 82.18 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes/deeplabv3plus_r101-d8_769x769_80k_cityscapes_20200607_000405-a7573d20.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes/deeplabv3plus_r101-d8_769x769_80k_cityscapes_20200607_000405.log.json) |
-| DeepLabV3+ | R-101-D16-MG124 | 512x1024 | 40000 | 5.8 | 7.48 | 79.09 | 80.36 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes_20200908_005644-cf9ce186.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes-20200908_005644.log.json) |
-| DeepLabV3+ | R-101-D16-MG124 | 512x1024 | 80000 | 9.9 | - | 79.90 | 81.33 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes_20200908_005644-ee6158e0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes-20200908_005644.log.json) |
-| DeepLabV3+ | R-18b-D8 | 512x1024 | 80000 | 2.1 | 14.95 | 75.87 | 77.52 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes_20201226_090828-e451abd9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes-20201226_090828.log.json) |
-| DeepLabV3+ | R-50b-D8 | 512x1024 | 80000 | 7.4 | 3.94 | 80.28 | 81.44 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes_20201225_213645-a97e4e43.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes-20201225_213645.log.json) |
-| DeepLabV3+ | R-101b-D8 | 512x1024 | 80000 | 10.9 | 2.60 | 80.16 | 81.41 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes_20201226_190843-9c3c93a4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes-20201226_190843.log.json) |
-| DeepLabV3+ | R-18b-D8 | 769x769 | 80000 | 2.4 | 5.96 | 76.36 | 78.24 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes/deeplabv3plus_r18b-d8_769x769_80k_cityscapes_20201226_151312-2c868aff.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes/deeplabv3plus_r18b-d8_769x769_80k_cityscapes-20201226_151312.log.json) |
-| DeepLabV3+ | R-50b-D8 | 769x769 | 80000 | 8.4 | 1.72 | 79.41 | 80.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes/deeplabv3plus_r50b-d8_769x769_80k_cityscapes_20201225_224655-8b596d1c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes/deeplabv3plus_r50b-d8_769x769_80k_cityscapes-20201225_224655.log.json) |
-| DeepLabV3+ | R-101b-D8 | 769x769 | 80000 | 12.3 | 1.10 | 79.88 | 81.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes/deeplabv3plus_r101b-d8_769x769_80k_cityscapes_20201226_205041-227cdf7c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes/deeplabv3plus_r101b-d8_769x769_80k_cityscapes-20201226_205041.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| DeepLabV3+ | R-50-D8 | 512x512 | 80000 | 10.6 | 21.01 | 42.72 | 43.75 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k/deeplabv3plus_r50-d8_512x512_80k_ade20k_20200614_185028-bf1400d8.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k/deeplabv3plus_r50-d8_512x512_80k_ade20k_20200614_185028.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x512 | 80000 | 14.1 | 14.16 | 44.60 | 46.06 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k/deeplabv3plus_r101-d8_512x512_80k_ade20k_20200615_014139-d5730af7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k/deeplabv3plus_r101-d8_512x512_80k_ade20k_20200615_014139.log.json) |
-| DeepLabV3+ | R-50-D8 | 512x512 | 160000 | - | - | 43.95 | 44.93 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k/deeplabv3plus_r50-d8_512x512_160k_ade20k_20200615_124504-6135c7e0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k/deeplabv3plus_r50-d8_512x512_160k_ade20k_20200615_124504.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x512 | 160000 | - | - | 45.47 | 46.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k/deeplabv3plus_r101-d8_512x512_160k_ade20k_20200615_123232-38ed86bb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k/deeplabv3plus_r101-d8_512x512_160k_ade20k_20200615_123232.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DeepLabV3+ | R-50-D8 | 512x512 | 20000 | 7.6 | 21 | 75.93 | 77.50 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug/deeplabv3plus_r50-d8_512x512_20k_voc12aug_20200617_102323-aad58ef1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug/deeplabv3plus_r50-d8_512x512_20k_voc12aug_20200617_102323.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x512 | 20000 | 11 | 13.88 | 77.22 | 78.59 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug/deeplabv3plus_r101-d8_512x512_20k_voc12aug_20200617_102345-c7ff3d56.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug/deeplabv3plus_r101-d8_512x512_20k_voc12aug_20200617_102345.log.json) |
-| DeepLabV3+ | R-50-D8 | 512x512 | 40000 | - | - | 76.81 | 77.57 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug/deeplabv3plus_r50-d8_512x512_40k_voc12aug_20200613_161759-e1b43aa9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug/deeplabv3plus_r50-d8_512x512_40k_voc12aug_20200613_161759.log.json) |
-| DeepLabV3+ | R-101-D8 | 512x512 | 40000 | - | - | 78.62 | 79.53 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug/deeplabv3plus_r101-d8_512x512_40k_voc12aug_20200613_205333-faf03387.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug/deeplabv3plus_r101-d8_512x512_40k_voc12aug_20200613_205333.log.json) |
-
-### Pascal Context
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DeepLabV3+ | R-101-D8 | 480x480 | 40000 | - | 9.09 | 47.30 | 48.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context/deeplabv3plus_r101-d8_480x480_40k_pascal_context_20200911_165459-d3c8a29e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context/deeplabv3plus_r101-d8_480x480_40k_pascal_context-20200911_165459.log.json) |
-| DeepLabV3+ | R-101-D8 | 480x480 | 80000 | - | - | 47.23 | 48.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context/deeplabv3plus_r101-d8_480x480_80k_pascal_context_20200911_155322-145d3ee8.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context/deeplabv3plus_r101-d8_480x480_80k_pascal_context-20200911_155322.log.json) |
-
-### Pascal Context 59
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DeepLabV3+ | R-101-D8 | 480x480 | 40000 | - | - | 52.86 | 54.54 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59_20210416_111233-ed937f15.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59-20210416_111233.log.json) |
-| DeepLabV3+ | R-101-D8 | 480x480 | 80000 | - | - | 53.2 | 54.67 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59_20210416_111127-7ca0331d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59-20210416_111127.log.json) |
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 5dd34dd2134c745275c66adc5488b4b9f68d6809..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
- pretrained='torchvision://resnet18',
- backbone=dict(type='ResNet', depth=18),
- decode_head=dict(
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 990a085eda2f2dc47f1a1289bfbf2726ad8c9c4f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/README.md b/spaces/Grezz/generate_human_motion/VQ-Trans/README.md
deleted file mode 100644
index 547a1d4b52a5c76f0f86c641557f99d0688c0ffd..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/README.md
+++ /dev/null
@@ -1,400 +0,0 @@
-# Motion VQ-Trans
-PyTorch implementation of the paper "Generating Human Motion from Textual Descriptions with High Quality Discrete Representation"
-
-
-[[Notebook Demo]](https://colab.research.google.com/drive/1tAHlmcpKcjg_zZrqKku7AfpqdVAIFrF8?usp=sharing)
-
-
-
-
-If our project is helpful for your research, please consider citing: (todo)
-```
-@inproceedings{shen2020ransac,
- title={RANSAC-Flow: generic two-stage image alignment},
- author={Shen, Xi and Darmon, Fran{\c{c}}ois and Efros, Alexei A and Aubry, Mathieu},
- booktitle={16th European Conference on Computer Vision},
- year={2020}
- }
-```
-
-
-## Table of Content
-* [1. Visual Results](#1-visual-results)
-* [2. Installation](#2-installation)
-* [3. Quick Start](#3-quick-start)
-* [4. Train](#4-train)
-* [5. Evaluation](#5-evaluation)
-* [6. Motion Render](#6-motion-render)
-* [7. Acknowledgement](#7-acknowledgement)
-* [8. ChangeLog](#8-changlog)
-
-
-
-
-## 1. Visual Results (More results can be found on our project page (todo))
-
-
-
-
-## 2. Installation
-
-### 2.1. Environment
-
-
-
-Our model can be trained on a **single V100-32G GPU**.
-
-```bash
-conda env create -f environment.yml
-conda activate VQTrans
-```
-
-The code was tested on Python 3.8 and PyTorch 1.8.1.
-
-
-### 2.2. Dependencies
-
-```bash
-bash dataset/prepare/download_glove.sh
-```
-
-
-### 2.3. Datasets
-
-
-We use two 3D human motion-language datasets: HumanML3D and KIT-ML. For both datasets, you can find the details as well as the download links [[here]](https://github.com/EricGuo5513/HumanML3D).
-
-Take HumanML3D as an example; the file directory should look like this:
-```
-./dataset/HumanML3D/
-├── new_joint_vecs/
-├── texts/
-├── Mean.npy # same as in [HumanML3D](https://github.com/EricGuo5513/HumanML3D)
-├── Std.npy # same as in [HumanML3D](https://github.com/EricGuo5513/HumanML3D)
-├── train.txt
-├── val.txt
-├── test.txt
-├── train_val.txt
-└── all.txt
-```
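A small pre-flight check can confirm that a dataset folder matches the tree above before training starts; the function name and the list of required entries below are illustrative, taken directly from the layout shown:

```python
import os
import tempfile

# Entries taken from the HumanML3D directory tree shown above.
REQUIRED = [
    "new_joint_vecs", "texts", "Mean.npy", "Std.npy",
    "train.txt", "val.txt", "test.txt", "train_val.txt", "all.txt",
]

def missing_entries(root: str) -> list:
    """Return the expected dataset entries absent under `root`."""
    return [name for name in REQUIRED
            if not os.path.exists(os.path.join(root, name))]

# An empty directory is missing every entry:
empty = tempfile.mkdtemp()
print(missing_entries(empty) == REQUIRED)  # True
```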
-
-
-### 2.4. Motion & text feature extractors:
-
-We use the same extractors provided by [t2m](https://github.com/EricGuo5513/text-to-motion) to evaluate our generated motions. Please download the extractors.
-
-```bash
-bash dataset/prepare/download_extractor.sh
-```
-
-### 2.5. Pre-trained models
-
-The pretrained model files will be stored in the `pretrained` folder:
-```bash
-bash dataset/prepare/download_model.sh
-```
-
-
-
-### 2.6. Render motion (optional)
-
-If you want to render the generated motion, you need to install:
-
-```bash
-sudo sh dataset/prepare/download_smpl.sh
-conda install -c menpo osmesa
-conda install h5py
-conda install -c conda-forge shapely pyrender trimesh mapbox_earcut
-```
-
-
-
-## 3. Quick Start
-
-A quick-start guide for our code is available in [demo.ipynb](https://colab.research.google.com/drive/1tAHlmcpKcjg_zZrqKku7AfpqdVAIFrF8?usp=sharing).
-
-
-
-
-
-
-## 4. Train
-
-Note that for the KIT-ML dataset, you only need to set `--dataname kit`.
-
-### 4.1. VQ-VAE
-
-The results are saved in the folder `output`, under the experiment name (`output/VQVAE` for the command below).
-
-
-
-**VQ training**
-
-
-```bash
-python3 train_vq.py \
---batch-size 256 \
---lr 2e-4 \
---total-iter 300000 \
---lr-scheduler 200000 \
---nb-code 512 \
---down-t 2 \
---depth 3 \
---dilation-growth-rate 3 \
---out-dir output \
---dataname t2m \
---vq-act relu \
---quantizer ema_reset \
---loss-vel 0.5 \
---recons-loss l1_smooth \
---exp-name VQVAE
-```
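For intuition, `--nb-code 512` sets the size of the VQ-VAE codebook: each encoder output is snapped to its nearest codebook entry. Below is a minimal NumPy sketch of that nearest-neighbour lookup, illustrative only (the quantizer trained here additionally uses EMA codebook updates via `--quantizer ema_reset`; the toy dimension 8 is an arbitrary choice):

```python
import numpy as np

def quantize(z, codebook):
    """Map each row of z (N, D) to its nearest codebook entry (K, D).
    Returns the code indices and the quantized vectors."""
    # Squared Euclidean distance between every latent and every code.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 8))   # --nb-code 512, toy latent dim 8
z = rng.normal(size=(4, 8))
idx, z_q = quantize(z, codebook)
print(idx.shape, z_q.shape)  # (4,) (4, 8)
```

The transformer in the next stage is then trained on the integer code indices `idx` rather than on raw motion features.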
-
-
-
-### 4.2. Motion-Transformer
-
-The results are saved in the folder `output`, under the experiment name (`output/VQTransformer` for the command below).
-
-
-
-**MoTrans training**
-
-
-```bash
-python3 train_t2m_trans.py \
---exp-name VQTransformer \
---batch-size 128 \
---num-layers 9 \
---embed-dim-gpt 1024 \
---nb-code 512 \
---n-head-gpt 16 \
---block-size 51 \
---ff-rate 4 \
---drop-out-rate 0.1 \
---resume-pth output/VQVAE/net_last.pth \
---vq-name VQVAE \
---out-dir output \
---total-iter 300000 \
---lr-scheduler 150000 \
---lr 0.0001 \
---dataname t2m \
---down-t 2 \
---depth 3 \
---quantizer ema_reset \
---eval-iter 10000 \
---pkeep 0.5 \
---dilation-growth-rate 3 \
---vq-act relu
-```
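One less obvious flag above is `--pkeep 0.5`. In GPT-style training on VQ codes, this kind of parameter typically controls input corruption: each ground-truth code is kept with probability `pkeep` and otherwise replaced by a random code, which regularises the transformer. The sketch below is our reading of the flag, not the repo's exact code:

```python
import numpy as np

def corrupt_tokens(tokens, pkeep, nb_code, rng):
    """Keep each token with probability pkeep; replace the rest with random codes."""
    keep = rng.random(tokens.shape) < pkeep
    noise = rng.integers(0, nb_code, size=tokens.shape)
    return np.where(keep, tokens, noise)

rng = np.random.default_rng(0)
seq = rng.integers(0, 512, size=20)  # a toy sequence of VQ code indices
print(corrupt_tokens(seq, pkeep=0.5, nb_code=512, rng=rng))
```

With `pkeep=1.0` the sequence passes through unchanged; lower values trade clean supervision for robustness to imperfect code sequences at sampling time.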
-
-
-
-## 5. Evaluation
-
-### 5.1. VQ-VAE
-
-
-**VQ eval**
-
-
-```bash
-python3 VQ_eval.py \
---batch-size 256 \
---lr 2e-4 \
---total-iter 300000 \
---lr-scheduler 200000 \
---nb-code 512 \
---down-t 2 \
---depth 3 \
---dilation-growth-rate 3 \
---out-dir output \
---dataname t2m \
---vq-act relu \
---quantizer ema_reset \
---loss-vel 0.5 \
---recons-loss l1_smooth \
---exp-name TEST_VQVAE \
---resume-pth output/VQVAE/net_last.pth
-```
-
-
-
-### 5.2. Motion-Transformer
-
-
-
-**MoTrans eval**
-
-
-```bash
-python3 GPT_eval_multi.py \
---exp-name TEST_VQTransformer \
---batch-size 128 \
---num-layers 9 \
---embed-dim-gpt 1024 \
---nb-code 512 \
---n-head-gpt 16 \
---block-size 51 \
---ff-rate 4 \
---drop-out-rate 0.1 \
---resume-pth output/VQVAE/net_last.pth \
---vq-name VQVAE \
---out-dir output \
---total-iter 300000 \
---lr-scheduler 150000 \
---lr 0.0001 \
---dataname t2m \
---down-t 2 \
---depth 3 \
---quantizer ema_reset \
---eval-iter 10000 \
---pkeep 0.5 \
---dilation-growth-rate 3 \
---vq-act relu \
---resume-gpt output/VQTransformer/net_best_fid.pth
-```
-
-
-
-
-## 6. Motion Render
-
-
-
-**Motion Render**
-
-
-You should provide the path to the folder containing the generated `.npy` files and the motion names. Here is an example:
-
-```bash
-python3 render_final.py --filedir output/TEST_VQTransformer/ --motion-list 000019 005485
-```
-
-
-
-## 7. Acknowledgement
-
-We appreciate help from:
-
-* Public code such as [text-to-motion](https://github.com/EricGuo5513/text-to-motion), [TM2T](https://github.com/EricGuo5513/TM2T), etc.
-
-## 8. ChangeLog
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/Hallucinate/demo/taming/modules/transformer/permuter.py b/spaces/Hallucinate/demo/taming/modules/transformer/permuter.py
deleted file mode 100644
index 0d43bb135adde38d94bf18a7e5edaa4523cd95cf..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/taming/modules/transformer/permuter.py
+++ /dev/null
@@ -1,248 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-
-
-class AbstractPermuter(nn.Module):
- def __init__(self, *args, **kwargs):
- super().__init__()
- def forward(self, x, reverse=False):
- raise NotImplementedError
-
-
-class Identity(AbstractPermuter):
- def __init__(self):
- super().__init__()
-
- def forward(self, x, reverse=False):
- return x
-
-
-class Subsample(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- C = 1
- indices = np.arange(H*W).reshape(C,H,W)
- while min(H, W) > 1:
- indices = indices.reshape(C,H//2,2,W//2,2)
- indices = indices.transpose(0,2,4,1,3)
- indices = indices.reshape(C*4,H//2, W//2)
- H = H//2
- W = W//2
- C = C*4
- assert H == W == 1
- idx = torch.tensor(indices.ravel())
- self.register_buffer('forward_shuffle_idx',
- nn.Parameter(idx, requires_grad=False))
- self.register_buffer('backward_shuffle_idx',
- nn.Parameter(torch.argsort(idx), requires_grad=False))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-def mortonify(i, j):
- """(i,j) index to linear morton code"""
- i = np.uint64(i)
- j = np.uint64(j)
-
-    z = np.uint64(0)
-
- for pos in range(32):
- z = (z |
- ((j & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos)) |
- ((i & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos+1))
- )
- return z
-
-
-class ZCurve(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- reverseidx = [np.int64(mortonify(i,j)) for i in range(H) for j in range(W)]
- idx = np.argsort(reverseidx)
- idx = torch.tensor(idx)
- reverseidx = torch.tensor(reverseidx)
- self.register_buffer('forward_shuffle_idx',
- idx)
- self.register_buffer('backward_shuffle_idx',
- reverseidx)
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class SpiralOut(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- assert H == W
- size = W
- indices = np.arange(size*size).reshape(size,size)
-
- i0 = size//2
- j0 = size//2-1
-
- i = i0
- j = j0
-
- idx = [indices[i0, j0]]
- step_mult = 0
- for c in range(1, size//2+1):
- step_mult += 1
- # steps left
- for k in range(step_mult):
- i = i - 1
- j = j
- idx.append(indices[i, j])
-
- # step down
- for k in range(step_mult):
- i = i
- j = j + 1
- idx.append(indices[i, j])
-
- step_mult += 1
- if c < size//2:
- # step right
- for k in range(step_mult):
- i = i + 1
- j = j
- idx.append(indices[i, j])
-
- # step up
- for k in range(step_mult):
- i = i
- j = j - 1
- idx.append(indices[i, j])
- else:
- # end reached
- for k in range(step_mult-1):
- i = i + 1
- idx.append(indices[i, j])
-
- assert len(idx) == size*size
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class SpiralIn(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- assert H == W
- size = W
- indices = np.arange(size*size).reshape(size,size)
-
- i0 = size//2
- j0 = size//2-1
-
- i = i0
- j = j0
-
- idx = [indices[i0, j0]]
- step_mult = 0
- for c in range(1, size//2+1):
- step_mult += 1
- # steps left
- for k in range(step_mult):
- i = i - 1
- j = j
- idx.append(indices[i, j])
-
- # step down
- for k in range(step_mult):
- i = i
- j = j + 1
- idx.append(indices[i, j])
-
- step_mult += 1
- if c < size//2:
- # step right
- for k in range(step_mult):
- i = i + 1
- j = j
- idx.append(indices[i, j])
-
- # step up
- for k in range(step_mult):
- i = i
- j = j - 1
- idx.append(indices[i, j])
- else:
- # end reached
- for k in range(step_mult-1):
- i = i + 1
- idx.append(indices[i, j])
-
- assert len(idx) == size*size
- idx = idx[::-1]
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class Random(nn.Module):
- def __init__(self, H, W):
- super().__init__()
- indices = np.random.RandomState(1).permutation(H*W)
- idx = torch.tensor(indices.ravel())
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class AlternateParsing(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- indices = np.arange(W*H).reshape(H,W)
- for i in range(1, H, 2):
- indices[i, :] = indices[i, ::-1]
- idx = indices.flatten()
- assert len(idx) == H*W
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-if __name__ == "__main__":
- p0 = AlternateParsing(16, 16)
- print(p0.forward_shuffle_idx)
- print(p0.backward_shuffle_idx)
-
- x = torch.randint(0, 768, size=(11, 256))
- y = p0(x)
- xre = p0(y, reverse=True)
- assert torch.equal(x, xre)
-
- p1 = SpiralOut(2, 2)
- print(p1.forward_shuffle_idx)
- print(p1.backward_shuffle_idx)
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/randeng_pegasus_523M_summary.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/randeng_pegasus_523M_summary.sh
deleted file mode 100644
index 10f6d29a6acd1fe70117d0f1b8d33ce58cdb1384..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/randeng_pegasus_523M_summary.sh
+++ /dev/null
@@ -1,143 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=randeng_pegasus_523M_summary
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=30
-#SBATCH -o %x-%j.log
-
-set -x -e
-
-echo "START TIME: $(date)"
-MODEL_NAME=randeng_pegasus_523M_summary_last
-MICRO_BATCH_SIZE=128
-ROOT_DIR=/cognitive_comp/dongxiaoqun/finetune/${MODEL_NAME}
-
-if [ ! -d ${ROOT_DIR} ];then
- mkdir ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-output_save_path=$ROOT_DIR/${MODEL_NAME}.json
-if [ -f ${output_save_path} ];then
- echo ${output_save_path} exist, rm it!!!!!!!!!!!!!!!!!
- rm ${output_save_path}
-fi
-
-ZERO_STAGE=1
-
-config_json="${ROOT_DIR}/ds_config.${MODEL_NAME}.json"
-
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
- "steps_per_print": 1000,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "contiguous_gradients": false,
- "overlap_comm": true,
- "reduce_scatter": true,
- "reduce_bucket_size": 50000000,
- "allgather_bucket_size": 500000000
- },
- "optimizer": {
- "type": "Adam",
- "params": {
- "lr": 5e-5,
- "betas": [
- 0.9,
- 0.999
- ],
- "eps": 1e-8,
- "weight_decay": 1e-2
- }
- },
- "scheduler": {
- "params": {
- "warmup_min_lr": 1e-8,
- "warmup_max_lr": 1e-4,
- "total_num_steps": 60000,
- "warmup_num_steps" : 1000
- },
- "type": "WarmupDecayLR"
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/dongxiaoqun/torch_extendsions
-# export MASTER_PORT=$[RANDOM%10000+50000]
-#
-# --strategy deepspeed_stage_${ZERO_STAGE} \
-TRAINER_ARGS="
- --max_epochs 10 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy deepspeed_stage_${ZERO_STAGE} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 3 \
- --monitor val_loss \
- --mode min \
- --save_last \
- --every_n_train_steps 10000 \
- --val_check_interval 0.1 \
-"
-prompt='"'
-DATA_ARGS="
- --datasets_name lcsts \
- --num_workers 30 \
- --train_batchsize $MICRO_BATCH_SIZE \
- --val_batchsize $MICRO_BATCH_SIZE \
- --test_batchsize $MICRO_BATCH_SIZE \
- --max_enc_length 128 \
- --max_dec_length 64 \
- --val_datasets_field val \
- --prompt $prompt \
-"
-
-# --prompt $prompt \
-# --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_77M_summary/ckpt/hf_pretrained_epoch1_step75019 \
-
-# mode_path="/cognitive_comp/dongxiaoqun/train_model/fengshen-pegasus-base/ckpt/hf_pretrained_epoch0_step22200/"
-mode_path="/cognitive_comp/dongxiaoqun/train_model/fengshen-pegasus-large/ckpt/hf_pretrained_epoch0_step122000"
-cp /cognitive_comp/dongxiaoqun/pretrained_model/pegasus-large/vocab.txt $mode_path/
-
-MODEL_ARGS="
- --pretrained_model_path $mode_path \
- --output_save_path $output_save_path \
- --self_tokenizer \
-"
-
-SCRIPTS_PATH=/cognitive_comp/dongxiaoqun/debug/Fengshenbang-LM/fengshen/examples/summary/seq2seq_summary.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-
-source activate
-conda activate torchnew
-srun --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --cpus-per-task=30 -o ${MODEL_NAME}-%J.log --jobid=229555 bash -c 'python3 $CMD'
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_dataset.py
deleted file mode 100644
index 2f051754af55966e26850e94c121e0ff439bfd28..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_dataset.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numpy as np
-from fairseq.data import FairseqDataset
-
-
-class DummyDataset(FairseqDataset):
- def __init__(self, batch, num_items, item_size):
- super().__init__()
- self.batch = batch
- self.num_items = num_items
- self.item_size = item_size
-
- def __getitem__(self, index):
- return index
-
- def __len__(self):
- return self.num_items
-
- def collater(self, samples):
- return self.batch
-
- @property
- def sizes(self):
- return np.array([self.item_size] * self.num_items)
-
- def num_tokens(self, index):
- return self.item_size
-
- def size(self, index):
- return self.item_size
-
- def ordered_indices(self):
- return np.arange(self.num_items)
-
- @property
- def supports_prefetch(self):
- return False
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/checkpoint_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/checkpoint_utils.py
deleted file mode 100644
index ef5d4c9022c3c35722f0bc9150260c7a65d35e5f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/checkpoint_utils.py
+++ /dev/null
@@ -1,858 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import ast
-import collections
-import contextlib
-import logging
-import numpy as np
-import os
-import re
-import time
-import traceback
-from collections import OrderedDict
-from typing import Any, Dict, Optional, Union
-
-import torch
-from fairseq.data import data_utils
-from fairseq.dataclass.configs import CheckpointConfig
-from fairseq.dataclass.utils import (
- convert_namespace_to_omegaconf,
- overwrite_args_by_name,
-)
-from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP
-from fairseq.file_io import PathManager
-from fairseq.models import FairseqDecoder, FairseqEncoder
-from omegaconf import DictConfig, open_dict, OmegaConf
-
-
-logger = logging.getLogger(__name__)
-
-
-def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss):
- from fairseq import meters
-
- # only one worker should attempt to create the required dir
- if trainer.data_parallel_rank == 0:
- os.makedirs(cfg.save_dir, exist_ok=True)
-
- prev_best = getattr(save_checkpoint, "best", val_loss)
- if val_loss is not None:
- best_function = max if cfg.maximize_best_checkpoint_metric else min
- save_checkpoint.best = best_function(val_loss, prev_best)
-
- if cfg.no_save:
- return
-
- trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state
-
- if not trainer.should_save_checkpoint_on_current_rank:
- if trainer.always_call_state_dict_during_save_checkpoint:
- trainer.state_dict()
- return
-
- write_timer = meters.StopwatchMeter()
- write_timer.start()
-
- epoch = epoch_itr.epoch
- end_of_epoch = epoch_itr.end_of_epoch()
- updates = trainer.get_num_updates()
-
- logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates")
-
- def is_better(a, b):
- return a >= b if cfg.maximize_best_checkpoint_metric else a <= b
-
- suffix = trainer.checkpoint_suffix
- checkpoint_conds = collections.OrderedDict()
- checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = (
- end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0
- )
- checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = (
- not end_of_epoch
- and cfg.save_interval_updates > 0
- and updates % cfg.save_interval_updates == 0
- )
- checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and (
- not hasattr(save_checkpoint, "best")
- or is_better(val_loss, save_checkpoint.best)
- )
- if val_loss is not None and cfg.keep_best_checkpoints > 0:
- worst_best = getattr(save_checkpoint, "best", None)
- chkpts = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if len(chkpts) > 0:
- p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0]
- worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), ""))
- # add random digits to resolve ties
- with data_utils.numpy_seed(epoch, updates, val_loss):
- rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints)
-
- checkpoint_conds[
- "checkpoint.best_{}_{:.3f}{}{}.pt".format(
- cfg.best_checkpoint_metric,
- val_loss,
- rand_sfx,
- suffix
- )
- ] = worst_best is None or is_better(val_loss, worst_best)
- checkpoint_conds[
- "checkpoint_last{}.pt".format(suffix)
- ] = not cfg.no_last_checkpoints
-
- extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss}
- if hasattr(save_checkpoint, "best"):
- extra_state.update({"best": save_checkpoint.best})
-
- checkpoints = [
- os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond
- ]
- if len(checkpoints) > 0:
- trainer.save_checkpoint(checkpoints[0], extra_state)
- for cp in checkpoints[1:]:
- if cfg.write_checkpoints_asynchronously:
- # TODO[ioPath]: Need to implement a delayed asynchronous
- # file copying/moving feature.
- logger.warning(
- f"ioPath is not copying {checkpoints[0]} to {cp} "
- "since async write mode is on."
- )
- else:
- assert PathManager.copy(
- checkpoints[0], cp, overwrite=True
- ), f"Failed to copy {checkpoints[0]} to {cp}"
-
- write_timer.stop()
- logger.info(
- "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format(
- checkpoints[0], epoch, updates, val_loss, write_timer.sum
- )
- )
-
- if not end_of_epoch and cfg.keep_interval_updates > 0:
- # remove old checkpoints; checkpoints are sorted in descending order
- if cfg.keep_interval_updates_pattern == -1:
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix)
- )
- else:
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix),
- keep_match=True,
- )
- checkpoints = [
- x[0]
- for x in checkpoints
- if x[1] % cfg.keep_interval_updates_pattern != 0
- ]
-
- for old_chk in checkpoints[cfg.keep_interval_updates :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_last_epochs > 0:
- # remove old epoch checkpoints; checkpoints are sorted in descending order
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix)
- )
- for old_chk in checkpoints[cfg.keep_last_epochs :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_best_checkpoints > 0:
- # only keep the best N checkpoints according to validation metric
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if not cfg.maximize_best_checkpoint_metric:
- checkpoints = checkpoints[::-1]
- for old_chk in checkpoints[cfg.keep_best_checkpoints :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
-
-def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args):
- """
- Load a checkpoint and restore the training iterator.
-
- *passthrough_args* will be passed through to
- ``trainer.get_train_iterator``.
- """
-
- reset_optimizer = cfg.reset_optimizer
- reset_lr_scheduler = cfg.reset_lr_scheduler
- optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides)
- reset_meters = cfg.reset_meters
- reset_dataloader = cfg.reset_dataloader
-
- if cfg.finetune_from_model is not None and (
- reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader
- ):
- raise ValueError(
- "--finetune-from-model can not be set together with either --reset-optimizer"
- " or reset_lr_scheduler or reset_meters or reset_dataloader"
- )
-
- suffix = trainer.checkpoint_suffix
- if (
- cfg.restore_file == "checkpoint_last.pt"
- ): # default value of restore_file is 'checkpoint_last.pt'
- checkpoint_path = os.path.join(
- cfg.save_dir, "checkpoint_last{}.pt".format(suffix)
- )
- first_launch = not PathManager.exists(checkpoint_path)
- if cfg.finetune_from_model is not None and first_launch:
- # if there is no last checkpoint to restore, start the finetune from pretrained model
- # else just use usual logic to load checkpoint, e.g. restart from last checkpoint and etc.
- if PathManager.exists(cfg.finetune_from_model):
- checkpoint_path = cfg.finetune_from_model
- reset_optimizer = True
- reset_lr_scheduler = True
- reset_meters = True
- reset_dataloader = True
- logger.info(
- f"loading pretrained model from {checkpoint_path}: "
- "optimizer, lr scheduler, meters, dataloader will be reset"
- )
- else:
- raise ValueError(
-                    f"--finetune-from-model {cfg.finetune_from_model} does not exist"
- )
- elif suffix is not None:
- checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt")
- else:
- checkpoint_path = cfg.restore_file
-
- if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model:
- raise ValueError(
- "--finetune-from-model and --restore-file (non-default value) "
- "can not be specified together: " + str(cfg)
- )
-
- extra_state = trainer.load_checkpoint(
- checkpoint_path,
- reset_optimizer,
- reset_lr_scheduler,
- optimizer_overrides,
- reset_meters=reset_meters,
- )
-
- if (
- extra_state is not None
- and "best" in extra_state
- and not reset_optimizer
- and not reset_meters
- ):
- save_checkpoint.best = extra_state["best"]
-
- if extra_state is not None and not reset_dataloader:
- # restore iterator from checkpoint
- itr_state = extra_state["train_iterator"]
- epoch_itr = trainer.get_train_iterator(
- epoch=itr_state["epoch"], load_dataset=True, **passthrough_args
- )
- epoch_itr.load_state_dict(itr_state)
- else:
- epoch_itr = trainer.get_train_iterator(
- epoch=1, load_dataset=True, **passthrough_args
- )
-
- trainer.lr_step(epoch_itr.epoch)
-
- return extra_state, epoch_itr
-
-
-def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False):
- """Loads a checkpoint to CPU (with upgrading for backward compatibility).
-
- If doing single-GPU training or if the checkpoint is only being loaded by at
- most one process on each node (current default behavior is for only rank 0
- to read the checkpoint from disk), load_on_all_ranks should be False to
- avoid errors from torch.distributed not having been initialized or
- torch.distributed.barrier() hanging.
-
- If all processes on each node may be loading the checkpoint
- simultaneously, load_on_all_ranks should be set to True to avoid I/O
- conflicts.
-
- There's currently no support for > 1 but < all processes loading the
- checkpoint on each node.
- """
- local_path = PathManager.get_local_path(path)
- # The locally cached file returned by get_local_path() may be stale for
- # remote files that are periodically updated/overwritten (ex:
- # checkpoint_last.pt) - so we remove the local copy, sync across processes
- # (if needed), and then download a fresh copy.
- if local_path != path and PathManager.path_requires_pathmanager(path):
- try:
- os.remove(local_path)
- except FileNotFoundError:
- # With potentially multiple processes removing the same file, the
- # file being missing is benign (missing_ok isn't available until
- # Python 3.8).
- pass
- if load_on_all_ranks:
- torch.distributed.barrier()
- local_path = PathManager.get_local_path(path)
-
- with open(local_path, "rb") as f:
- state = torch.load(f, map_location=torch.device("cpu"))
-
- if "args" in state and state["args"] is not None and arg_overrides is not None:
- args = state["args"]
- for arg_name, arg_val in arg_overrides.items():
- setattr(args, arg_name, arg_val)
-
- if "cfg" in state and state["cfg"] is not None:
-
- # hack to be able to set Namespace in dict config. this should be removed when we update to newer
- # omegaconf version that supports object flags, or when we migrate all existing models
- from omegaconf import _utils
-
- old_primitive = _utils.is_primitive_type
- _utils.is_primitive_type = lambda _: True
-
- state["cfg"] = OmegaConf.create(state["cfg"])
-
- _utils.is_primitive_type = old_primitive
- OmegaConf.set_struct(state["cfg"], True)
-
- if arg_overrides is not None:
- overwrite_args_by_name(state["cfg"], arg_overrides)
-
- state = _upgrade_state_dict(state)
- return state
-
-
-def load_model_ensemble(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- """Loads an ensemble of models.
-
- Args:
- filenames (List[str]): checkpoint files to load
- arg_overrides (Dict[str,Any], optional): override model args that
- were used during model training
- task (fairseq.tasks.FairseqTask, optional): task to use for loading
- """
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble, args, _task = load_model_ensemble_and_task(
- filenames,
- arg_overrides,
- task,
- strict,
- suffix,
- num_shards,
- state,
- )
- return ensemble, args
-
-
-def get_maybe_sharded_checkpoint_filename(
- filename: str, suffix: str, shard_idx: int, num_shards: int
-) -> str:
- orig_filename = filename
- filename = filename.replace(".pt", suffix + ".pt")
- fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt"
- model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt"
- if PathManager.exists(fsdp_filename):
- return fsdp_filename
- elif num_shards > 1:
- return model_parallel_filename
- else:
- return filename
-
-
-def load_model_ensemble_and_task(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- assert state is None or len(filenames) == 1
-
- from fairseq import tasks
-
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble = []
- cfg = None
- for filename in filenames:
- orig_filename = filename
- model_shard_state = {"shard_weights": [], "shard_metadata": []}
- assert num_shards > 0
- st = time.time()
- for shard_idx in range(num_shards):
- filename = get_maybe_sharded_checkpoint_filename(
- orig_filename, suffix, shard_idx, num_shards
- )
-
- if not PathManager.exists(filename):
- raise IOError("Model file not found: {}".format(filename))
- if state is None:
- state = load_checkpoint_to_cpu(filename, arg_overrides)
- if "args" in state and state["args"] is not None:
- cfg = convert_namespace_to_omegaconf(state["args"])
- elif "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- else:
- raise RuntimeError(
- f"Neither args nor cfg exist in state keys = {state.keys()}"
- )
-
- if task is None:
- task = tasks.setup_task(cfg.task)
-
- if "task_state" in state:
- task.load_state_dict(state["task_state"])
-
- if "fsdp_metadata" in state and num_shards > 1:
- model_shard_state["shard_weights"].append(state["model"])
- model_shard_state["shard_metadata"].append(state["fsdp_metadata"])
- # check FSDP import before the code goes too far
- if not has_FSDP:
- raise ImportError(
- "Cannot find FullyShardedDataParallel. "
- "Please install fairscale with: pip install fairscale"
- )
- if shard_idx == num_shards - 1:
- consolidated_model_state = FSDP.consolidate_shard_weights(
- shard_weights=model_shard_state["shard_weights"],
- shard_metadata=model_shard_state["shard_metadata"],
- )
- model = task.build_model(cfg.model)
- model.load_state_dict(
- consolidated_model_state, strict=strict, model_cfg=cfg.model
- )
- else:
- # model parallel checkpoint or unsharded checkpoint
- model = task.build_model(cfg.model)
- model.load_state_dict(
- state["model"], strict=strict, model_cfg=cfg.model
- )
-
- # reset state so it gets loaded for the next model in ensemble
- state = None
- if shard_idx % 10 == 0 and shard_idx > 0:
- elapsed = time.time() - st
- logger.info(
- f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard"
- )
-
- # build model for ensemble
- ensemble.append(model)
- return ensemble, cfg, task
-
-
-def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False):
- """Retrieves all checkpoints found in `path` directory.
-
- Checkpoints are identified by matching filename to the specified pattern. If
- the pattern contains groups, the result will be sorted by the first group in
- descending order.
- """
- pt_regexp = re.compile(pattern)
- files = PathManager.ls(path)
-
- entries = []
- for i, f in enumerate(files):
- m = pt_regexp.fullmatch(f)
- if m is not None:
- idx = float(m.group(1)) if len(m.groups()) > 0 else i
- entries.append((idx, m.group(0)))
- if keep_match:
- return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)]
- else:
- return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)]
-
-
-def torch_persistent_save(obj, filename, async_write: bool = False):
- if async_write:
- with PathManager.opena(filename, "wb") as f:
- _torch_persistent_save(obj, f)
- else:
- if PathManager.supports_rename(filename):
- # do atomic save
- with PathManager.open(filename + ".tmp", "wb") as f:
- _torch_persistent_save(obj, f)
- PathManager.rename(filename + ".tmp", filename)
- else:
- # fallback to non-atomic save
- with PathManager.open(filename, "wb") as f:
- _torch_persistent_save(obj, f)
-
-
-def _torch_persistent_save(obj, f):
- if isinstance(f, str):
- with PathManager.open(f, "wb") as h:
- torch_persistent_save(obj, h)
- return
- for i in range(3):
- try:
- return torch.save(obj, f)
- except Exception:
- if i == 2:
- logger.error(traceback.format_exc())
- raise
-
-
-def _upgrade_state_dict(state):
- """Helper for upgrading old model checkpoints."""
-
- # add optimizer_history
- if "optimizer_history" not in state:
- state["optimizer_history"] = [
- {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]}
- ]
- state["last_optimizer_state"] = state["optimizer"]
- del state["optimizer"]
- del state["best_loss"]
- # move extra_state into sub-dictionary
- if "epoch" in state and "extra_state" not in state:
- state["extra_state"] = {
- "epoch": state["epoch"],
- "batch_offset": state["batch_offset"],
- "val_loss": state["val_loss"],
- }
- del state["epoch"]
- del state["batch_offset"]
- del state["val_loss"]
- # reduce optimizer history's memory usage (only keep the last state)
- if "optimizer" in state["optimizer_history"][-1]:
- state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"]
- for optim_hist in state["optimizer_history"]:
- del optim_hist["optimizer"]
- # record the optimizer class name
- if "optimizer_name" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG"
- # move best_loss into lr_scheduler_state
- if "lr_scheduler_state" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["lr_scheduler_state"] = {
- "best": state["optimizer_history"][-1]["best_loss"]
- }
- del state["optimizer_history"][-1]["best_loss"]
- # keep track of number of updates
- if "num_updates" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["num_updates"] = 0
- # old model checkpoints may not have separate source/target positions
- if (
- "args" in state
- and hasattr(state["args"], "max_positions")
- and not hasattr(state["args"], "max_source_positions")
- ):
- state["args"].max_source_positions = state["args"].max_positions
- state["args"].max_target_positions = state["args"].max_positions
- # use stateful training data iterator
- if "train_iterator" not in state["extra_state"]:
- state["extra_state"]["train_iterator"] = {
- "epoch": state["extra_state"]["epoch"],
- "iterations_in_epoch": state["extra_state"].get("batch_offset", 0),
- }
-
- # backward compatibility, cfg updates
- if "args" in state and state["args"] is not None:
- # default to translation task
- if not hasattr(state["args"], "task"):
- state["args"].task = "translation"
- # --raw-text and --lazy-load are deprecated
- if getattr(state["args"], "raw_text", False):
- state["args"].dataset_impl = "raw"
- elif getattr(state["args"], "lazy_load", False):
- state["args"].dataset_impl = "lazy"
- # epochs start at 1
- if state["extra_state"]["train_iterator"] is not None:
- state["extra_state"]["train_iterator"]["epoch"] = max(
- state["extra_state"]["train_iterator"].get("epoch", 1), 1
- )
- # --remove-bpe ==> --postprocess
- if hasattr(state["args"], "remove_bpe"):
- state["args"].post_process = state["args"].remove_bpe
- # --min-lr ==> --stop-min-lr
- if hasattr(state["args"], "min_lr"):
- state["args"].stop_min_lr = state["args"].min_lr
- del state["args"].min_lr
- # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion
- if (
- hasattr(state["args"], "criterion")
- and state["args"].criterion in [
- "binary_cross_entropy",
- "kd_binary_cross_entropy",
- ]
- ):
- state["args"].criterion = "wav2vec"
- # remove log_keys if it's None (criteria will supply a default value of [])
- if hasattr(state["args"], "log_keys") and state["args"].log_keys is None:
- delattr(state["args"], "log_keys")
- # speech_pretraining => audio pretraining
- if (
- hasattr(state["args"], "task")
- and state["args"].task == "speech_pretraining"
- ):
- state["args"].task = "audio_pretraining"
- # audio_cpc => wav2vec
- if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc":
- state["args"].arch = "wav2vec"
- # convert legacy float learning rate to List[float]
- if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float):
- state["args"].lr = [state["args"].lr]
- # convert task data arg to a string instead of List[string]
- if (
- hasattr(state["args"], "data")
- and isinstance(state["args"].data, list)
- and len(state["args"].data) > 0
- ):
- state["args"].data = state["args"].data[0]
- # remove keys in state["args"] related to teacher-student learning
- for key in [
- "static_teachers",
- "static_teacher_weights",
- "dynamic_teachers",
- "dynamic_teacher_weights",
- ]:
- if key in state["args"]:
- delattr(state["args"], key)
-
- state["cfg"] = convert_namespace_to_omegaconf(state["args"])
-
- if "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- with open_dict(cfg):
- # any upgrades for Hydra-based configs
- if (
- "task" in cfg
- and "eval_wer_config" in cfg.task
- and isinstance(cfg.task.eval_wer_config.print_alignment, bool)
- ):
- cfg.task.eval_wer_config.print_alignment = "hard"
- if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool):
- cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None
- if (
- "model" in cfg
- and "w2v_args" in cfg.model
- and cfg.model.w2v_args is not None
- and (
- hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args
- )
- and hasattr(cfg.model.w2v_args.task, "eval_wer_config")
- and cfg.model.w2v_args.task.eval_wer_config is not None
- and isinstance(
- cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool
- )
- ):
- cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard"
-
- return state
-
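Each branch of `_upgrade_state_dict` is a small rewrite on the checkpoint's saved argument namespace. Two of them in isolation (the `Namespace` here is a mock, not a real checkpoint):

```python
from argparse import Namespace

def upgrade_args(args: Namespace) -> Namespace:
    """Apply two of the backward-compatibility rewrites shown above."""
    # convert legacy float learning rate to List[float]
    if hasattr(args, "lr") and isinstance(args.lr, float):
        args.lr = [args.lr]
    # --min-lr ==> --stop-min-lr
    if hasattr(args, "min_lr"):
        args.stop_min_lr = args.min_lr
        delattr(args, "min_lr")
    return args

old = Namespace(lr=0.25, min_lr=1e-9)
new = upgrade_args(old)
print(new.lr, new.stop_min_lr)  # [0.25] 1e-09
```

Every rewrite is guarded by `hasattr`, so already-upgraded checkpoints pass through unchanged and the function is safe to apply repeatedly.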
-
-def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]):
- """Prune the given state_dict if desired for LayerDrop
- (https://arxiv.org/abs/1909.11556).
-
- Training with LayerDrop allows models to be robust to pruning at inference
- time. This function prunes state_dict to allow smaller models to be loaded
- from a larger model and re-maps the existing state_dict for this to occur.
-
- It's called by functions that load models from checkpoints and does not
- need to be called directly.
- """
- arch = None
- if model_cfg is not None:
- arch = (
- model_cfg._name
- if isinstance(model_cfg, DictConfig)
- else getattr(model_cfg, "arch", None)
- )
-
- if not model_cfg or arch is None or arch == "ptt_transformer":
- # args should not be none, but don't crash if it is.
- return state_dict
-
- encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None)
- decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None)
-
- if not encoder_layers_to_keep and not decoder_layers_to_keep:
- return state_dict
-
- # apply pruning
- logger.info(
- "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop"
- )
-
- def create_pruning_pass(layers_to_keep, layer_name):
- keep_layers = sorted(
- int(layer_string) for layer_string in layers_to_keep.split(",")
- )
- mapping_dict = {}
- for i in range(len(keep_layers)):
- mapping_dict[str(keep_layers[i])] = str(i)
-
- regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name))
- return {"substitution_regex": regex, "mapping_dict": mapping_dict}
-
- pruning_passes = []
- if encoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder"))
- if decoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder"))
-
- new_state_dict = {}
- for layer_name in state_dict.keys():
- match = re.search(r"\.layers\.(\d+)\.", layer_name)
- # if layer has no number in it, it is a supporting layer, such as an
- # embedding
- if not match:
- new_state_dict[layer_name] = state_dict[layer_name]
- continue
-
- # otherwise, layer should be pruned.
- original_layer_number = match.group(1)
- # figure out which mapping dict to replace from
- for pruning_pass in pruning_passes:
- if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[
- "substitution_regex"
- ].search(layer_name):
- new_layer_number = pruning_pass["mapping_dict"][original_layer_number]
- substitution_match = pruning_pass["substitution_regex"].search(
- layer_name
- )
- new_state_key = (
- layer_name[: substitution_match.start(1)]
- + new_layer_number
- + layer_name[substitution_match.end(1) :]
- )
- new_state_dict[new_state_key] = state_dict[layer_name]
-
-    # Since layers are now pruned, *_layers_to_keep are no longer needed.
-    # This is more of an "it would make it work" fix than a proper fix.
- if isinstance(model_cfg, DictConfig):
- context = open_dict(model_cfg)
- else:
- context = contextlib.ExitStack()
- with context:
- if hasattr(model_cfg, "encoder_layers_to_keep"):
- model_cfg.encoder_layers_to_keep = None
- if hasattr(model_cfg, "decoder_layers_to_keep"):
- model_cfg.decoder_layers_to_keep = None
-
- return new_state_dict
-
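The renumbering performed by `create_pruning_pass` plus the key rewrite in the main loop can be condensed into one function; a sketch over a toy state dict (keys are fabricated):

```python
import re

def remap_layers(state_dict, layers_to_keep, prefix="encoder"):
    """Keep only the listed layer indices and renumber them contiguously from 0."""
    keep = sorted(int(x) for x in layers_to_keep.split(","))
    mapping = {str(old): str(new) for new, old in enumerate(keep)}
    regex = re.compile(r"^{p}.*\.layers\.(\d+)".format(p=prefix))
    out = {}
    for key, value in state_dict.items():
        m = regex.search(key)
        if m is None:
            out[key] = value  # supporting layer, e.g. an embedding
        elif m.group(1) in mapping:
            new_key = key[: m.start(1)] + mapping[m.group(1)] + key[m.end(1):]
            out[new_key] = value
    return out

sd = {"encoder.layers.0.w": "a", "encoder.layers.2.w": "c", "encoder.embed.w": "e"}
print(remap_layers(sd, "0,2"))
# layer 0 stays 0, layer 2 becomes layer 1, the embedding passes through
```

The contiguous renumbering is what lets a smaller model, whose modules are named `layers.0`, `layers.1`, ..., load the surviving weights of a larger LayerDrop-trained model.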
-
-def load_pretrained_component_from_model(
- component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str
-):
- """
- Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the
- provided `component` object. If state_dict fails to load, there may be a
- mismatch in the architecture of the corresponding `component` found in the
- `checkpoint` file.
- """
- if not PathManager.exists(checkpoint):
- raise IOError("Model file not found: {}".format(checkpoint))
- state = load_checkpoint_to_cpu(checkpoint)
- if isinstance(component, FairseqEncoder):
- component_type = "encoder"
- elif isinstance(component, FairseqDecoder):
- component_type = "decoder"
- else:
-        raise ValueError(
-            "component to load must be either a FairseqEncoder or "
-            "FairseqDecoder. Loading other component types is not supported."
-        )
- component_state_dict = OrderedDict()
- for key in state["model"].keys():
- if key.startswith(component_type):
- # encoder.input_layers.0.0.weight --> input_layers.0.0.weight
- component_subkey = key[len(component_type) + 1 :]
- component_state_dict[component_subkey] = state["model"][key]
- component.load_state_dict(component_state_dict, strict=True)
- return component
-
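The key filtering in `load_pretrained_component_from_model` is a prefix strip; standalone (keys invented for the example):

```python
from collections import OrderedDict

def component_state(model_state, component_type="encoder"):
    """Extract one component's weights, dropping the leading 'encoder.'/'decoder.'."""
    prefix = component_type + "."
    return OrderedDict(
        (key[len(prefix):], value)
        for key, value in model_state.items()
        if key.startswith(prefix)
    )

full = {"encoder.input_layers.0.0.weight": 1, "decoder.embed.weight": 2}
print(dict(component_state(full)))  # {'input_layers.0.0.weight': 1}
```

Matching on `component_type + "."` rather than the bare name is slightly stricter than the `startswith(component_type)` check above, since it cannot pick up keys that merely share the prefix.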
-
-def verify_checkpoint_directory(save_dir: str) -> None:
- if not os.path.exists(save_dir):
- os.makedirs(save_dir, exist_ok=True)
- temp_file_path = os.path.join(save_dir, "dummy")
- try:
- with open(temp_file_path, "w"):
- pass
- except OSError as e:
- logger.warning(
- "Unable to access checkpoint save directory: {}".format(save_dir)
- )
- raise e
- else:
- os.remove(temp_file_path)
-
-
-def load_ema_from_checkpoint(fpath):
- """Loads exponential moving averaged (EMA) checkpoint from input and
- returns a model with ema weights.
-
- Args:
- fpath: A string path of checkpoint to load from.
-
- Returns:
- A dict of string keys mapping to various values. The 'model' key
- from the returned dict should correspond to an OrderedDict mapping
- string parameter names to torch Tensors.
- """
- params_dict = collections.OrderedDict()
- new_state = None
-
- with PathManager.open(fpath, 'rb') as f:
- new_state = torch.load(
- f,
- map_location=(
- lambda s, _: torch.serialization.default_restore_location(s, 'cpu')
- ),
- )
-
- # EMA model is stored in a separate "extra state"
- model_params = new_state['extra_state']['ema']
-
- for key in list(model_params.keys()):
- p = model_params[key]
- if isinstance(p, torch.HalfTensor):
- p = p.float()
- if key not in params_dict:
- params_dict[key] = p.clone()
- # NOTE: clone() is needed in case of p is a shared parameter
- else:
- raise ValueError("Key {} is repeated in EMA model params.".format(key))
-
- if len(params_dict) == 0:
-        raise ValueError(
-            f"Input checkpoint path '{fpath}' does not contain "
-            "ema model weights; was this model trained with EMA?"
-        )
-
- new_state['model'] = params_dict
- return new_state
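Structurally, `load_ema_from_checkpoint` swaps the EMA copy stored under `extra_state` into the `model` slot. A tensor-free sketch of that reshuffle (the checkpoint dict is mocked):

```python
def apply_ema(state):
    """Replace the live weights with the EMA copy kept in extra_state."""
    ema = state["extra_state"]["ema"]
    if not ema:
        raise ValueError("checkpoint does not contain ema model weights")
    # copy each entry, mirroring the .clone() in the real code for shared params
    state["model"] = {key: value for key, value in ema.items()}
    return state

ckpt = {"model": {"w": 0.9}, "extra_state": {"ema": {"w": 1.0}}}
print(apply_ema(ckpt)["model"])  # {'w': 1.0}
```

The real function additionally promotes half-precision tensors to float and rejects duplicate parameter names; this sketch keeps only the structural move.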
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp
deleted file mode 100644
index ece47a8d908b93cec102743070c9057986d39d3f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <torch/extension.h>
-#include <vector>
-
-std::vector<at::Tensor>
-lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l);
-
-std::vector<at::Tensor> lightconv_cuda_backward(
- at::Tensor gradOutput,
- int padding_l,
- at::Tensor input,
- at::Tensor filters);
-
-#define CHECK_CUDA(x) \
- AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) \
- AT_ASSERTM(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) \
- CHECK_CUDA(x); \
- CHECK_CONTIGUOUS(x)
-
-std::vector<at::Tensor>
-lightconv_forward(at::Tensor input, at::Tensor filters, int padding_l) {
- CHECK_INPUT(input);
- CHECK_INPUT(filters);
-
- return lightconv_cuda_forward(input, filters, padding_l);
-}
-
-std::vector<at::Tensor> lightconv_backward(
- at::Tensor gradOutput,
- int padding_l,
- at::Tensor input,
- at::Tensor filters) {
- CHECK_INPUT(gradOutput);
- CHECK_INPUT(input);
- CHECK_INPUT(filters);
-
- return lightconv_cuda_backward(gradOutput, padding_l, input, filters);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def("forward", &lightconv_forward, "lightconv forward (CUDA)");
-  m.def("backward", &lightconv_backward, "lightconv backward (CUDA)");
-}
diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/docs/conf.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/docs/conf.py
deleted file mode 100644
index ea5f14051897b9545eb78ffc9acfaa77171237fe..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/docs/conf.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Indic NLP Library documentation build configuration file, created by
-# sphinx-quickstart on Tue Nov 3 01:50:37 2015.
-#
-# This file is execfile()d with the current directory set to its containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import sys, os
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('..'))
-
-# -- General configuration -----------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be extensions
-# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['sphinx.ext.autodoc', 'sphinx.ext.mathjax', 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinxarg.ext']
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-#source_encoding = 'utf-8-sig'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = 'Indic NLP Library'
-copyright = '2015, Anoop Kunchukuttan'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = '0.2'
-# The full version, including alpha/beta/rc tags.
-release = '0.2'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['_build']
-
-# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
-
-
-# -- Options for HTML output ---------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-html_theme = 'sphinx_rtd_theme'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-#html_theme_options = {}
-
-# Add any paths that contain custom themes here, relative to this directory.
-#html_theme_path = []
-
-# The name for this set of Sphinx documents. If None, it defaults to
-# " v documentation".
-#html_title = None
-
-# A shorter title for the navigation bar. Default is the same as html_title.
-#html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-#html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-#html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-#html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-#html_additional_pages = {}
-
-# If false, no module index is generated.
-#html_domain_indices = True
-
-# If false, no index is generated.
-#html_use_index = True
-
-# If true, the index is split into individual pages for each letter.
-#html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
-
-# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-#html_show_sphinx = True
-
-# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-#html_show_copyright = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a tag referring to it. The value of this option must be the
-# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
-
-# This is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = None
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'IndicNLPLibrarydoc'
-
-
-# -- Options for LaTeX output --------------------------------------------------
-
-latex_elements = {
-# The paper size ('letterpaper' or 'a4paper').
-#'papersize': 'letterpaper',
-
-# The font size ('10pt', '11pt' or '12pt').
-#'pointsize': '10pt',
-
-# Additional stuff for the LaTeX preamble.
-#'preamble': '',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [
- ('index', 'IndicNLPLibrary.tex', 'Indic NLP Library Documentation',
- 'Anoop Kunchukuttan', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-#latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-#latex_use_parts = False
-
-# If true, show page references after internal links.
-#latex_show_pagerefs = False
-
-# If true, show URL addresses after external links.
-#latex_show_urls = False
-
-# Documents to append as an appendix to all manuals.
-#latex_appendices = []
-
-# If false, no module index is generated.
-#latex_domain_indices = True
-
-
-# -- Options for manual page output --------------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [
- ('index', 'indicnlplibrary', 'Indic NLP Library Documentation',
- ['Anoop Kunchukuttan'], 1)
-]
-
-# If true, show URL addresses after external links.
-#man_show_urls = False
-
-
-# -- Options for Texinfo output ------------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-# dir menu entry, description, category)
-texinfo_documents = [
- ('index', 'IndicNLPLibrary', 'Indic NLP Library Documentation',
- 'Anoop Kunchukuttan', 'IndicNLPLibrary', 'NLP library for Indian languages',
- 'NLP'),
-]
-
-# Documents to append as an appendix to all manuals.
-#texinfo_appendices = []
-
-# If false, no module index is generated.
-#texinfo_domain_indices = True
-
-# How to display URL addresses: 'footnote', 'no', or 'inline'.
-#texinfo_show_urls = 'footnote'
diff --git a/spaces/Harveenchadha/en_to_indic_translation/prepare_data.sh b/spaces/Harveenchadha/en_to_indic_translation/prepare_data.sh
deleted file mode 100644
index 0db64ee9966dc1c8d90209b8b7c3d8e842c8c200..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/prepare_data.sh
+++ /dev/null
@@ -1,71 +0,0 @@
-exp_dir=$1
-src_lang=$2
-tgt_lang=$3
-train_data_dir=${4:-"$exp_dir/$src_lang-$tgt_lang"}
-devtest_data_dir=${5:-"$exp_dir/devtest/all/$src_lang-$tgt_lang"}
-
-echo "Running experiment ${exp_dir} on ${src_lang} to ${tgt_lang}"
-
-train_processed_dir=$exp_dir/data
-devtest_processed_dir=$exp_dir/data
-
-out_data_dir=$exp_dir/final_bin
-
-mkdir -p $train_processed_dir
-mkdir -p $devtest_processed_dir
-mkdir -p $out_data_dir
-
-# train preprocessing
-train_infname_src=$train_data_dir/train.$src_lang
-train_infname_tgt=$train_data_dir/train.$tgt_lang
-train_outfname_src=$train_processed_dir/train.SRC
-train_outfname_tgt=$train_processed_dir/train.TGT
-echo "Applying normalization and script conversion for train"
-input_size=`python scripts/preprocess_translate.py $train_infname_src $train_outfname_src $src_lang`
-input_size=`python scripts/preprocess_translate.py $train_infname_tgt $train_outfname_tgt $tgt_lang`
-echo "Number of sentences in train: $input_size"
-
-# dev preprocessing
-dev_infname_src=$devtest_data_dir/dev.$src_lang
-dev_infname_tgt=$devtest_data_dir/dev.$tgt_lang
-dev_outfname_src=$devtest_processed_dir/dev.SRC
-dev_outfname_tgt=$devtest_processed_dir/dev.TGT
-echo "Applying normalization and script conversion for dev"
-input_size=`python scripts/preprocess_translate.py $dev_infname_src $dev_outfname_src $src_lang`
-input_size=`python scripts/preprocess_translate.py $dev_infname_tgt $dev_outfname_tgt $tgt_lang`
-echo "Number of sentences in dev: $input_size"
-
-# test preprocessing
-test_infname_src=$devtest_data_dir/test.$src_lang
-test_infname_tgt=$devtest_data_dir/test.$tgt_lang
-test_outfname_src=$devtest_processed_dir/test.SRC
-test_outfname_tgt=$devtest_processed_dir/test.TGT
-echo "Applying normalization and script conversion for test"
-input_size=`python scripts/preprocess_translate.py $test_infname_src $test_outfname_src $src_lang`
-input_size=`python scripts/preprocess_translate.py $test_infname_tgt $test_outfname_tgt $tgt_lang`
-echo "Number of sentences in test: $input_size"
-
-echo "Learning bpe. This will take a very long time depending on the size of the dataset"
-echo `date`
-# learn bpe for preprocessed_train files
-bash learn_bpe.sh $exp_dir
-echo `date`
-
-echo "Applying bpe"
-bash apply_bpe_traindevtest_notag.sh $exp_dir
-
-mkdir -p $exp_dir/final
-
-# this is only required for joint training
-# echo "Adding language tags"
-# python scripts/add_tags_translate.py $outfname._bpe $outfname.bpe $src_lang $tgt_lang
-
-# this is an important step if you are training with TPU and using num_batch_buckets:
-# the current implementation does not remove outliers before bucketing, and hence
-# removing these large sentences ourselves helps with getting better buckets
-python scripts/remove_large_sentences.py $exp_dir/bpe/train.SRC $exp_dir/bpe/train.TGT $exp_dir/final/train.SRC $exp_dir/final/train.TGT
-python scripts/remove_large_sentences.py $exp_dir/bpe/dev.SRC $exp_dir/bpe/dev.TGT $exp_dir/final/dev.SRC $exp_dir/final/dev.TGT
-python scripts/remove_large_sentences.py $exp_dir/bpe/test.SRC $exp_dir/bpe/test.TGT $exp_dir/final/test.SRC $exp_dir/final/test.TGT
-
-echo "Binarizing data"
-bash binarize_training_exp.sh $exp_dir SRC TGT
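`scripts/remove_large_sentences.py` is not shown here; a minimal stand-in for the parallel-corpus length filter it performs might look like the following (the 250-token cutoff is an assumed value, not taken from the script):

```python
def filter_long_pairs(src_lines, tgt_lines, max_tokens=250):
    """Drop sentence pairs where either side exceeds max_tokens whitespace tokens."""
    kept_src, kept_tgt = [], []
    for src, tgt in zip(src_lines, tgt_lines):
        if len(src.split()) <= max_tokens and len(tgt.split()) <= max_tokens:
            kept_src.append(src)
            kept_tgt.append(tgt)
    return kept_src, kept_tgt

src = ["a b c", "x " * 300]
tgt = ["d e", "y"]
print(filter_long_pairs(src, tgt))  # (['a b c'], ['d e'])
```

Filtering both sides together keeps the source and target files line-aligned, which the binarization step that follows depends on.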
diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/hindi_to_kannada_transliterator.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/hindi_to_kannada_transliterator.py
deleted file mode 100644
index a88f7d42120a0ae6eedaea91080c8d2a75539ee8..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/hindi_to_kannada_transliterator.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import sys
-
-from indicnlp import common
-from indicnlp import loader
-from indicnlp.normalize import indic_normalize
-from indicnlp.script import indic_scripts as isc
-from indicnlp.transliterate import unicode_transliterate
-
-# INDIC_NLP_RESOURCES must be set to a local checkout of the
-# indic_nlp_resources repository before this script is run
-common.set_resources_path(INDIC_NLP_RESOURCES)
-
-if __name__ == '__main__':
- """
-    This script transliterates Hindi to Kannada. It removes or remaps
-    characters found only in Hindi, and adds a halanta to words ending
-    with a consonant, as is the convention in Kannada.
- """
-
- infname=sys.argv[1] # one sentence/word per line. Sentences should be space-tokenized
-    outfname=sys.argv[2]
- loader.load()
-
- normalizer_factory=indic_normalize.IndicNormalizerFactory()
- normalizer=normalizer_factory.get_normalizer('hi')
-
- with open(infname,'r',encoding='utf-8') as infile, \
- open(outfname,'w',encoding='utf-8') as outfile:
- for line in infile:
- line=line.strip()
- line=normalizer.normalize(line)
-
- ## replace chandrabindus with anusvara
- line=line.replace('\u0900','\u0902')
- line=line.replace('\u0901','\u0902')
-
- ### replace chandra e and o diacritics with e and o respectively
- #line=line.replace('\u0945','\u0947')
- #line=line.replace('\u0949','\u094b')
-
- ### replace chandra e and o diacritics with a diacritic
- ## this seems to be general usage
- line=line.replace('\u0945','\u093e')
- line=line.replace('\u0949','\u093e')
-
- ## remove nukta
- line=line.replace('\u093c','')
-
- ## add halant if word ends with consonant
- #if isc.is_consonant(isc.get_phonetic_feature_vector(line[-1],'hi')):
- # line=line+'\u094d'
-            outwords=[]
-            for word in line.split(' '):
-                if word and isc.is_consonant(isc.get_phonetic_feature_vector(word[-1],'hi')):
-                    word=word+'\u094d'
-                outwords.append(word)
- line=' '.join(outwords)
-
-
- ## script conversion
- line=unicode_transliterate.UnicodeIndicTransliterator.transliterate(line,'hi','kn')
-
- outfile.write(line+'\n')
-
-
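The chandrabindu, chandra-vowel, and nukta rewrites above are plain codepoint substitutions; isolated as a function (the input string is fabricated; the codepoints are the ones used in the script):

```python
def normalize_for_kannada(text: str) -> str:
    """Apply the Hindi-specific rewrites done before Devanagari->Kannada conversion."""
    replacements = {
        "\u0900": "\u0902",  # inverted chandrabindu -> anusvara
        "\u0901": "\u0902",  # chandrabindu -> anusvara
        "\u0945": "\u093e",  # chandra e matra -> aa matra
        "\u0949": "\u093e",  # chandra o matra -> aa matra
        "\u093c": "",        # drop nukta
    }
    for old, new in replacements.items():
        text = text.replace(old, new)
    return text

print(normalize_for_kannada("\u0939\u0901") == "\u0939\u0902")  # True
```

These rewrites run before the script conversion step because the target characters all have direct Kannada equivalents, while the originals do not.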
diff --git a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/assets/0.15589e04.css b/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/assets/0.15589e04.css
deleted file mode 100644
index 1e2d8b08c27bb20c61feaff58efaadabc9c8c6d8..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/assets/0.15589e04.css
+++ /dev/null
@@ -1 +0,0 @@
-*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-pla
diff --git a/spaces/ICML2022/OFA/fairseq/examples/gottbert/README.md b/spaces/ICML2022/OFA/fairseq/examples/gottbert/README.md
deleted file mode 100644
index 1d58feb279a4a50222290546c3bb285d3cea98e6..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/gottbert/README.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# GottBERT: a pure German language model
-
-## Introduction
-
-[GottBERT](http://arxiv.org/abs/2012.02110) is a RoBERTa-based language model pretrained on 145GB of German text.
-
-## Example usage
-
-### fairseq
-##### Load GottBERT from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-gottbert = torch.hub.load('pytorch/fairseq', 'gottbert-base')
-gottbert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load GottBERT (for PyTorch 1.0 or custom models):
-```python
-# Download gottbert model
-wget https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz
-tar -xzvf gottbert-base.tar.gz
-
-# Load the model in fairseq
-from fairseq.models.roberta import GottbertModel
-gottbert = GottbertModel.from_pretrained('/path/to/gottbert')
-gottbert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Filling masks:
-```python
-masked_line = 'Gott ist <mask> ! :)'
-gottbert.fill_mask(masked_line, topk=3)
-# [('Gott ist gut ! :)', 0.3642110526561737, ' gut'),
-# ('Gott ist überall ! :)', 0.06009674072265625, ' überall'),
-# ('Gott ist großartig ! :)', 0.0370681993663311, ' großartig')]
-```
-
-##### Extract features from GottBERT
-
-```python
-# Extract the last layer's features
-line = "Der erste Schluck aus dem Becher der Naturwissenschaft macht atheistisch , aber auf dem Grunde des Bechers wartet Gott !"
-tokens = gottbert.encode(line)
-last_layer_features = gottbert.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 27, 768])
-
-# Extract all layer's features (layer 0 is the embedding layer)
-all_layers = gottbert.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-## Citation
-If you use our work, please cite:
-
-```bibtex
-@misc{scheible2020gottbert,
- title={GottBERT: a pure German Language Model},
- author={Raphael Scheible and Fabian Thomczyk and Patric Tippmann and Victor Jaravine and Martin Boeker},
- year={2020},
- eprint={2012.02110},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
-}
-```
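The `fill_mask` example in the deleted README above ranks candidates by softmax probability over the logits at the masked position. A minimal stdlib-only sketch of that ranking step (the logit values here are made-up toys, not GottBERT's actual scores):

```python
import math

def topk_softmax(logits, k=3):
    """Rank candidate tokens by softmax probability, highest first."""
    z = max(logits.values())  # subtract the max logit for numerical stability
    exps = {tok: math.exp(v - z) for tok, v in logits.items()}
    total = sum(exps.values())
    return sorted(
        ((tok, e / total) for tok, e in exps.items()),
        key=lambda kv: kv[1],
        reverse=True,
    )[:k]

# Hypothetical toy logits for the masked position (not GottBERT's real scores)
candidates = topk_softmax({" gut": 2.0, " überall": 0.2, " großartig": -0.3, " blau": -2.0})
```

This mirrors the shape of the `fill_mask` output: a list of `(token, probability)` pairs in descending order.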
diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py
deleted file mode 100644
index 674b5849cba829cf4f07a69369e9cc6eed376d4c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import fileinput
-
-import sacrebleu
-
-
-for line in fileinput.input():
-    print(sacrebleu.tokenize_zh(line))
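The deleted script above is a thin stdin-to-stdout filter around `sacrebleu.tokenize_zh`. The same idea with a deliberately crude stand-in tokenizer (the real segmentation rules are sacrebleu's, not these):

```python
def toy_tokenize_zh(line):
    """Deliberately crude stand-in for sacrebleu.tokenize_zh:
    put spaces around CJK Unified Ideographs and collapse whitespace runs."""
    out = []
    for ch in line.rstrip("\n"):
        if "\u4e00" <= ch <= "\u9fff":  # CJK Unified Ideographs block
            out.append(" " + ch + " ")
        else:
            out.append(ch)
    return " ".join("".join(out).split())

# The deleted script applies the real tokenizer line by line:
#   for line in fileinput.input():
#       print(sacrebleu.tokenize_zh(line))
```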
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/pca.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/pca.py
deleted file mode 100644
index 948cf5319fd86ba1bccff65270b2881048faf9b1..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/pca.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-import os.path as osp
-import numpy as np
-
-import faiss
-
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="compute a pca matrix given an array of numpy features"
-    )
-    # fmt: off
-    parser.add_argument('data', help='numpy file containing features')
-    parser.add_argument('--output', help='where to save the pca matrix', required=True)
-    parser.add_argument('--dim', type=int, help='dim for pca reduction', required=True)
-    parser.add_argument('--eigen-power', type=float, default=0, help='eigen power, -0.5 for whitening')
-
-    return parser
-
-
-def main():
-    parser = get_parser()
-    args = parser.parse_args()
-
-    print("Reading features")
-    x = np.load(args.data, mmap_mode="r")
-
-    print("Computing PCA")
-    pca = faiss.PCAMatrix(x.shape[-1], args.dim, args.eigen_power)
-    pca.train(x)
-    b = faiss.vector_to_array(pca.b)
-    A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in)
-
-    os.makedirs(args.output, exist_ok=True)
-
-    prefix = str(args.dim)
-    if args.eigen_power != 0:
-        prefix += f"_{args.eigen_power}"
-
-    np.save(osp.join(args.output, f"{prefix}_pca_A"), A.T)
-    np.save(osp.join(args.output, f"{prefix}_pca_b"), b)
-
-
-if __name__ == "__main__":
-    main()
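For intuition, the `faiss.PCAMatrix` step in the deleted script amounts to mean-centering, eigendecomposing the feature covariance, and optionally rescaling each axis by a power of its eigenvalue (`--eigen-power -0.5` whitens). A 2-D stdlib-only sketch of that computation (illustration only, not the faiss implementation):

```python
import math

def pca2(points, eigen_power=0.0):
    """Toy 2-D analogue of the faiss.PCAMatrix step: mean-center,
    eigendecompose the 2x2 covariance, and scale each principal axis
    by eigenvalue ** eigen_power (eigen_power=-0.5 would whiten)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    eigvals = (tr / 2.0 + disc, tr / 2.0 - disc)  # closed form for a 2x2 symmetric matrix
    rows = []
    for lam in eigvals:
        if abs(cxy) > 1e-12:
            vx, vy = cxy, lam - cxx  # solves (A - lam*I) v = 0
        else:
            vx, vy = (1.0, 0.0) if abs(lam - cxx) <= abs(lam - cyy) else (0.0, 1.0)
        norm = math.hypot(vx, vy)
        scale = lam ** eigen_power if lam > 0 else 0.0
        rows.append((scale * vx / norm, scale * vy / norm))
    return rows, eigvals, (mx, my)
```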
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/monolingual_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/monolingual_dataset.py
deleted file mode 100644
index 54fd583b64a3a475324ade6eaaeccf593d747fdc..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/monolingual_dataset.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import FairseqDataset, data_utils
-
-
-def collate(samples, pad_idx, eos_idx, fixed_pad_length=None, pad_to_bsz=None):
-    if len(samples) == 0:
-        return {}
-
-    def merge(key, is_list=False):
-        if is_list:
-            res = []
-            for i in range(len(samples[0][key])):
-                res.append(
-                    data_utils.collate_tokens(
-                        [s[key][i] for s in samples],
-                        pad_idx,
-                        eos_idx,
-                        left_pad=False,
-                        pad_to_length=fixed_pad_length,
-                        pad_to_bsz=pad_to_bsz,
-                    )
-                )
-            return res
-        else:
-            return data_utils.collate_tokens(
-                [s[key] for s in samples],
-                pad_idx,
-                eos_idx,
-                left_pad=False,
-                pad_to_length=fixed_pad_length,
-                pad_to_bsz=pad_to_bsz,
-            )
-
-    src_tokens = merge("source")
-    if samples[0]["target"] is not None:
-        is_target_list = isinstance(samples[0]["target"], list)
-        target = merge("target", is_target_list)
-    else:
-        target = src_tokens
-
-    return {
-        "id": torch.LongTensor([s["id"] for s in samples]),
-        "nsentences": len(samples),
-        "ntokens": sum(len(s["source"]) for s in samples),
-        "net_input": {
-            "src_tokens": src_tokens,
-            "src_lengths": torch.LongTensor([s["source"].numel() for s in samples]),
-        },
-        "target": target,
-    }
-
-
-class MonolingualDataset(FairseqDataset):
-    """
-    A wrapper around torch.utils.data.Dataset for monolingual data.
-
-    Args:
-        dataset (torch.utils.data.Dataset): dataset to wrap
-        sizes (List[int]): sentence lengths
-        vocab (~fairseq.data.Dictionary): vocabulary
-        shuffle (bool, optional): shuffle the elements before batching
-            (default: True).
-    """
-
-    def __init__(
-        self,
-        dataset,
-        sizes,
-        src_vocab,
-        tgt_vocab=None,
-        add_eos_for_other_targets=False,
-        shuffle=False,
-        targets=None,
-        add_bos_token=False,
-        fixed_pad_length=None,
-        pad_to_bsz=None,
-        src_lang_idx=None,
-        tgt_lang_idx=None,
-    ):
-        self.dataset = dataset
-        self.sizes = np.array(sizes)
-        self.vocab = src_vocab
-        self.tgt_vocab = tgt_vocab or src_vocab
-        self.add_eos_for_other_targets = add_eos_for_other_targets
-        self.shuffle = shuffle
-        self.add_bos_token = add_bos_token
-        self.fixed_pad_length = fixed_pad_length
-        self.pad_to_bsz = pad_to_bsz
-        self.src_lang_idx = src_lang_idx
-        self.tgt_lang_idx = tgt_lang_idx
-
-        assert targets is None or all(
-            t in {"self", "future", "past"} for t in targets
-        ), "targets must be none or one of 'self', 'future', 'past'"
-        if targets is not None and len(targets) == 0:
-            targets = None
-        self.targets = targets
-
-    def __getitem__(self, index):
-        if self.targets is not None:
-            # *future_target* is the original sentence
-            # *source* is shifted right by 1 (maybe left-padded with eos)
-            # *past_target* is shifted right by 2 (left-padded as needed)
-            #
-            # Left-to-right language models should condition on *source* and
-            # predict *future_target*.
-            # Right-to-left language models should condition on *source* and
-            # predict *past_target*.
-            source, future_target, past_target = self.dataset[index]
-            source, target = self._make_source_target(
-                source, future_target, past_target
-            )
-        else:
-            source = self.dataset[index]
-            target = None
-        source, target = self._maybe_add_bos(source, target)
-        return {"id": index, "source": source, "target": target}
-
-    def __len__(self):
-        return len(self.dataset)
-
-    def _make_source_target(self, source, future_target, past_target):
-        if self.targets is not None:
-            target = []
-
-            if (
-                self.add_eos_for_other_targets
-                and (("self" in self.targets) or ("past" in self.targets))
-                and source[-1] != self.vocab.eos()
-            ):
-                # append eos at the end of source
-                source = torch.cat([source, source.new([self.vocab.eos()])])
-
-            if "future" in self.targets:
-                future_target = torch.cat(
-                    [future_target, future_target.new([self.vocab.pad()])]
-                )
-            if "past" in self.targets:
-                # first token is before the start of sentence which is only used in "none" break mode when
-                # add_eos_for_other_targets is False
-                past_target = torch.cat(
-                    [
-                        past_target.new([self.vocab.pad()]),
-                        past_target[1:],
-                        source[-2, None],
-                    ]
-                )
-
-            for t in self.targets:
-                if t == "self":
-                    target.append(source)
-                elif t == "future":
-                    target.append(future_target)
-                elif t == "past":
-                    target.append(past_target)
-                else:
-                    raise Exception("invalid target " + t)
-
-            if len(target) == 1:
-                target = target[0]
-        else:
-            target = future_target
-
-        return source, self._filter_vocab(target)
-
-    def _maybe_add_bos(self, source, target):
-        if self.add_bos_token:
-            source = torch.cat([source.new([self.vocab.bos()]), source])
-            if target is not None:
-                target = torch.cat([target.new([self.tgt_vocab.bos()]), target])
-        return source, target
-
-    def num_tokens_vec(self, indices):
-        """Return the number of tokens for a set of positions defined by indices.
-        This value is used to enforce ``--max-tokens`` during batching."""
-        return self.sizes[indices]
-
-    def _filter_vocab(self, target):
-        if len(self.tgt_vocab) != len(self.vocab):
-
-            def _filter(target):
-                mask = target.ge(len(self.tgt_vocab))
-                if mask.any():
-                    target[mask] = self.tgt_vocab.unk()
-                return target
-
-            if isinstance(target, list):
-                return [_filter(t) for t in target]
-            return _filter(target)
-        return target
-
-    def collater(self, samples):
-        """Merge a list of samples to form a mini-batch.
-
-        Args:
-            samples (List[dict]): samples to collate
-
-        Returns:
-            dict: a mini-batch with the following keys:
-
-                - `id` (LongTensor): example IDs in the original input order
-                - `ntokens` (int): total number of tokens in the batch
-                - `net_input` (dict): the input to the Model, containing keys:
-
-                  - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in
-                    the source sentence of shape `(bsz, src_len)`. Padding will
-                    appear on the right.
-
-                - `target` (LongTensor): a padded 2D Tensor of tokens in the
-                  target sentence of shape `(bsz, tgt_len)`. Padding will appear
-                  on the right.
-        """
-        return collate(
-            samples,
-            self.vocab.pad(),
-            self.vocab.eos(),
-            self.fixed_pad_length,
-            self.pad_to_bsz,
-        )
-
-    def num_tokens(self, index):
-        """Return the number of tokens in a sample. This value is used to
-        enforce ``--max-tokens`` during batching."""
-        return self.sizes[index]
-
-    def size(self, index):
-        """Return an example's size as a float or tuple. This value is used when
-        filtering a dataset with ``--max-positions``."""
-        return self.sizes[index]
-
-    def ordered_indices(self):
-        """Return an ordered list of indices. Batches will be constructed based
-        on this order."""
-        if self.shuffle:
-            order = [np.random.permutation(len(self))]
-        else:
-            order = [np.arange(len(self))]
-        order.append(self.sizes)
-        return np.lexsort(order)
-
-    @property
-    def supports_prefetch(self):
-        return getattr(self.dataset, "supports_prefetch", False)
-
-    def prefetch(self, indices):
-        self.dataset.prefetch(indices)
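The `collate` function in the deleted dataset file right-pads variable-length token sequences into a rectangular batch via `data_utils.collate_tokens`. A simplified, torch-free sketch of that padding step (lists of ints stand in for LongTensors):

```python
def collate_tokens(values, pad_idx, pad_to_length=None):
    """Right-pad a list of token-id lists into a rectangular batch,
    mirroring the left_pad=False path used by collate() above."""
    size = max(len(v) for v in values)
    if pad_to_length is not None:
        size = max(size, pad_to_length)
    return [list(v) + [pad_idx] * (size - len(v)) for v in values]
```

Every sequence ends up with length `max(longest, pad_to_length)`, with `pad_idx` filling the tail, which is why the docstring above promises that padding appears on the right.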
diff --git a/spaces/Illumotion/Koboldcpp/llama.h b/spaces/Illumotion/Koboldcpp/llama.h
deleted file mode 100644
index a78015adab30c37179a87249684df66963d62204..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/llama.h
+++ /dev/null
@@ -1,752 +0,0 @@
-#ifndef LLAMA_H
-#define LLAMA_H
-
-#include "ggml.h"
-#ifdef GGML_USE_CUBLAS
-#include "ggml-cuda.h"
-#define LLAMA_MAX_DEVICES GGML_CUDA_MAX_DEVICES
-#else
-#define LLAMA_MAX_DEVICES 1
-#endif // GGML_USE_CUBLAS
-#include <stddef.h>
-#include <stdint.h>
-#include <stdio.h>
-#include <stdbool.h>
-
-#ifdef LLAMA_SHARED
-# if defined(_WIN32) && !defined(__MINGW32__)
-# ifdef LLAMA_BUILD
-# define LLAMA_API __declspec(dllexport)
-# else
-# define LLAMA_API __declspec(dllimport)
-# endif
-# else
-# define LLAMA_API __attribute__ ((visibility ("default")))
-# endif
-#else
-# define LLAMA_API
-#endif
-
-#ifdef __GNUC__
-# define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
-#elif defined(_MSC_VER)
-# define DEPRECATED(func, hint) __declspec(deprecated(hint)) func
-#else
-# define DEPRECATED(func, hint) func
-#endif
-
-#define LLAMA_DEFAULT_SEED 0xFFFFFFFF
-
-#define LLAMA_MAX_RNG_STATE (64*1024)
-
-#define LLAMA_FILE_MAGIC_GGSN 0x6767736eu // 'ggsn'
-
-#define LLAMA_SESSION_MAGIC LLAMA_FILE_MAGIC_GGSN
-#define LLAMA_SESSION_VERSION 2
-
-#if defined(GGML_USE_CUBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL)
-// Defined when llama.cpp is compiled with support for offloading model layers to GPU.
-#define LLAMA_SUPPORTS_GPU_OFFLOAD
-#endif
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
- //
- // C interface
- //
- // TODO: show sample usage
- //
-
- struct llama_model;
- struct llama_context;
-
- typedef int32_t llama_pos;
- typedef int32_t llama_token;
- typedef int32_t llama_seq_id;
-
- enum llama_vocab_type {
- LLAMA_VOCAB_TYPE_SPM = 0, // SentencePiece
- LLAMA_VOCAB_TYPE_BPE = 1, // Byte Pair Encoding
- };
-
- enum llama_token_type {
- LLAMA_TOKEN_TYPE_UNDEFINED = 0,
- LLAMA_TOKEN_TYPE_NORMAL = 1,
- LLAMA_TOKEN_TYPE_UNKNOWN = 2,
- LLAMA_TOKEN_TYPE_CONTROL = 3,
- LLAMA_TOKEN_TYPE_USER_DEFINED = 4,
- LLAMA_TOKEN_TYPE_UNUSED = 5,
- LLAMA_TOKEN_TYPE_BYTE = 6,
- };
-
- // model file types
- enum llama_ftype {
- LLAMA_FTYPE_ALL_F32 = 0,
- LLAMA_FTYPE_MOSTLY_F16 = 1, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q4_0 = 2, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q4_1 = 3, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4, // tok_embeddings.weight and output.weight are F16
- // LLAMA_FTYPE_MOSTLY_Q4_2 = 5, // support has been removed
- // LLAMA_FTYPE_MOSTLY_Q4_3 = 6, // support has been removed
- LLAMA_FTYPE_MOSTLY_Q8_0 = 7, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q5_0 = 8, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q5_1 = 9, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q2_K = 10, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q3_K_S = 11, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q3_K_M = 12, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q3_K_L = 13, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q4_K_S = 14, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q4_K_M = 15, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q5_K_S = 16, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q5_K_M = 17, // except 1d tensors
- LLAMA_FTYPE_MOSTLY_Q6_K = 18, // except 1d tensors
-
- LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
- };
-
- typedef struct llama_token_data {
- llama_token id; // token id
- float logit; // log-odds of the token
- float p; // probability of the token
- } llama_token_data;
-
- typedef struct llama_token_data_array {
- llama_token_data * data;
- size_t size;
- bool sorted;
- } llama_token_data_array;
-
- typedef void (*llama_progress_callback)(float progress, void *ctx);
-
- // Input data for llama_decode
- // A llama_batch object can contain input about one or many sequences
- // The provided arrays (i.e. token, embd, pos, etc.) must have size of n_tokens
- //
- // - token : the token ids of the input (used when embd is NULL)
- // - embd : token embeddings (i.e. float vector of size n_embd) (used when token is NULL)
- // - pos : the positions of the respective token in the sequence
- // - seq_id : the sequence to which the respective token belongs
- // - logits : if zero, the logits for the respective token will not be output
- //
- typedef struct llama_batch {
- int32_t n_tokens;
-
- llama_token * token;
- float * embd;
- llama_pos * pos;
- llama_seq_id * seq_id;
- int8_t * logits;
-
- // NOTE: helpers for smooth API transition - can be deprecated in the future
- // for future-proof code, use the above fields instead and ignore everything below
- //
- // pos[i] = all_pos_0 + i*all_pos_1
- //
- llama_pos all_pos_0; // used if pos == NULL
- llama_pos all_pos_1; // used if pos == NULL
- llama_seq_id all_seq_id; // used if seq_id == NULL
- } llama_batch;
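The helper fields above define the fallback `pos[i] = all_pos_0 + i*all_pos_1` used when no explicit `pos` array is given. Sketched in Python for clarity (an illustration, not part of the C API):

```python
def batch_positions(n_tokens, pos=None, all_pos_0=0, all_pos_1=1):
    """Positions of the tokens in a llama_batch: the explicit pos array
    when provided, otherwise the documented fallback
    pos[i] = all_pos_0 + i * all_pos_1."""
    if pos is not None:
        return list(pos)
    return [all_pos_0 + i * all_pos_1 for i in range(n_tokens)]
```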
-
- struct llama_model_params {
- int32_t n_gpu_layers; // number of layers to store in VRAM
- int32_t main_gpu; // the GPU that is used for scratch and small tensors
- const float * tensor_split; // how to split layers across multiple GPUs (size: LLAMA_MAX_DEVICES)
-
- // called with a progress value between 0 and 1, pass NULL to disable
- llama_progress_callback progress_callback;
- // context pointer passed to the progress callback
- void * progress_callback_user_data;
-
- // Keep the booleans together to avoid misalignment during copy-by-value.
- bool vocab_only; // only load the vocabulary, no weights
- bool use_mmap; // use mmap if possible
- bool use_mlock; // force system to keep model in RAM
- };
-
- struct llama_context_params {
- uint32_t seed; // RNG seed, -1 for random
- uint32_t n_ctx; // text context, 0 = from model
- uint32_t n_batch; // prompt processing maximum batch size
- uint32_t n_threads; // number of threads to use for generation
- uint32_t n_threads_batch; // number of threads to use for batch processing
-
- // ref: https://github.com/ggerganov/llama.cpp/pull/2054
- float rope_freq_base; // RoPE base frequency, 0 = from model
- float rope_freq_scale; // RoPE frequency scaling factor, 0 = from model
-
- // Keep the booleans together to avoid misalignment during copy-by-value.
- bool mul_mat_q; // if true, use experimental mul_mat_q kernels
- bool f16_kv; // use fp16 for KV cache, fp32 otherwise
- bool logits_all; // the llama_eval() call computes all logits, not just the last one
- bool embedding; // embedding mode only
- };
-
- // model quantization parameters
- typedef struct llama_model_quantize_params {
- int nthread; // number of threads to use for quantizing, if <=0 will use std::thread::hardware_concurrency()
- enum llama_ftype ftype; // quantize to this llama_ftype
- bool allow_requantize; // allow quantizing non-f32/f16 tensors
- bool quantize_output_tensor; // quantize output.weight
- bool only_copy; // only copy tensors - ftype, allow_requantize and quantize_output_tensor are ignored
- } llama_model_quantize_params;
-
- // grammar types
- struct llama_grammar;
-
- // grammar element type
- enum llama_gretype {
- // end of rule definition
- LLAMA_GRETYPE_END = 0,
-
- // start of alternate definition for rule
- LLAMA_GRETYPE_ALT = 1,
-
- // non-terminal element: reference to rule
- LLAMA_GRETYPE_RULE_REF = 2,
-
- // terminal element: character (code point)
- LLAMA_GRETYPE_CHAR = 3,
-
- // inverse char(s) ([^a], [^a-b] [^abc])
- LLAMA_GRETYPE_CHAR_NOT = 4,
-
- // modifies a preceding LLAMA_GRETYPE_CHAR or LLAMA_GRETYPE_CHAR_ALT to
- // be an inclusive range ([a-z])
- LLAMA_GRETYPE_CHAR_RNG_UPPER = 5,
-
- // modifies a preceding LLAMA_GRETYPE_CHAR or
- // LLAMA_GRETYPE_CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA])
- LLAMA_GRETYPE_CHAR_ALT = 6,
- };
-
- typedef struct llama_grammar_element {
- enum llama_gretype type;
- uint32_t value; // Unicode code point or rule ID
- } llama_grammar_element;
-
- // performance timing information
- struct llama_timings {
- double t_start_ms;
- double t_end_ms;
- double t_load_ms;
- double t_sample_ms;
- double t_p_eval_ms;
- double t_eval_ms;
-
- int32_t n_sample;
- int32_t n_p_eval;
- int32_t n_eval;
- };
-
- // Helpers for getting default parameters
- LLAMA_API struct llama_model_params llama_model_default_params(void);
- LLAMA_API struct llama_context_params llama_context_default_params(void);
- LLAMA_API struct llama_model_quantize_params llama_model_quantize_default_params(void);
-
- // Initialize the llama + ggml backend
- // If numa is true, use NUMA optimizations
- // Call once at the start of the program
- LLAMA_API void llama_backend_init(bool numa);
-
- // Call once at the end of the program - currently only used for MPI
- LLAMA_API void llama_backend_free(void);
-
- LLAMA_API struct llama_model * llama_load_model_from_file(
- const char * path_model,
- struct llama_model_params params);
-
- LLAMA_API void llama_free_model(struct llama_model * model);
-
- LLAMA_API struct llama_context * llama_new_context_with_model(
- struct llama_model * model,
- struct llama_context_params params);
-
- // Frees all allocated memory
- LLAMA_API void llama_free(struct llama_context * ctx);
-
- LLAMA_API int64_t llama_time_us(void);
-
- LLAMA_API int llama_max_devices (void);
- LLAMA_API bool llama_mmap_supported (void);
- LLAMA_API bool llama_mlock_supported(void);
-
- LLAMA_API const struct llama_model * llama_get_model(const struct llama_context * ctx);
-
- LLAMA_API int llama_n_ctx (const struct llama_context * ctx);
-
- LLAMA_API enum llama_vocab_type llama_vocab_type(const struct llama_model * model);
-
- LLAMA_API int llama_n_vocab (const struct llama_model * model);
- LLAMA_API int llama_n_ctx_train(const struct llama_model * model);
- LLAMA_API int llama_n_embd (const struct llama_model * model);
-
- // Get the model's RoPE frequency scaling factor
- LLAMA_API float llama_rope_freq_scale_train(const struct llama_model * model);
-
- // Get a string describing the model type
- LLAMA_API int llama_model_desc(const struct llama_model * model, char * buf, size_t buf_size);
-
- // Returns the total size of all the tensors in the model in bytes
- LLAMA_API uint64_t llama_model_size(const struct llama_model * model);
-
- // Returns the total number of parameters in the model
- LLAMA_API uint64_t llama_model_n_params(const struct llama_model * model);
-
- // Get a llama model tensor
- LLAMA_API struct ggml_tensor * llama_get_model_tensor(struct llama_model * model, const char * name);
-
- // Returns 0 on success
- LLAMA_API int llama_model_quantize(
- const char * fname_inp,
- const char * fname_out,
- const llama_model_quantize_params * params);
-
- // Apply a LoRA adapter to a loaded model
- // path_base_model is the path to a higher quality model to use as a base for
- // the layers modified by the adapter. Can be NULL to use the current loaded model.
- // The model needs to be reloaded before applying a new adapter, otherwise the adapter
- // will be applied on top of the previous one
- // Returns 0 on success
- LLAMA_API DEPRECATED(int llama_apply_lora_from_file(
- struct llama_context * ctx,
- const char * path_lora,
- float scale,
- const char * path_base_model,
- int n_threads),
- "use llama_model_apply_lora_from_file instead");
-
- LLAMA_API int llama_model_apply_lora_from_file(
- const struct llama_model * model,
- const char * path_lora,
- float scale,
- const char * path_base_model,
- int n_threads);
-
- //
- // KV cache
- //
-
- // Returns the number of tokens in the KV cache
- LLAMA_API DEPRECATED(int llama_get_kv_cache_token_count(const struct llama_context * ctx),
- "avoid using this, it will be removed in the future, instead - count the tokens in user code");
-
- // Removes all token data for cells in [c0, c1)
- // c0 < 0 : [0, c1]
- // c1 < 0 : [c0, inf)
- LLAMA_API void llama_kv_cache_tokens_rm(
- struct llama_context * ctx,
- int32_t c0,
- int32_t c1);
-
- // Removes all tokens that belong to the specified sequence and have positions in [p0, p1)
- // p0 < 0 : [0, p1]
- // p1 < 0 : [p0, inf)
- LLAMA_API void llama_kv_cache_seq_rm(
- struct llama_context * ctx,
- llama_seq_id seq_id,
- llama_pos p0,
- llama_pos p1);
-
- // Copy all tokens that belong to the specified sequence to another sequence
- // Note that this does not allocate extra KV cache memory - it simply assigns the tokens to the new sequence
- // p0 < 0 : [0, p1]
- // p1 < 0 : [p0, inf)
- LLAMA_API void llama_kv_cache_seq_cp(
- struct llama_context * ctx,
- llama_seq_id seq_id_src,
- llama_seq_id seq_id_dst,
- llama_pos p0,
- llama_pos p1);
-
- // Removes all tokens that do not belong to the specified sequence
- LLAMA_API void llama_kv_cache_seq_keep(
- struct llama_context * ctx,
- llama_seq_id seq_id);
-
- // Adds relative position "delta" to all tokens that belong to the specified sequence and have positions in [p0, p1)
- // If the KV cache is RoPEd, the KV data is updated accordingly
- // p0 < 0 : [0, p1]
- // p1 < 0 : [p0, inf)
- LLAMA_API void llama_kv_cache_seq_shift(
- struct llama_context * ctx,
- llama_seq_id seq_id,
- llama_pos p0,
- llama_pos p1,
- llama_pos delta);
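Several of the KV-cache calls above share the convention that `p0 < 0` means "from position 0" and `p1 < 0` means "unbounded on the right". As a small illustrative sketch (not part of the C API):

```python
def resolve_range(p0, p1):
    """Resolve the documented KV-cache range conventions:
    p0 < 0 means 'from position 0', p1 < 0 means 'unbounded on the right'."""
    lo = 0 if p0 < 0 else p0
    hi = float("inf") if p1 < 0 else p1
    return lo, hi
```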
-
- //
- // State / sessions
- //
-
- // Returns the maximum size in bytes of the state (rng, logits, embedding
- // and kv_cache) - will often be smaller after compacting tokens
- LLAMA_API size_t llama_get_state_size(const struct llama_context * ctx);
-
- // Copies the state to the specified destination address.
- // Destination needs to have allocated enough memory.
- // Returns the number of bytes copied
- LLAMA_API size_t llama_copy_state_data(
- struct llama_context * ctx,
- uint8_t * dst);
-
- // Set the state reading from the specified address
- // Returns the number of bytes read
- LLAMA_API size_t llama_set_state_data(
- struct llama_context * ctx,
- uint8_t * src);
-
- // Save/load session file
- LLAMA_API bool llama_load_session_file(
- struct llama_context * ctx,
- const char * path_session,
- llama_token * tokens_out,
- size_t n_token_capacity,
- size_t * n_token_count_out);
-
- LLAMA_API bool llama_save_session_file(
- struct llama_context * ctx,
- const char * path_session,
- const llama_token * tokens,
- size_t n_token_count);
-
- //
- // Decoding
- //
-
- // Run the llama inference to obtain the logits and probabilities for the next token(s).
- // tokens + n_tokens is the provided batch of new tokens to process
- // n_past is the number of tokens to use from previous eval calls
- // Returns 0 on success
- // DEPRECATED: use llama_decode() instead
- LLAMA_API DEPRECATED(int llama_eval(
- struct llama_context * ctx,
- llama_token * tokens,
- int32_t n_tokens,
- int n_past),
- "use llama_decode() instead");
-
- // Same as llama_eval, but use float matrix input directly.
- // DEPRECATED: use llama_decode() instead
- LLAMA_API DEPRECATED(int llama_eval_embd(
- struct llama_context * ctx,
- float * embd,
- int32_t n_tokens,
- int n_past),
- "use llama_decode() instead");
-
- // Return batch for single sequence of tokens starting at pos_0
- //
- // NOTE: this is a helper function to facilitate transition to the new batch API - avoid using it
- //
- LLAMA_API struct llama_batch llama_batch_get_one(
- llama_token * tokens,
- int32_t n_tokens,
- llama_pos pos_0,
- llama_seq_id seq_id);
-
- // Allocates a batch of tokens on the heap
- // The batch has to be freed with llama_batch_free()
- // If embd != 0, llama_batch.embd will be allocated with size of n_tokens * embd * sizeof(float)
- // Otherwise, llama_batch.token will be allocated to store n_tokens llama_token
- // The rest of the llama_batch members are allocated with size n_tokens
- // All members are left uninitialized
- LLAMA_API struct llama_batch llama_batch_init(
- int32_t n_tokens,
- int32_t embd);
-
- // Frees a batch of tokens allocated with llama_batch_init()
- LLAMA_API void llama_batch_free(struct llama_batch batch);
-
- // A positive return value does not mean a fatal error, but rather a warning.
- // 0 - success
- // 1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
- // < 0 - error
- LLAMA_API int llama_decode(
- struct llama_context * ctx,
- struct llama_batch batch);
-
- // Set the number of threads used for decoding
- // n_threads is the number of threads used for generation (single token)
- // n_threads_batch is the number of threads used for prompt and batch processing (multiple tokens)
- LLAMA_API void llama_set_n_threads(struct llama_context * ctx, uint32_t n_threads, uint32_t n_threads_batch);
-
- // Token logits obtained from the last call to llama_eval()
- // The logits for the last token are stored in the last row
- // Logits for which llama_batch.logits[i] == 0 are undefined
- // Rows: n_tokens provided with llama_batch
- // Cols: n_vocab
- LLAMA_API float * llama_get_logits(struct llama_context * ctx);
-
- // Logits for the ith token. Equivalent to:
- // llama_get_logits(ctx) + i*n_vocab
- LLAMA_API float * llama_get_logits_ith(struct llama_context * ctx, int32_t i);
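`llama_get_logits_ith` is documented as plain row arithmetic into the flat `(n_tokens x n_vocab)` logits buffer. The same indexing in Python (illustration only):

```python
def logits_ith(logits_flat, n_vocab, i):
    """Row i of the flat (n_tokens x n_vocab) logits buffer: the same
    arithmetic as llama_get_logits(ctx) + i*n_vocab."""
    return logits_flat[i * n_vocab:(i + 1) * n_vocab]
```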
-
- // Get the embeddings for the input
- // shape: [n_embd] (1-dimensional)
- LLAMA_API float * llama_get_embeddings(struct llama_context * ctx);
-
- //
- // Vocab
- //
-
- LLAMA_API const char * llama_token_get_text(const struct llama_context * ctx, llama_token token);
-
- LLAMA_API float llama_token_get_score(const struct llama_context * ctx, llama_token token);
-
- LLAMA_API enum llama_token_type llama_token_get_type(const struct llama_context * ctx, llama_token token);
-
- // Special tokens
- LLAMA_API llama_token llama_token_bos(const struct llama_context * ctx); // beginning-of-sentence
- LLAMA_API llama_token llama_token_eos(const struct llama_context * ctx); // end-of-sentence
- LLAMA_API llama_token llama_token_nl (const struct llama_context * ctx); // next-line
- // codellama infill tokens
- LLAMA_API llama_token llama_token_prefix(const struct llama_context * ctx); // Beginning of infill prefix
- LLAMA_API llama_token llama_token_middle(const struct llama_context * ctx); // Beginning of infill middle
- LLAMA_API llama_token llama_token_suffix(const struct llama_context * ctx); // Beginning of infill suffix
- LLAMA_API llama_token llama_token_eot (const struct llama_context * ctx); // End of infill middle
-
- //
- // Tokenization
- //
-
- // Convert the provided text into tokens.
- // The tokens pointer must be large enough to hold the resulting tokens.
- // Returns the number of tokens on success, no more than n_max_tokens
- // Returns a negative number on failure - the number of tokens that would have been returned
- LLAMA_API int llama_tokenize(
- const struct llama_model * model,
- const char * text,
- int text_len,
- llama_token * tokens,
- int n_max_tokens,
- bool add_bos);
-
- // Token Id -> Piece.
- // Uses the vocabulary in the provided context.
- // Does not write null terminator to the buffer.
- // User code is responsible to remove the leading whitespace of the first non-BOS token when decoding multiple tokens.
- LLAMA_API int llama_token_to_piece(
- const struct llama_model * model,
- llama_token token,
- char * buf,
- int length);
-
- //
- // Grammar
- //
-
- LLAMA_API struct llama_grammar * llama_grammar_init(
- const llama_grammar_element ** rules,
- size_t n_rules,
- size_t start_rule_index);
-
- LLAMA_API void llama_grammar_free(struct llama_grammar * grammar);
-
- LLAMA_API struct llama_grammar * llama_grammar_copy(const struct llama_grammar * grammar);
-
- //
- // Sampling functions
- //
-
- // Sets the current rng seed.
- LLAMA_API void llama_set_rng_seed(struct llama_context * ctx, uint32_t seed);
-
- /// @details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
- LLAMA_API void llama_sample_repetition_penalty(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- const llama_token * last_tokens,
- size_t last_tokens_size,
- float penalty);
-
- /// @details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
- LLAMA_API void llama_sample_frequency_and_presence_penalties(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- const llama_token * last_tokens,
- size_t last_tokens_size,
- float alpha_frequency,
- float alpha_presence);
-
- /// @details Apply classifier-free guidance to the logits as described in academic paper "Stay on topic with Classifier-Free Guidance" https://arxiv.org/abs/2306.17806
- /// @param candidates A vector of `llama_token_data` containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
- /// @params guidance_ctx A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
- /// @params scale Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
- LLAMA_API void llama_sample_classifier_free_guidance(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- struct llama_context * guidance_ctx,
- float scale);
-
- /// @details Sorts candidate tokens by their logits in descending order and calculate probabilities based on logits.
- LLAMA_API void llama_sample_softmax(
- struct llama_context * ctx,
- llama_token_data_array * candidates);
-
- /// @details Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
- LLAMA_API void llama_sample_top_k(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- int k,
- size_t min_keep);
-
- /// @details Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
- LLAMA_API void llama_sample_top_p(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float p,
- size_t min_keep);
-
- /// @details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
- LLAMA_API void llama_sample_tail_free(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float z,
- size_t min_keep);
-
- /// @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
- LLAMA_API void llama_sample_typical(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float p,
- size_t min_keep);
-
- LLAMA_API void llama_sample_temp(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float temp);
-
- LLAMA_API DEPRECATED(void llama_sample_temperature(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float temp),
- "use llama_sample_temp instead");
-
- /// @details Apply constraints from grammar
- LLAMA_API void llama_sample_grammar(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- const struct llama_grammar * grammar);
-
- /// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
- /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
- /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
- /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
- /// @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm.
- /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
- LLAMA_API llama_token llama_sample_token_mirostat(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float tau,
- float eta,
- int m,
- float * mu);
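The parameter docs above say `mu` starts at `2 * tau` and is updated from the error between target and observed surprisal. A hedged sketch of just that update step (the candidate truncation and `s_hat`/`k` estimation are omitted; the sampled probability is a stand-in value):

```python
import math

# Sketch of the Mirostat mu-update described above, not the full sampler:
# after sampling a token with probability p, the observed surprise is
# -log2(p); mu moves by eta times the error against the target tau.
def mirostat_update(mu, p_sampled, tau, eta):
    observed_surprise = -math.log2(p_sampled)
    error = observed_surprise - tau
    return mu - eta * error

mu = 2 * 5.0                                        # initialized to 2 * tau
mu = mirostat_update(mu, p_sampled=0.25, tau=5.0, eta=0.1)
```

A token less surprising than `tau` pushes `mu` up (allowing more candidates next step), and vice versa.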
-
- /// @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
- /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
- /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
- /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
- /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
- LLAMA_API llama_token llama_sample_token_mirostat_v2(
- struct llama_context * ctx,
- llama_token_data_array * candidates,
- float tau,
- float eta,
- float * mu);
-
- /// @details Selects the token with the highest probability.
- LLAMA_API llama_token llama_sample_token_greedy(
- struct llama_context * ctx,
- llama_token_data_array * candidates);
-
- /// @details Randomly selects a token from the candidates based on their probabilities.
- LLAMA_API llama_token llama_sample_token(
- struct llama_context * ctx,
- llama_token_data_array * candidates);
-
- /// @details Accepts the sampled token into the grammar
- LLAMA_API void llama_grammar_accept_token(
- struct llama_context * ctx,
- struct llama_grammar * grammar,
- llama_token token);
-
- //
- // Beam search
- //
-
- struct llama_beam_view {
- const llama_token * tokens;
-
- size_t n_tokens;
- float p; // Cumulative beam probability (renormalized relative to all beams)
- bool eob; // Callback should set this to true when a beam is at end-of-beam.
- };
-
- // Passed to beam_search_callback function.
- // Whenever 0 < common_prefix_length, this number of tokens should be copied from any of the beams
- // (e.g. beams[0]) as they will be removed (shifted) from all beams in all subsequent callbacks.
- // These pointers are valid only during the synchronous callback, so should not be saved.
- struct llama_beams_state {
- struct llama_beam_view * beam_views;
-
- size_t n_beams; // Number of elements in beam_views[].
- size_t common_prefix_length; // Current max length of prefix tokens shared by all beams.
- bool last_call; // True iff this is the last callback invocation.
- };
-
- // Type of pointer to the beam_search_callback function.
- // void* callback_data is any custom data passed to llama_beam_search, that is subsequently
- // passed back to beam_search_callback. This avoids having to use global variables in the callback.
- typedef void (*llama_beam_search_callback_fn_t)(void * callback_data, struct llama_beams_state);
-
- /// @details Deterministically returns entire sentence constructed by a beam search.
- /// @param ctx Pointer to the llama_context.
- /// @param callback Invoked for each iteration of the beam_search loop, passing in beams_state.
- /// @param callback_data A pointer that is simply passed back to callback.
- /// @param n_beams Number of beams to use.
- /// @param n_past Number of tokens already evaluated.
- /// @param n_predict Maximum number of tokens to predict. EOS may occur earlier.
- LLAMA_API void llama_beam_search(
- struct llama_context * ctx,
- llama_beam_search_callback_fn_t callback,
- void * callback_data,
- size_t n_beams,
- int n_past,
- int n_predict);
-
- // Performance information
- LLAMA_API struct llama_timings llama_get_timings(struct llama_context * ctx);
-
- LLAMA_API void llama_print_timings(struct llama_context * ctx);
- LLAMA_API void llama_reset_timings(struct llama_context * ctx);
-
- // Print system information
- LLAMA_API const char * llama_print_system_info(void);
-
- // Set callback for all future logging events.
- // If this is not called, or NULL is supplied, everything is output on stderr.
- LLAMA_API void llama_log_set(ggml_log_callback log_callback, void * user_data);
-
- LLAMA_API void llama_dump_timing_info_yaml(FILE * stream, const struct llama_context * ctx);
-
-#ifdef __cplusplus
-}
-#endif
-
-// Internal API to be implemented by llama.cpp and used by tests/benchmarks only
-#ifdef LLAMA_API_INTERNAL
-
-#include <vector>
-#include <string>
-
-struct ggml_tensor;
-
-const std::vector<std::pair<std::string, struct ggml_tensor *>>& llama_internal_get_tensor_map(
- struct llama_context * ctx
-);
-
-#endif // LLAMA_API_INTERNAL
-
-#endif // LLAMA_H
diff --git a/spaces/Intel/Stable-Diffusion/app.py b/spaces/Intel/Stable-Diffusion/app.py
deleted file mode 100644
index b4f2942fc3e6becf6f6180285473e210ce7de2d2..0000000000000000000000000000000000000000
--- a/spaces/Intel/Stable-Diffusion/app.py
+++ /dev/null
@@ -1,302 +0,0 @@
-import os
-import gradio as gr
-import numpy as np
-import random
-import torch
-import subprocess
-import time
-import requests
-import json
-import threading
-
-import base64
-from io import BytesIO
-from PIL import Image
-from huggingface_hub import login
-
-
-myip_spr = os.environ["myip_spr"]
-myip_clx = os.environ["myip_clx"]
-myport = os.environ["myport"]
-
-SPR = f"http://{myip_spr}:{myport}"
-CLX = f"http://{myip_clx}:{myport}"
-
-
-print('=='*20)
-print(os.system("hostname -i"))
-print(SPR)
-print(CLX)
-
-prompt_examples_list = [
- ['A cascading waterfall tumbles down moss-covered rocks, surrounded by a lush and vibrant forest.'],
- ['In a serene garden, delicate cherry blossoms fall like pink snowflakes.'],
- ['A breathtaking mountain range towers above a picturesque valley, with a winding river reflecting the surrounding beauty.'],
- ['A serene beach scene with turquoise waters, palm trees swaying in the breeze, and a radiant sunset painting the sky in hues of orange and pink.'],
- ['After the rain, sunlight breaks through the clouds, illuminating the verdant fields.']
- ]
-CN_prompt_examples_list = [
- ['瀑布从长满苔藓的岩石上奔流而下,周围是一片茂密而充满活力的森林。'],
- ['在一个宁静的花园里,精致的樱花像粉色的雪花一样飘落。'],
- ['壮丽的山脉高耸在风景如画的山谷之上,一条蜿蜒的河流映衬着周围的美景。'],
- ['一个宁静的海滩场景,湛蓝的海水,微风中摇曳的棕榈树,夺目的日落将天空染成橙色和粉红色的色调。'],
- ['雨后,阳光穿过云层,照亮了青翠的田野。']
-]
-
-def update_language(value):
- if value == "zh-CN":
- return [gr.update(visible=False), gr.update(visible=True)]
- else:
- return [gr.update(visible=True), gr.update(visible=False)]
-
-def url_requests(url, data):
- resp = requests.post(url, data=json.dumps(data))
- img_str = json.loads(resp.text)["img_str"]
- location = json.loads(resp.text)["ip"]
-
- img_byte = base64.b64decode(img_str)
- img_io = BytesIO(img_byte) # convert image to file-like object
- img = Image.open(img_io) # img is now PIL Image object
-
- return img, location
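`url_requests` rebuilds an image from the base64 `img_str` field of the JSON response. The encode/decode halves of that payload format can be checked with the standard library alone (raw bytes stand in for real JPEG data, no PIL needed):

```python
import base64
from io import BytesIO

# Round-trip sketch of the payload format url_requests expects: the server
# base64-encodes raw image bytes into "img_str", and the client decodes
# them back into a file-like object for Image.open().
payload = b"\x89fake-image-bytes"
img_str = base64.b64encode(payload).decode()    # what the server would send
img_io = BytesIO(base64.b64decode(img_str))     # what the client rebuilds
assert img_io.read() == payload
```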
-
-def img2img_generate(url, source_img, prompt, steps=25, strength=0.75, seed=42, guidance_scale=7.5, hidden=""):
-
- if hidden != os.environ["front_token"]:
- return None
-
- print('=*'*20)
- print(type(source_img))
- print("prompt: ", prompt)
- buffered = BytesIO()
- source_img.save(buffered, format="JPEG")
- img_b64 = base64.b64encode(buffered.getvalue())
-
- data = {"source_img": img_b64.decode(), "prompt": prompt, "steps": steps,
- "guidance_scale": guidance_scale, "seed": seed, "strength": strength,
- "token": os.environ["access_token"]}
-
- start_time = time.time()
- img, location = url_requests(url, data)
- print("*="*20)
- print("location: ", location)
- print("cost: ", time.time() - start_time)
-
- return img
-
-def toggle_content():
-    if toggle_content.collapsed:
-        toggle_content.collapsed = False
-        return "Content expanded"
-    else:
-        toggle_content.collapsed = True
-        return "Content collapsed"
-toggle_content.collapsed = True  # initialize state; without this the first call raises AttributeError
-
-def txt2img_example_input(value):
- print('6/12/2023', value)
- return value
-
-def txt2img_generate(url, prompt, steps=25, seed=42, guidance_scale=7.5, hidden=""):
-
- if hidden != os.environ["front_token"]:
- return None
-
- print("prompt: ", prompt)
- print("steps: ", steps)
- print("url: ", url)
- data = {"prompt": prompt,
- "steps": steps, "guidance_scale": guidance_scale, "seed": seed,
- "token": os.environ["access_token"]}
- start_time = time.time()
- img, location = url_requests(url, data)
-
- print("*="*20)
- print("location: ", location)
- print("cost: ", time.time() - start_time)
-
- return img
-
-title = """
-# Stable Diffusion Inference Acceleration Comparison
-"""
-CN_title = """
-# Stable Diffusion 推理加速比较
-"""
-
-subtitle = """
-# between 4th Gen and 3rd Gen Intel Xeon Scalable Processor
-"""
-CN_subtitle = """
-## 第四代和第三代英特尔至强可扩展处理器
-"""
-
-md = """
-Have fun, try your own prompts, and see up to a 9x performance acceleration on the new 4th Gen Intel Xeon using **Intel Extension for Transformers**. You may also want to try creating your own Stable Diffusion with few-shot fine-tuning. Please refer to our blog and code, available in **Intel Neural Compressor** and **Hugging Face Diffusers**.
-"""
-
-CN_md = """
-请尽情体验这些功能!利用**Intel Extension for Transformers** 和新一代英特尔至强可扩展处理器可获得高达9倍的性能提升。您还可以使用少样本微调的方式来创建属于自己的稳定扩散模型。请参考我们的博客 和代码 ,这些资源可在**Intel Neural Compressor** 和**Hugging Face Diffusers** 的GitHub上找到。
-"""
-
-legal = """
-Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
-© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
-"""
-
-CN_legal = """
-性能因使用、配置和其他因素而异。想要了解更多信息,请访问www.Intel.com/PerformanceIndex。
-- 性能结果基于所示配置的测试,可能不反映所有公开可用的更新。
-- 有关配置详细信息,请参考备份文件。没有任何产品/组件绝对安全。
-- © 英特尔公司。英特尔、英特尔标识和其他英特尔商标是英特尔公司或其子公司的商标。其他名称和品牌可能归他人所有。"""
-
-details = """
-- 4th Gen Intel Xeon Scalable Processor Inference. Test by Intel on 10/06/2023. Ubuntu 22.04.1 LTS, Intel Extension for Transformers(1.1.dev154+g448cc17e), Transformers 4.28.1, Diffusers 0.12.1, oneDNN v2.7.4.
-- 3rd Gen Intel Xeon Scalable Processor Inference: Test by Intel on 10/06/2023. Ubuntu 22.04.1 LTS, PyTorch Nightly build (2.0.0.dev20230105+cpu), Transformers 4.25.1, Diffusers 0.11.1, oneDNN v2.7.2.
-"""
-
-CN_details = """
-- 英特尔第四代至强可扩展处理器推理。由英特尔于2023年6月10日测试。Ubuntu 22.04.1 LTS,英特尔Transformer扩展(1.1.dev154+g448cc17e),Transformer 4.28.1,Diffusers 0.12.1,oneDNN v2.7.4。
-- 英特尔第三代至强可扩展处理器推理:由英特尔于2023年6月10日测试。Ubuntu 22.04.1 LTS,PyTorch Nightly构建(2.0.0.dev20230105+cpu),Transformer 4.25.1,Diffusers 0.11.1,oneDNN v2.7.2。
-"""
-
-# warining = """
-# ⚠ Upgrading, service temporarily paused.
-# """
-
-css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
- .duplicate-button img{margin: 0}
- #img_1, #img_2, #img_3, #img_4{height:15rem}
- #mdStyle{font-size: 0.7rem}
- #titleCenter {text-align:center}
-'''
-
-random_seed = random.randint(0, 2147483647)
-
-with gr.Blocks(css=css) as demo:
- # gr.Markdown(warining, elem_id="warning")
- with gr.Box(visible=False) as zh:
- gr.Markdown(CN_title, elem_id='titleCenter')
- gr.Markdown(CN_subtitle, elem_id='titleCenter')
- gr.Markdown(CN_md)
-
- with gr.Tab("文字转图片"):
- with gr.Row() as text_to_image:
- with gr.Column():
- prompt = gr.inputs.Textbox(label='提示词', default='a photo of an astronaut riding a horse on mars')
- inference_steps = gr.inputs.Slider(1, 100, label='采样步数 - 步数越长质量越高 ', default=20, step=1)
- seed = gr.inputs.Slider(0, 2147483647, label='随机种子', default=random_seed, step=1)
- guidance_scale = gr.inputs.Slider(1.0, 20.0, label='引导程度 - 提示词对结果的影响程度', default=7.5, step=0.1)
- hidden = gr.Textbox(label='hidden', value=os.environ["front_token"], visible=False)
- txt2img_button = gr.Button("生成图片", variant="primary")
- url_SPR_txt = gr.Textbox(label='url_SPR_txt', value=SPR, visible=False)
- url_CLX_txt = gr.Textbox(label='url_CLX_txt', value=CLX, visible=False)
-
- with gr.Column():
- result_image_1 = gr.Image(label="第四代英特尔至强可扩展处理器 (SPR)", elem_id="img_1")
- result_image_2 = gr.Image(label="第三代英特尔至强可扩展处理器 (ICX)", elem_id="img_2")
-
- txt2img_input = gr.Textbox(visible=False)
-
- gr.Examples(
- examples=prompt_examples_list,
- inputs=txt2img_input,
- outputs=prompt,
- fn=txt2img_example_input,
- cache_examples=True,
- label="示例"
- )
-
- with gr.Tab("图片转图片"):
- with gr.Row() as image_to_image:
- with gr.Column():
- source_img = gr.Image(source="upload", type="pil", value="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
- prompt_2 = gr.inputs.Textbox(label='提示词', default='A fantasy landscape, trending on artstation')
- inference_steps_2 = gr.inputs.Slider(1, 100, label='采样步数 - 步数越长质量越高', default=20, step=1)
- seed_2 = gr.inputs.Slider(0, 2147483647, label='随机种子', default=random_seed, step=1)
- guidance_scale_2 = gr.inputs.Slider(1.0, 20.0, label='引导程度 - 提示词对结果的影响程度', default=7.5, step=0.1)
- strength = gr.inputs.Slider(0.0, 1.0, label='强度级别 - 强度增加时噪声也变大', default=0.75, step=0.01)
- hidden_2 = gr.Textbox(label='hidden', value=os.environ["front_token"], visible=False)
- img2img_button = gr.Button("生成图片", variant="primary")
- url_SPR = gr.Textbox(label='url_SPR', value=SPR, visible=False)
- url_CLX = gr.Textbox(label='url_CLX', value=CLX, visible=False)
-
- with gr.Column():
- result_image_3 = gr.Image(label="第四代英特尔至强可扩展处理器 (SPR)", elem_id="img_3")
- result_image_4 = gr.Image(label="第三代英特尔至强可扩展处理器 (ICX)", elem_id="img_4")
- with gr.Accordion("附加信息" , open=False) as area_crazy_fn:
- gr.Markdown("**测试配置详情:**", elem_id='mdStyle')
- gr.Markdown(CN_details, elem_id='mdStyle')
-
- gr.Markdown("**注意事项和免责声明:**", elem_id='mdStyle')
- gr.Markdown(CN_legal, elem_id='mdStyle')
-
- with gr.Box(visible=False) as Eng:
- gr.Markdown(title)
- gr.Markdown(subtitle)
- gr.Markdown(md)
-
- with gr.Tab("Text-to-Image"):
- with gr.Row() as text_to_image:
- with gr.Column():
- prompt = gr.inputs.Textbox(label='Prompt', default='a photo of an astronaut riding a horse on mars')
- inference_steps = gr.inputs.Slider(1, 100, label='Inference Steps - increase the steps for better quality (e.g., avoiding black image) ', default=20, step=1)
- seed = gr.inputs.Slider(0, 2147483647, label='Seed', default=random_seed, step=1)
- guidance_scale = gr.inputs.Slider(1.0, 20.0, label='Guidance Scale - how much the prompt will influence the results', default=7.5, step=0.1)
- hidden = gr.Textbox(label='hidden', value=os.environ["front_token"], visible=False)
- txt2img_button = gr.Button("Generate Image", variant="primary")
- url_SPR_txt = gr.Textbox(label='url_SPR_txt', value=SPR, visible=False)
- url_CLX_txt = gr.Textbox(label='url_CLX_txt', value=CLX, visible=False)
-
- with gr.Column():
- result_image_1 = gr.Image(label="4th Gen Intel Xeon Scalable Processors (SPR)", elem_id="img_1")
- result_image_2 = gr.Image(label="3rd Gen Intel Xeon Scalable Processors (ICX)", elem_id="img_2")
-
- txt2img_input = gr.Textbox(visible=False)
-
- gr.Examples(
- examples=prompt_examples_list,
- inputs=txt2img_input,
- outputs=prompt,
- fn=txt2img_example_input,
- cache_examples=True,
- )
-
- with gr.Tab("Image-to-Image text-guided generation"):
- with gr.Row() as image_to_image:
- with gr.Column():
- source_img = gr.Image(source="upload", type="pil", value="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
- prompt_2 = gr.inputs.Textbox(label='Prompt', default='A fantasy landscape, trending on artstation')
- inference_steps_2 = gr.inputs.Slider(1, 100, label='Inference Steps - increase the steps for better quality (e.g., avoiding black image) ', default=20, step=1)
- seed_2 = gr.inputs.Slider(0, 2147483647, label='Seed', default=random_seed, step=1)
- guidance_scale_2 = gr.inputs.Slider(1.0, 20.0, label='Guidance Scale - how much the prompt will influence the results', default=7.5, step=0.1)
- strength = gr.inputs.Slider(0.0, 1.0, label='Strength - adding more noise to it the larger the strength', default=0.75, step=0.01)
- hidden_2 = gr.Textbox(label='hidden', value=os.environ["front_token"], visible=False)
- img2img_button = gr.Button("Generate Image", variant="primary")
- url_SPR = gr.Textbox(label='url_SPR', value=SPR, visible=False)
- url_CLX = gr.Textbox(label='url_CLX', value=CLX, visible=False)
-
- with gr.Column():
- result_image_3 = gr.Image(label="4th Gen Intel Xeon Scalable Processors (SPR)", elem_id="img_3")
- result_image_4 = gr.Image(label="3rd Gen Intel Xeon Scalable Processors (ICX)", elem_id="img_4")
- with gr.Accordion("Additional Info", open=False) as area_crazy_fn:
- gr.Markdown("**Test Configuration Details:**", elem_id='mdStyle')
- gr.Markdown(details, elem_id='mdStyle')
-
- gr.Markdown("**Notices and Disclaimers:**", elem_id='mdStyle')
- gr.Markdown(legal, elem_id='mdStyle')
-
-
- txt2img_button.click(fn=txt2img_generate, inputs=[url_SPR_txt, prompt, inference_steps, seed, guidance_scale, hidden], outputs=result_image_1, queue=False)
- txt2img_button.click(fn=txt2img_generate, inputs=[url_CLX_txt, prompt, inference_steps, seed, guidance_scale, hidden], outputs=result_image_2, queue=False)
- img2img_button.click(fn=img2img_generate, inputs=[url_SPR, source_img, prompt_2, inference_steps_2, strength, seed_2, guidance_scale_2, hidden_2], outputs=result_image_3, queue=False)
- img2img_button.click(fn=img2img_generate, inputs=[url_CLX, source_img, prompt_2, inference_steps_2, strength, seed_2, guidance_scale_2, hidden_2], outputs=result_image_4, queue=False)
-
- dt = gr.Textbox(label="Current language", visible=False)
- dt.change(update_language, inputs=dt, outputs=[Eng, zh])
- demo.load(None, inputs=None, outputs=dt, _js="() => navigator.language")
-
-
-demo.queue(default_enabled=False, api_open=False, max_size=5).launch(debug=True, show_api=False)
\ No newline at end of file
diff --git a/spaces/ItsJayQz/Roy_PopArt_Diffusion/README.md b/spaces/ItsJayQz/Roy_PopArt_Diffusion/README.md
deleted file mode 100644
index 8510bf6dba4b9e2f95b94adc545b7c18a8cd0334..0000000000000000000000000000000000000000
--- a/spaces/ItsJayQz/Roy_PopArt_Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Roy PopArt Diffusion
-emoji: 🦀
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/memory/weaviate.py b/spaces/Jamkonams/AutoGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
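The comment above notes that Weaviate capitalizes index names before creating the class. The rule is easy to verify standalone; note why plain `str.capitalize()` is only safe for single characters:

```python
# Standalone copy of the format_classname rule: Weaviate class names must
# start with an uppercase letter, but only the first character may change.
# str.capitalize() would also lowercase the rest of the string, so longer
# names are handled by slicing instead.
def format_classname(index):
    if len(index) == 1:
        return index.capitalize()
    return index[0].capitalize() + index[1:]

format_classname("auto_gpt")   # "Auto_gpt"
format_classname("myIndex")    # "MyIndex" -- inner capitals preserved
```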
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
diff --git a/spaces/JavierIA/gccopen/utils/metrics.py b/spaces/JavierIA/gccopen/utils/metrics.py
deleted file mode 100644
index 666b8c7ec1c0a488eab1b4e7f2f0474973589525..0000000000000000000000000000000000000000
--- a/spaces/JavierIA/gccopen/utils/metrics.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Model validation metrics
-
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from . import general
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
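`fitness` reduces each metrics row `[P, R, mAP@0.5, mAP@0.5:0.95]` to one scalar, with mAP@0.5:0.95 dominating at weight 0.9. A numpy-free sketch of the same weighting for a single row:

```python
# Pure-Python sketch of the fitness() weighting above: precision and
# recall get zero weight, mAP@0.5 gets 0.1, and mAP@0.5:0.95 gets 0.9.
def fitness_row(row, w=(0.0, 0.0, 0.1, 0.9)):
    return sum(v * wi for v, wi in zip(row, w))

fitness_row([0.8, 0.7, 0.6, 0.5])   # 0.1*0.6 + 0.9*0.5 = 0.51
```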
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
- nc = unique_classes.shape[0] # number of classes, number of detections
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- i = f1.mean(0).argmax() # max F1 index
- return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
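`compute_ap` has two moving parts: the precision envelope (a running maximum from the right, via the double-flip of `np.maximum.accumulate`) and the area integration. A numpy-free sketch of the envelope and the `'continuous'` integration branch:

```python
# Numpy-free sketch of compute_ap's two steps: (1) the precision envelope
# (each point raised to the max of everything to its right), (2) the
# 'continuous' area under the curve, accumulated only where recall changes.
def precision_envelope(precision):
    out = list(precision)
    for i in range(len(out) - 2, -1, -1):   # sweep right-to-left
        out[i] = max(out[i], out[i + 1])
    return out

def ap_continuous(recall, precision):
    mpre = precision_envelope(precision)
    return sum((recall[i + 1] - recall[i]) * mpre[i + 1]
               for i in range(len(recall) - 1))
```

The envelope is what makes AP insensitive to local precision dips: only the best precision achievable at or beyond each recall level counts.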
-
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(np.int16)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[gc, detection_classes[m1[j]]] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
- def matrix(self):
- return self.matrix
-
- def plot(self, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
- labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
- sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
- xticklabels=list(names) + ['background FP'] if labels else "auto",
- yticklabels=list(names) + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- except Exception:
- pass # plotting is optional; ignore failures (e.g. seaborn missing)
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
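`process_batch` above gates matches on `general.box_iou` before filling the matrix. As a reference for what that call computes, here is a minimal single-pair IoU sketch; the standalone `box_iou` below is an illustrative assumption, not the repo's batched implementation:

```python
def box_iou(box1, box2):
    # Intersection-over-union of two boxes in (x1, y1, x2, y2) format
    xa, ya = max(box1[0], box2[0]), max(box1[1], box2[1])
    xb, yb = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

# A detection covering the bottom half of a label overlaps it with IoU 0.5,
# which passes the default iou_thres=0.45 and would count as a match
iou = box_iou((0, 0, 10, 10), (0, 0, 10, 5))
print(round(iou, 2))  # → 0.5
```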
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
diff --git a/spaces/JenkinsGage/WritingHelper/README.md b/spaces/JenkinsGage/WritingHelper/README.md
deleted file mode 100644
index a3e8bafd7c98e864c479ad48386a8a6626ba9dd0..0000000000000000000000000000000000000000
--- a/spaces/JenkinsGage/WritingHelper/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: WritingHelper
-emoji: 🚀
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Junity/TokaiTeio-SVC/vdecoder/hifigan/utils.py b/spaces/Junity/TokaiTeio-SVC/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/Junity/TokaiTeio-SVC/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
- cp_list = sorted(cp_list) # sort by iteration
- if len(cp_list) > n_models: # if more than n_models checkpoints are found
- for cp in cp_list[:-n_models]: # delete all but the latest n_models
- open(cp, 'w').close() # empty the file contents first
- os.unlink(cp) # delete the file (moved to trash when using Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
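The `get_padding` helper above encodes the usual "same" padding rule for stride-1 dilated convolutions, pad = (kernel_size − 1) · dilation / 2. A quick standalone check:

```python
def get_padding(kernel_size, dilation=1):
    # "Same" padding for a stride-1 conv: output length == input length
    return int((kernel_size * dilation - dilation) / 2)

print(get_padding(3))     # kernel 3, dilation 1 -> pad 1
print(get_padding(7, 3))  # kernel 7, dilation 3 -> pad 9
```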
diff --git a/spaces/K00B404/langchain-llama2-7b-chat-uncensored-ggml/app.py b/spaces/K00B404/langchain-llama2-7b-chat-uncensored-ggml/app.py
deleted file mode 100644
index cd31257a825ae4a198b723a192eed8e6357631b3..0000000000000000000000000000000000000000
--- a/spaces/K00B404/langchain-llama2-7b-chat-uncensored-ggml/app.py
+++ /dev/null
@@ -1,554 +0,0 @@
-"""Run codes."""
-# pylint: disable=line-too-long, broad-exception-caught, invalid-name, missing-function-docstring, too-many-instance-attributes, missing-class-docstring
-# ruff: noqa: E501
-import gc
-import os
-import platform
-import random
-import time
-from collections import deque
-from pathlib import Path
-from threading import Thread
-from typing import Any, Dict, List, Union
-
-# from types import SimpleNamespace
-import gradio as gr
-import psutil
-from about_time import about_time
-from ctransformers import Config
-from dl_hf_model import dl_hf_model
-from langchain.callbacks.base import BaseCallbackHandler
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-from langchain.chains import ConversationChain
-from langchain.chains.conversation.memory import ConversationBufferWindowMemory
-
-# from ctransformers import AutoModelForCausalLM
-from langchain.llms import CTransformers
-from langchain.prompts import PromptTemplate
-from langchain.schema import LLMResult
-from loguru import logger
-
-deq = deque()
-sig_end = object() # signals the processing is done
-
-# from langchain.llms import OpenAI
-
-filename_list = [
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q2_K.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_L.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_M.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_S.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_M.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_S.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_0.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_K_M.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_K_S.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q6_K.bin",
- "Wizard-Vicuna-7B-Uncensored.ggmlv3.q8_0.bin",
-]
-
-URL = "https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML/raw/main/Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_M.bin" # 4.05G
-
-url = "https://huggingface.co/savvamadar/ggml-gpt4all-j-v1.3-groovy/blob/main/ggml-gpt4all-j-v1.3-groovy.bin"
-url = "https://huggingface.co/TheBloke/Llama-2-13B-GGML/blob/main/llama-2-13b.ggmlv3.q4_K_S.bin" # 7.37G
-# url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.bin"
-url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.bin" # 6.93G
-# url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.binhttps://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q4_K_M.bin" # 7.87G
-
-url = "https://huggingface.co/localmodels/Llama-2-13B-Chat-ggml/blob/main/llama-2-13b-chat.ggmlv3.q4_K_S.bin" # 7.37G
-
-_ = (
- "golay" in platform.node()
- or "okteto" in platform.node()
- or Path("/kaggle").exists()
- # or psutil.cpu_count(logical=False) < 4
- or 1 # run 7b in hf
-)
-
-if _:
- # url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q2_K.bin"
- url = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q2_K.bin" # 2.87G
- url = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q4_K_M.bin" # 2.87G
- url = "https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML/blob/main/llama2_7b_chat_uncensored.ggmlv3.q4_K_M.bin" # 4.08G
-
-
-prompt_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
-### Instruction: {user_prompt}
-
-### Response:
-"""
-
-prompt_template = """System: You are a helpful,
-respectful and honest assistant. Always answer as
-helpfully as possible, while being safe. Your answers
-should not include any harmful, unethical, racist,
-sexist, toxic, dangerous, or illegal content. Please
-ensure that your responses are socially unbiased and
-positive in nature. If a question does not make any
-sense, or is not factually coherent, explain why instead
-of answering something not correct. If you don't know
-the answer to a question, please don't share false
-information.
-User: {prompt}
-Assistant: """
-
-prompt_template = """System: You are a helpful assistant.
-User: {prompt}
-Assistant: """
-
-prompt_template = """Question: {question}
-Answer: Let's work this out in a step by step way to be sure we have the right answer."""
-
-prompt_template = """[INST] <>
-You are a helpful, respectful and honest assistant. Always answer as helpfully as possible assistant. Think step by step.
-<>
-
-What NFL team won the Super Bowl in the year Justin Bieber was born?
-[/INST]"""
-
-prompt_template = """[INST] <>
-You are an unhelpful assistant. Always answer as helpfully as possible. Think step by step. < >
-
-{question} [/INST]
-"""
-
-prompt_template = """[INST] <>
-You are a helpful assistant.
-< >
-
-{question} [/INST]
-"""
-
-prompt_template = """### HUMAN:
-{question}
-
-### RESPONSE:"""
-
-prompt_template = """### HUMAN:
-You are a helpful assistant. Think step by step.
-{history}
-{input}
-### RESPONSE:"""
-
-prompt_template = """You are a helpful assistant. Let's think step by step.
-{history}
-### HUMAN:
-{input}
-### RESPONSE:"""
-
-# PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)
-
-human_prefix = "### HUMAN"
-ai_prefix = "### RESPONSE"
-stop = [f"{human_prefix}:"]
-
-_ = [elm for elm in prompt_template.splitlines() if elm.strip()]
-stop_string = [elm.split(":")[0] + ":" for elm in _][-2]
-
-# logger.debug(f"{stop_string=} not used")
-
-os.environ["TZ"] = "Asia/Shanghai"
-try:
- time.tzset() # type: ignore # pylint: disable=no-member
-except Exception:
- # Windows
- logger.warning("Windows, cant run time.tzset()")
-
-
-class DequeCallbackHandler(BaseCallbackHandler):
- """Mediate gradio and stream output."""
-
- def __init__(self, deq_: deque):
- """Init deque for FIFO, may need to upgrade to queue.Queue or queue.SimpleQueue."""
- self.q = deq_
-
- # def on_chat_model_start(self): self.q.clear()
-
- def on_llm_start(
- self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
- ) -> None:
- """Run when LLM starts running. Clean the queue."""
- self.q.clear()
-
- def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
- """Run on new LLM token. Only available when streaming is enabled."""
- self.q.append(token)
-
- def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
- """Run when LLM ends running."""
- self.q.append(sig_end)
-
- def on_llm_error(
- self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
- ) -> None:
- """Run when LLM errors."""
- self.q.append(sig_end)
-
-
-_ = psutil.cpu_count(logical=False) # may return None
-cpu_count: int = max(int(_) - 1, 1) if _ else 1
-logger.debug(f"{cpu_count=}")
-
-LLM = None
-gc.collect()
-
-try:
- model_loc, file_size = dl_hf_model(url)
-except Exception as exc_:
- logger.error(exc_)
- raise SystemExit(1) from exc_
-
-config = Config()
-# Config(top_k=40, top_p=0.95, temperature=0.8, repetition_penalty=1.1, last_n_tokens=64, seed=-1, batch_size=8, threads=-1, max_new_tokens=256, stop=None, stream=False, reset=True, context_length=-1, gpu_layers=0)
-config.stream = True
-config.stop = stop
-config.threads = cpu_count
-
-deqcb = DequeCallbackHandler(deq)
-
-# LLM = AutoModelForCausalLM.from_pretrained(
-LLM = CTransformers(
- model=model_loc,
- model_type="llama",
- callbacks=[StreamingStdOutCallbackHandler(), deqcb],
- # config=config,
- **vars(config),
-)
-
-logger.info(f"done load llm {model_loc=} {file_size=}G")
-
-prompt = PromptTemplate(
- input_variables=["history", "input"],
- output_parser=None,
- partial_variables={},
- template=prompt_template,
- template_format="f-string",
- validate_template=True,
-)
-
-memory = ConversationBufferWindowMemory(
- human_prefix=human_prefix,
- ai_prefix=ai_prefix,
-) # default k=5
-
-conversation = ConversationChain(
- llm=LLM,
- prompt=prompt,
- memory=memory,
- verbose=True,
-)
-logger.debug(f"{conversation.prompt.template=}") # type: ignore
-
-# for api access ===
-config = Config()
-# Config(top_k=40, top_p=0.95, temperature=0.8, repetition_penalty=1.1, last_n_tokens=64, seed=-1, batch_size=8, threads=-1, max_new_tokens=256, stop=None, stream=False, reset=True, context_length=-1, gpu_layers=0)
-config.stop = stop
-config.threads = cpu_count
-
-try:
- LLM_api = CTransformers(
- model=model_loc,
- model_type="llama",
- # callbacks=[StreamingStdOutCallbackHandler(), deqcb],
- callbacks=[StreamingStdOutCallbackHandler()],
- **vars(config),
- )
- conversation_api = ConversationChain(
- llm=LLM_api, # need a separate LLM, or else deq may be messed up
- prompt=prompt,
- verbose=True,
- )
-except Exception as exc_:
- logger.error(exc_)
- conversation_api = None
- logger.warning("Not able to instantiate conversation_api, api will not work")
-
-# conversation.predict(input="Hello, my name is Andrea")
-
-
-def user(user_message, history):
- # return user_message, history + [[user_message, None]]
- history.append([user_message, None])
- return user_message, history # keep user_message
-
-
-def user1(user_message, history):
- # return user_message, history + [[user_message, None]]
- history.append([user_message, None])
- return "", history # clear user_message
-
-
-def bot_(history):
- user_message = history[-1][0]
- resp = random.choice(["How are you?", "I love you", "I'm very hungry"])
- bot_message = user_message + ": " + resp
- history[-1][1] = ""
- for character in bot_message:
- history[-1][1] += character
- time.sleep(0.02)
- yield history
-
- history[-1][1] = resp
- yield history
-
-
-def bot(history):
- user_message = history[-1][0]
- response = []
-
- logger.debug(f"{user_message=}")
-
- # conversation.predict(input="What's my name?")
- thr = Thread(target=conversation.predict, kwargs={"input": user_message})
- thr.start()
-
- # process deq
- response = []
- flag = 1
- then = time.time()
- prefix = "" # to please pyright
- with about_time() as atime: # type: ignore
- while True:
- if deq:
- if flag:
- prefix = f"({time.time() - then:.2f}s) "
- flag = 0
- _ = deq.popleft()
- if _ is sig_end:
- break
- # print(_, end='')
- response.append(_)
- history[-1][1] = prefix + "".join(response).strip()
- yield history
- else:
- time.sleep(0.01)
- _ = (
- f"(time elapsed: {atime.duration_human}, " # type: ignore
- f"{atime.duration/len(''.join(response)):.2f}s/char)" # type: ignore
- )
-
- history[-1][1] = "".join(response) + f"\n{_}"
- yield history
-
-
-def predict_api(user_prompt):
- if conversation_api is None:
- return "conversation_api is None, probably due to insufficient memory, api not usable"
-
- logger.debug(f"api: {user_prompt=}")
- try:
- _ = """
- response = generate(
- prompt,
- config=config,
- )
- # """
- response = conversation_api.predict(input=user_prompt)
- logger.debug(f"api: {response=}")
- except Exception as exc:
- logger.error(exc)
- response = f"{exc=}"
- # bot = {"inputs": [response]}
- # bot = [(prompt, response)]
-
- return response.strip()
-
-
-css = """
- .importantButton {
- background: linear-gradient(45deg, #7e0570,#5d1c99, #6e00ff) !important;
- border: none !important;
- }
- .importantButton:hover {
- background: linear-gradient(45deg, #ff00e0,#8500ff, #6e00ff) !important;
- border: none !important;
- }
- .disclaimer {font-variant-caps: all-small-caps; font-size: xx-small;}
- .xsmall {font-size: x-small;}
-"""
-etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """
-examples_list = [
- ["Hello I am mike."],
- ["What's my name?"],
- ["What NFL team won the Super Bowl in the year Justin Bieber was born?"],
- [
- "What NFL team won the Super Bowl in the year Justin Bieber was born? Think step by step."
- ],
- ["When was Justin Bieber born?"],
- ["What NFL team won the Super Bowl in 1994?"],
- ["How to pick a lock? Provide detailed steps."],
- [
- "If it takes 10 hours to dry 10 clothes, assuming all the clothes are hanged together at the same time for drying , then how long will it take to dry a cloth?"
- ],
- ["is infinity + 1 bigger than infinity?"],
- ["Explain the plot of Cinderella in a sentence."],
- [
- "How long does it take to become proficient in French, and what are the best methods for retaining information?"
- ],
- ["What are some common mistakes to avoid when writing code?"],
- ["Build a prompt to generate a beautiful portrait of a horse"],
- ["Suggest four metaphors to describe the benefits of AI"],
- ["Write a pop song about leaving home for the sandy beaches."],
- ["Write a pop song about having hot sex on a sandy beach."],
- ["Write a summary demonstrating my ability to tame lions"],
- ["鲁迅和周树人什么关系? 说中文。"],
- ["鲁迅和周树人什么关系?"],
- ["鲁迅和周树人什么关系? 用英文回答。"],
- ["从前有一头牛,这头牛后面有什么?"],
- ["正无穷大加一大于正无穷大吗?"],
- ["正无穷大加正无穷大大于正无穷大吗?"],
- ["-2的平方根等于什么?"],
- ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"],
- ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"],
- ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"],
- [f"{etext} 翻成中文,列出3个版本。"],
- [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本。"],
- ["假定 1 + 2 = 4, 试求 7 + 8。"],
- ["给出判断一个数是不是质数的 javascript 码。"],
- ["给出实现python 里 range(10)的 javascript 码。"],
- ["给出实现python 里 [*(range(10)]的 javascript 码。"],
- ["Erkläre die Handlung von Cinderella in einem Satz."],
- ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch."],
-]
-
-logger.info("start block")
-
-with gr.Blocks(
- title=f"{Path(model_loc).name}",
- theme=gr.themes.Soft(text_size="sm", spacing_size="sm"),
- css=css,
-) as block:
- # buff_var = gr.State("")
- with gr.Accordion("🎈 Info", open=False):
- # gr.HTML(
- # """ and spin a CPU UPGRADE to avoid the queue """
- # )
- gr.Markdown(
- f"""{Path(model_loc).name}
- The bot can conduct multi-turn conversations, i.e. it remembers past dialogs. The process time is longer.
- It typically takes about 120 seconds for the first response to appear.
-
- Most examples are meant for another model.
- You probably should try to test
- some related prompts.""",
- elem_classes="xsmall",
- )
-
- chatbot = gr.Chatbot(height=500)
-
- with gr.Row():
- with gr.Column(scale=5):
- msg = gr.Textbox(
- label="Chat Message Box",
- placeholder="Ask me anything (press Shift+Enter or click Submit to send)",
- show_label=False,
- # container=False,
- lines=6,
- max_lines=30,
- show_copy_button=True,
- # ).style(container=False)
- )
- with gr.Column(scale=1, min_width=50):
- with gr.Row():
- submit = gr.Button("Submit", elem_classes="xsmall")
- stop = gr.Button("Stop", visible=True)
- clear = gr.Button("Clear History", visible=True)
- with gr.Row(visible=False):
- with gr.Accordion("Advanced Options:", open=False):
- with gr.Row():
- with gr.Column(scale=2):
- system = gr.Textbox(
- label="System Prompt",
- value=prompt_template,
- show_label=False,
- container=False,
- # ).style(container=False)
- )
- with gr.Column():
- with gr.Row():
- change = gr.Button("Change System Prompt")
- reset = gr.Button("Reset System Prompt")
-
- with gr.Accordion("Example Inputs", open=True):
- examples = gr.Examples(
- examples=examples_list,
- inputs=[msg],
- examples_per_page=40,
- )
-
- with gr.Accordion("Disclaimer", open=False):
- _ = Path(model_loc).name
- gr.Markdown(
- f"Disclaimer: {_} can produce factually incorrect output, and should not be relied on to produce "
- "factually accurate information. {_} was trained on various public datasets; while great efforts "
- "have been taken to clean the pretraining data, it is possible that this model could generate lewd, "
- "biased, or otherwise offensive outputs.",
- elem_classes=["disclaimer"],
- )
-
- msg_submit_event = msg.submit(
- # fn=conversation.user_turn,
- fn=user,
- inputs=[msg, chatbot],
- outputs=[msg, chatbot],
- queue=True,
- show_progress="full",
- # api_name=None,
- ).then(bot, chatbot, chatbot, queue=True)
- submit_click_event = submit.click(
- # fn=lambda x, y: ("",) + user(x, y)[1:], # clear msg
- fn=user1, # clear msg
- inputs=[msg, chatbot],
- outputs=[msg, chatbot],
- queue=True,
- # queue=False,
- show_progress="full",
- # api_name=None,
- ).then(bot, chatbot, chatbot, queue=True)
- stop.click(
- fn=None,
- inputs=None,
- outputs=None,
- cancels=[msg_submit_event, submit_click_event],
- queue=False,
- )
-
- # TODO: clear conversation memory as well
- clear.click(lambda: None, None, chatbot, queue=False)
-
- with gr.Accordion("For Chat/Translation API", open=False, visible=False):
- input_text = gr.Text()
- api_btn = gr.Button("Go", variant="primary")
- out_text = gr.Text()
-
- if conversation_api is not None:
- api_btn.click(
- predict_api,
- input_text,
- out_text,
- api_name="api",
- )
-
-# concurrency_count=5, max_size=20
-# max_size=36, concurrency_count=14
-# CPU cpu_count=2 16G, model 7G
-# CPU UPGRADE cpu_count=8 32G, model 7G
-
-# does not work
-_ = """
-# _ = int(psutil.virtual_memory().total / 10**9 // file_size - 1)
-# concurrency_count = max(_, 1)
-if psutil.cpu_count(logical=False) >= 8:
- # concurrency_count = max(int(32 / file_size) - 1, 1)
-else:
- # concurrency_count = max(int(16 / file_size) - 1, 1)
-# """
-
-concurrency_count = 1
-logger.info(f"{concurrency_count=}")
-
-block.queue(concurrency_count=concurrency_count, max_size=5).launch(debug=True)
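The streaming path in this app (`DequeCallbackHandler` pushing tokens, `bot` polling and draining) is a plain producer/consumer loop over a `deque` with a sentinel object. A minimal sketch of the same pattern, with a dummy producer thread standing in for the LLM callback:

```python
import time
from collections import deque
from threading import Thread

deq = deque()
sig_end = object()  # sentinel: producer is done

def producer(tokens):
    # Stands in for the LLM callback appending tokens as they are generated
    for tok in tokens:
        deq.append(tok)
        time.sleep(0.001)
    deq.append(sig_end)

thr = Thread(target=producer, args=(["Hello", ", ", "world"],))
thr.start()

# Consumer: drain the deque until the sentinel arrives, yielding partial text
response = []
while True:
    if deq:
        item = deq.popleft()
        if item is sig_end:
            break
        response.append(item)
        # a real UI would yield "".join(response) here for partial updates
    else:
        time.sleep(0.001)  # avoid busy-waiting when the queue is empty

thr.join()
print("".join(response))  # → Hello, world
```

Because `deque.append`/`popleft` are thread-safe, no explicit lock is needed; this is why the app keeps a separate `LLM_api` instance, so API calls cannot interleave tokens into the UI's deque.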
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Fixes/tensor-launch.py b/spaces/Kangarroar/ApplioRVC-Inference/Fixes/tensor-launch.py
deleted file mode 100644
index cd4ec997fb4b1338d7f29912987865899281b083..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/Fixes/tensor-launch.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import threading
-import time
-from tensorboard import program
-import os
-
-log_path = "logs"
-
-if __name__ == "__main__":
- tb = program.TensorBoard()
- tb.configure(argv=[None, '--logdir', log_path])
- url = tb.launch()
- print(f'Tensorboard can be accessed at: {url}')
-
- while True:
- time.sleep(600) # Keep the main thread running
\ No newline at end of file
diff --git a/spaces/KenjieDec/GPEN/face_colorization.py b/spaces/KenjieDec/GPEN/face_colorization.py
deleted file mode 100644
index c6607245b408bc7d6156bb7b8781cdd86e4badea..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/face_colorization.py
+++ /dev/null
@@ -1,23 +0,0 @@
-'''
-@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)
-@author: yangxy (yangtao9009@gmail.com)
-'''
-import os
-import cv2
-import glob
-import time
-import numpy as np
-from PIL import Image
-import __init_paths
-from face_model.face_gan import FaceGAN
-
-class FaceColorization(object):
- def __init__(self, base_dir='./', size=1024, out_size=None, model=None, channel_multiplier=2, narrow=1, key=None, device='cuda'):
- self.facegan = FaceGAN(base_dir, size, out_size, model, channel_multiplier, narrow, key, device=device)
-
- # make sure the face image is well aligned. Please refer to face_enhancement.py
- def process(self, gray):
- # colorize the face
- out = self.facegan.process(gray)
-
- return out
diff --git a/spaces/KunalSinha2024/cledgeEssayIdeationTool/README.md b/spaces/KunalSinha2024/cledgeEssayIdeationTool/README.md
deleted file mode 100644
index dc33a38488b13aeda5dd9aaa14e32dfe514750c7..0000000000000000000000000000000000000000
--- a/spaces/KunalSinha2024/cledgeEssayIdeationTool/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CledgeEssayIdeationTool
-emoji: 💩
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LEBEI/00002/README.md b/spaces/LEBEI/00002/README.md
deleted file mode 100644
index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000
--- a/spaces/LEBEI/00002/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-python_version: 3.7
-title: White Box Cartoonization
-emoji: 📚
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: hylee/White-box-Cartoonization
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_didemo_retrieval.py b/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_didemo_retrieval.py
deleted file mode 100644
index f7169d7e138fa7468139e5259dd3bc582c24bfa1..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_didemo_retrieval.py
+++ /dev/null
@@ -1,238 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import unicode_literals
-from __future__ import print_function
-
-import os
-from torch.utils.data import Dataset
-import numpy as np
-import json
-from .rawvideo_util import RawVideoExtractor
-
-class DiDeMo_DataLoader(Dataset):
- def __init__(
- self,
- subset,
- data_path,
- features_path,
- tokenizer,
- max_words=30,
- feature_framerate=1.0,
- max_frames=100,
- image_resolution=224,
- frame_order=0,
- slice_framepos=0,
- ):
- self.data_path = data_path
- self.features_path = features_path
- self.feature_framerate = feature_framerate
- self.max_words = max_words
- self.max_frames = max_frames
- self.tokenizer = tokenizer
- # 0: ordinary order; 1: reverse order; 2: random order.
- self.frame_order = frame_order
- assert self.frame_order in [0, 1, 2]
- # 0: cut from head frames; 1: cut from tail frames; 2: extract frames uniformly.
- self.slice_framepos = slice_framepos
- assert self.slice_framepos in [0, 1, 2]
-
- self.subset = subset
- assert self.subset in ["train", "val", "test"]
-
- video_id_path_dict = {}
- video_id_path_dict["train"] = os.path.join(self.data_path, "train_list.txt")
- video_id_path_dict["val"] = os.path.join(self.data_path, "val_list.txt")
- video_id_path_dict["test"] = os.path.join(self.data_path, "test_list.txt")
-
- video_json_path_dict = {}
- video_json_path_dict["train"] = os.path.join(self.data_path, "train_data.json")
- video_json_path_dict["val"] = os.path.join(self.data_path, "val_data.json")
- video_json_path_dict["test"] = os.path.join(self.data_path, "test_data.json")
-
- with open(video_id_path_dict[self.subset], 'r') as fp:
- video_ids = [itm.strip() for itm in fp.readlines()]
-
- caption_dict = {}
- with open(video_json_path_dict[self.subset], 'r') as f:
- json_data = json.load(f)
- for itm in json_data:
- description = itm["description"]
- times = itm["times"]
- video = itm["video"]
- if video not in video_ids:
- continue
-
- # each video is split into 5-second temporal chunks
- # average the points from each annotator
- start_ = np.mean([t_[0] for t_ in times]) * 5
- end_ = (np.mean([t_[1] for t_ in times]) + 1) * 5
- if video in caption_dict:
- caption_dict[video]["start"].append(start_)
- caption_dict[video]["end"].append(end_)
- caption_dict[video]["text"].append(description)
- else:
- caption_dict[video] = {}
- caption_dict[video]["start"] = [start_]
- caption_dict[video]["end"] = [end_]
- caption_dict[video]["text"] = [description]
-
- for k_ in caption_dict.keys():
- caption_dict[k_]["start"] = [0]
- # trick to save time on obtaining each video length
- # [https://github.com/LisaAnne/LocalizingMoments/blob/master/README.md]:
- # Some videos are longer than 30 seconds. These videos were truncated to 30 seconds during annotation.
- caption_dict[k_]["end"] = [31]
- caption_dict[k_]["text"] = [" ".join(caption_dict[k_]["text"])]
-
- video_dict = {}
- for root, dub_dir, video_files in os.walk(self.features_path):
- for video_file in video_files:
- video_id_ = os.path.splitext(video_file)[0]
- if video_id_ not in video_ids:
- continue
- file_path_ = os.path.join(root, video_file)
- video_dict[video_id_] = file_path_
-
- self.caption_dict = caption_dict
- self.video_dict = video_dict
- video_ids = list(set(video_ids) & set(self.caption_dict.keys()) & set(self.video_dict.keys()))
-
- # Get all captions
- self.iter2video_pairs_dict = {}
- for video_id in self.caption_dict.keys():
- if video_id not in video_ids:
- continue
- caption = self.caption_dict[video_id]
- n_caption = len(caption['start'])
- for sub_id in range(n_caption):
- self.iter2video_pairs_dict[len(self.iter2video_pairs_dict)] = (video_id, sub_id)
-
- self.rawVideoExtractor = RawVideoExtractor(framerate=feature_framerate, size=image_resolution)
- self.SPECIAL_TOKEN = {"CLS_TOKEN": "<|startoftext|>", "SEP_TOKEN": "<|endoftext|>",
- "MASK_TOKEN": "[MASK]", "UNK_TOKEN": "[UNK]", "PAD_TOKEN": "[PAD]"}
-
- def __len__(self):
- return len(self.iter2video_pairs_dict)
-
- def _get_text(self, video_id, sub_id):
- caption = self.caption_dict[video_id]
- k = 1
- r_ind = [sub_id]
-
- starts = np.zeros(k, dtype=np.int64)
- ends = np.zeros(k, dtype=np.int64)
- pairs_text = np.zeros((k, self.max_words), dtype=np.int64)
- pairs_mask = np.zeros((k, self.max_words), dtype=np.int64)
- pairs_segment = np.zeros((k, self.max_words), dtype=np.int64)
-
- for i in range(k):
- ind = r_ind[i]
- start_, end_ = caption['start'][ind], caption['end'][ind]
- output = self.tokenizer(caption['text'][ind])
- starts[i], ends[i] = start_, end_
-
- input_ids = output[0].squeeze()
- input_mask = output[1].squeeze()
- segment_ids = [0] * len(input_ids)
-
- while len(input_ids) < self.max_words:
- input_ids.append(0)
- input_mask.append(0)
- segment_ids.append(0)
- assert len(input_ids) == self.max_words
- assert len(input_mask) == self.max_words
- assert len(segment_ids) == self.max_words
-
- pairs_text[i] = np.array(input_ids)
- pairs_mask[i] = np.array(input_mask)
- pairs_segment[i] = np.array(segment_ids)
-
- return pairs_text, pairs_mask, pairs_segment, starts, ends
-
- def _get_rawvideo(self, idx, s, e):
- video_mask = np.zeros((len(s), self.max_frames), dtype=np.int64)
- max_video_length = [0] * len(s)
-
- # Pair x L x T x 3 x H x W
- video = np.zeros((len(s), self.max_frames, 1, 3,
- self.rawVideoExtractor.size, self.rawVideoExtractor.size), dtype=np.float64)
- video_path = self.video_dict[idx]
-
- try:
- for i in range(len(s)):
- start_time = int(s[i])
- end_time = int(e[i])
- start_time = start_time if start_time >= 0. else 0.
- end_time = end_time if end_time >= 0. else 0.
- if start_time > end_time:
- start_time, end_time = end_time, start_time
- elif start_time == end_time:
- end_time = end_time + 1
-
- cache_id = "{}_{}_{}".format(video_path, start_time, end_time)
- # Should be optimized by gathering all asking of this video
- raw_video_data = self.rawVideoExtractor.get_video_data(video_path, start_time, end_time)
- raw_video_data = raw_video_data['video']
- # print('raw_video_data', raw_video_data.shape)
-
- if len(raw_video_data.shape) > 3:
- raw_video_data_clip = raw_video_data
- # L x T x 3 x H x W
- raw_video_slice = self.rawVideoExtractor.process_raw_data(raw_video_data_clip)
- if self.max_frames < raw_video_slice.shape[0]:
- if self.slice_framepos == 0:
- video_slice = raw_video_slice[:self.max_frames, ...]
- elif self.slice_framepos == 1:
- video_slice = raw_video_slice[-self.max_frames:, ...]
- else:
- sample_indx = np.linspace(0, raw_video_slice.shape[0] - 1, num=self.max_frames, dtype=int)
- # print('sample_indx', raw_video_slice.shape[0], sample_indx)
- video_slice = raw_video_slice[sample_indx, ...]
- else:
- video_slice = raw_video_slice
-
- video_slice = self.rawVideoExtractor.process_frame_order(video_slice, frame_order=self.frame_order)
-
- slice_len = video_slice.shape[0]
- max_video_length[i] = max_video_length[i] if max_video_length[i] > slice_len else slice_len
- if slice_len < 1:
- pass
- else:
- video[i][:slice_len, ...] = video_slice
- else:
- print("video path: {} error. video id: {}, start: {}, end: {}".format(video_path, idx, start_time, end_time))
- except Exception as excep:
- print("video path: {} error. video id: {}, start: {}, end: {}, Error: {}".format(video_path, idx, s, e, excep))
- pass
- # raise excep
-
- for i, v_length in enumerate(max_video_length):
- video_mask[i][:v_length] = [1] * v_length
-
- return video, video_mask
-
- def __getitem__(self, feature_idx):
- video_id, sub_id = self.iter2video_pairs_dict[feature_idx]
-
- pairs_text, pairs_mask, pairs_segment, starts, ends = self._get_text(video_id, sub_id)
- video, video_mask = self._get_rawvideo(video_id, starts, ends)
- return pairs_text, pairs_mask, pairs_segment, video, video_mask
\ No newline at end of file
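The caption-merging trick in the loader above can be sketched in isolation: each video's per-query `(start, end, text)` segments are first accumulated, then collapsed into a single `(0, 31)` clip whose text is the concatenation of all descriptions (the DiDeMo videos were truncated to 30 seconds during annotation). The names below are illustrative, not from the original class.

```python
# Minimal sketch of the caption_dict construction and merge, assuming
# `annotations` is a list of (video_id, times, description) where `times`
# holds (start_idx, end_idx) pairs counted in 5-second segments.
from statistics import mean

def merge_captions(annotations):
    caption_dict = {}
    for video, times, description in annotations:
        start = mean(t[0] for t in times) * 5
        end = (mean(t[1] for t in times) + 1) * 5
        entry = caption_dict.setdefault(video, {"start": [], "end": [], "text": []})
        entry["start"].append(start)
        entry["end"].append(end)
        entry["text"].append(description)
    # collapse every video to one [0, 31] clip with the joined text
    for entry in caption_dict.values():
        entry["start"] = [0]
        entry["end"] = [31]
        entry["text"] = [" ".join(entry["text"])]
    return caption_dict

merged = merge_captions([
    ("vid1", [(0, 1)], "a dog runs"),
    ("vid1", [(2, 3)], "the dog jumps"),
])
print(merged["vid1"]["text"][0])  # "a dog runs the dog jumps"
```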
diff --git a/spaces/Linaqruf/Animagine-XL/lpw_stable_diffusion_xl.py b/spaces/Linaqruf/Animagine-XL/lpw_stable_diffusion_xl.py
deleted file mode 100644
index 31b93169771d1affd36f2a96ae537d1daa677fd8..0000000000000000000000000000000000000000
--- a/spaces/Linaqruf/Animagine-XL/lpw_stable_diffusion_xl.py
+++ /dev/null
@@ -1,1496 +0,0 @@
-## ----------------------------------------------------------
-# An SDXL pipeline that can take weighted prompts of unlimited length
-#
-# Author: Andrew Zhu
-# Github: https://github.com/xhinker
-# Medium: https://medium.com/@xhinker
-## -----------------------------------------------------------
-
-import inspect
-import os
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import torch
-from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
-
-from diffusers import DiffusionPipeline, StableDiffusionXLPipeline
-from diffusers.image_processor import VaeImageProcessor
-from diffusers.loaders import (
- FromSingleFileMixin,
- LoraLoaderMixin,
- TextualInversionLoaderMixin,
-)
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.models.attention_processor import (
- AttnProcessor2_0,
- LoRAAttnProcessor2_0,
- LoRAXFormersAttnProcessor,
- XFormersAttnProcessor,
-)
-from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
-from diffusers.schedulers import KarrasDiffusionSchedulers
-from diffusers.utils import (
- is_accelerate_available,
- is_accelerate_version,
- is_invisible_watermark_available,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-
-
-if is_invisible_watermark_available():
- from diffusers.pipelines.stable_diffusion_xl.watermark import (
- StableDiffusionXLWatermarker,
- )
-
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
-
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
- import re
-
- re_attention = re.compile(
- r"""
- \\\(|\\\)|\\\[|\\]|\\\\|\\|\(|\[|:([+-]?[.\d]+)\)|
- \)|]|[^\\()\[\]:]+|:
- """,
- re.X,
- )
-
- re_break = re.compile(r"\s*\bBREAK\b\s*", re.S)
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- parts = re.split(re_break, text)
- for i, part in enumerate(parts):
- if i > 0:
- res.append(["BREAK", -1])
- res.append([part, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
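The nested-bracket weights in `parse_prompt_attention` multiply: each unclosed round bracket scales its contents by 1.1, and an explicit `(x:w)` sets `w` at that nesting level. The arithmetic for the docstring's `(((house:1.3))` case can be checked directly:

```python
# 'house' carries an explicit 1.3 at the innermost level, then picks up a
# 1.1 multiplier for each of the two remaining round-bracket levels.
w = 1.3
for _ in range(2):
    w *= 1.1
print(round(w, 3))  # 1.573, matching the docstring example
```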
-
-
-def get_prompts_tokens_with_weights(clip_tokenizer: CLIPTokenizer, prompt: str):
- """
- Get prompt token ids and weights; works for both prompt and negative prompt
-
- Args:
- pipe (CLIPTokenizer)
- A CLIPTokenizer
- prompt (str)
- A prompt string with weights
-
- Returns:
- text_tokens (list)
- A list containing token ids
- text_weight (list)
- A list containing the corresponding weight of each token id
-
- Example:
- import torch
- from transformers import CLIPTokenizer
-
- clip_tokenizer = CLIPTokenizer.from_pretrained(
- "stablediffusionapi/deliberate-v2"
- , subfolder = "tokenizer"
- , dtype = torch.float16
- )
-
- token_id_list, token_weight_list = get_prompts_tokens_with_weights(
- clip_tokenizer = clip_tokenizer
- ,prompt = "a (red:1.5) cat"*70
- )
- """
- texts_and_weights = parse_prompt_attention(prompt)
- text_tokens, text_weights = [], []
- for word, weight in texts_and_weights:
- # tokenize and discard the starting and the ending token
- token = clip_tokenizer(word, truncation=False).input_ids[
- 1:-1
- ] # so prompts of any length can be tokenized
- # the returned token is a 1d list: [320, 1125, 539, 320]
-
- # merge the new tokens to the all tokens holder: text_tokens
- text_tokens = [*text_tokens, *token]
-
- # each token chunk will come with one weight, like ['red cat', 2.0]
- # need to expand weight for each token.
- chunk_weights = [weight] * len(token)
-
- # append the weight back to the weight holder: text_weights
- text_weights = [*text_weights, *chunk_weights]
- return text_tokens, text_weights
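The per-token weight expansion above reduces to a simple rule: each parsed `(text, weight)` chunk contributes one copy of its weight per token it tokenizes into. A self-contained sketch with made-up token ids:

```python
# Each tuple is (token ids produced by one parsed chunk, that chunk's weight).
chunks = [([320], 1.0), ([736, 2368], 1.5)]  # e.g. "a" and "(red cat:1.5)"
tokens, weights = [], []
for toks, w in chunks:
    tokens += toks
    weights += [w] * len(toks)  # expand the chunk weight to every token
print(weights)  # [1.0, 1.5, 1.5]
```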
-
-
-def group_tokens_and_weights(token_ids: list, weights: list, pad_last_block=False):
- """
- Produce tokens and weights in groups and pad the missing tokens
-
- Args:
- token_ids (list)
- The token ids from tokenizer
- weights (list)
- The weights list from function get_prompts_tokens_with_weights
- pad_last_block (bool)
- Controls whether the last token group is padded to 75 tokens with eos
- Returns:
- new_token_ids (2d list)
- new_weights (2d list)
-
- Example:
- token_groups,weight_groups = group_tokens_and_weights(
- token_ids = token_id_list
- , weights = token_weight_list
- )
- """
- bos, eos = 49406, 49407
-
- # this will be a 2d list
- new_token_ids = []
- new_weights = []
- while len(token_ids) >= 75:
- # get the first 75 tokens
- head_75_tokens = [token_ids.pop(0) for _ in range(75)]
- head_75_weights = [weights.pop(0) for _ in range(75)]
-
- # extract token ids and weights
- temp_77_token_ids = [bos] + head_75_tokens + [eos]
- temp_77_weights = [1.0] + head_75_weights + [1.0]
-
- # add 77 token and weights chunk to the holder list
- new_token_ids.append(temp_77_token_ids)
- new_weights.append(temp_77_weights)
-
- # padding the left
- if len(token_ids) > 0:
- padding_len = 75 - len(token_ids) if pad_last_block else 0
-
- temp_77_token_ids = [bos] + token_ids + [eos] * padding_len + [eos]
- new_token_ids.append(temp_77_token_ids)
-
- temp_77_weights = [1.0] + weights + [1.0] * padding_len + [1.0]
- new_weights.append(temp_77_weights)
-
- return new_token_ids, new_weights
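The chunking in `group_tokens_and_weights` can be sketched without the weights: split a flat token list into groups of at most 75, wrapping each group in CLIP's bos/eos ids so every chunk is exactly 77 tokens (except possibly the last, unless padding is requested).

```python
# Sketch of the 75-token chunking; 49406/49407 are CLIP's bos/eos ids.
BOS, EOS = 49406, 49407

def chunk_tokens(token_ids, pad_last=False):
    groups = []
    while len(token_ids) >= 75:
        head, token_ids = token_ids[:75], token_ids[75:]
        groups.append([BOS] + head + [EOS])
    if token_ids:
        pad = (75 - len(token_ids)) if pad_last else 0
        groups.append([BOS] + token_ids + [EOS] * pad + [EOS])
    return groups

groups = chunk_tokens(list(range(100)))
print([len(g) for g in groups])  # [77, 27]: one full chunk, one 25-token tail
```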
-
-
-def get_weighted_text_embeddings_sdxl(
- pipe: StableDiffusionXLPipeline,
- prompt: str = "",
- prompt_2: str = None,
- neg_prompt: str = "",
- neg_prompt_2: str = None,
-):
- """
- This function processes long prompts with weights for Stable Diffusion XL,
- with no length limitation
-
- Args:
- pipe (StableDiffusionPipeline)
- prompt (str)
- prompt_2 (str)
- neg_prompt (str)
- neg_prompt_2 (str)
- Returns:
- prompt_embeds (torch.Tensor)
- neg_prompt_embeds (torch.Tensor)
- """
- if prompt_2:
- prompt = f"{prompt} {prompt_2}"
-
- if neg_prompt_2:
- neg_prompt = f"{neg_prompt} {neg_prompt_2}"
-
- eos = pipe.tokenizer.eos_token_id
-
- # tokenizer 1
- prompt_tokens, prompt_weights = get_prompts_tokens_with_weights(
- pipe.tokenizer, prompt
- )
-
- neg_prompt_tokens, neg_prompt_weights = get_prompts_tokens_with_weights(
- pipe.tokenizer, neg_prompt
- )
-
- # tokenizer 2
- prompt_tokens_2, prompt_weights_2 = get_prompts_tokens_with_weights(
- pipe.tokenizer_2, prompt
- )
-
- neg_prompt_tokens_2, neg_prompt_weights_2 = get_prompts_tokens_with_weights(
- pipe.tokenizer_2, neg_prompt
- )
-
- # padding the shorter one for prompt set 1
- prompt_token_len = len(prompt_tokens)
- neg_prompt_token_len = len(neg_prompt_tokens)
-
- if prompt_token_len > neg_prompt_token_len:
- # padding the neg_prompt with eos token
- neg_prompt_tokens = neg_prompt_tokens + [eos] * abs(
- prompt_token_len - neg_prompt_token_len
- )
- neg_prompt_weights = neg_prompt_weights + [1.0] * abs(
- prompt_token_len - neg_prompt_token_len
- )
- else:
- # padding the prompt
- prompt_tokens = prompt_tokens + [eos] * abs(
- prompt_token_len - neg_prompt_token_len
- )
- prompt_weights = prompt_weights + [1.0] * abs(
- prompt_token_len - neg_prompt_token_len
- )
-
- # padding the shorter one for token set 2
- prompt_token_len_2 = len(prompt_tokens_2)
- neg_prompt_token_len_2 = len(neg_prompt_tokens_2)
-
- if prompt_token_len_2 > neg_prompt_token_len_2:
- # padding the neg_prompt with eos token
- neg_prompt_tokens_2 = neg_prompt_tokens_2 + [eos] * abs(
- prompt_token_len_2 - neg_prompt_token_len_2
- )
- neg_prompt_weights_2 = neg_prompt_weights_2 + [1.0] * abs(
- prompt_token_len_2 - neg_prompt_token_len_2
- )
- else:
- # padding the prompt
- prompt_tokens_2 = prompt_tokens_2 + [eos] * abs(
- prompt_token_len_2 - neg_prompt_token_len_2
- )
- prompt_weights_2 = prompt_weights_2 + [1.0] * abs(
- prompt_token_len_2 - neg_prompt_token_len_2
- )
-
- embeds = []
- neg_embeds = []
-
- prompt_token_groups, prompt_weight_groups = group_tokens_and_weights(
- prompt_tokens.copy(), prompt_weights.copy()
- )
-
- neg_prompt_token_groups, neg_prompt_weight_groups = group_tokens_and_weights(
- neg_prompt_tokens.copy(), neg_prompt_weights.copy()
- )
-
- prompt_token_groups_2, prompt_weight_groups_2 = group_tokens_and_weights(
- prompt_tokens_2.copy(), prompt_weights_2.copy()
- )
-
- neg_prompt_token_groups_2, neg_prompt_weight_groups_2 = group_tokens_and_weights(
- neg_prompt_tokens_2.copy(), neg_prompt_weights_2.copy()
- )
-
- # encode each 77-token chunk separately, then concatenate the embeddings
- for i in range(len(prompt_token_groups)):
- # get positive prompt embeddings with weights
- token_tensor = torch.tensor(
- [prompt_token_groups[i]], dtype=torch.long, device=pipe.device
- )
- weight_tensor = torch.tensor(
- prompt_weight_groups[i], dtype=torch.float16, device=pipe.device
- )
-
- token_tensor_2 = torch.tensor(
- [prompt_token_groups_2[i]], dtype=torch.long, device=pipe.device
- )
-
- # use first text encoder
- prompt_embeds_1 = pipe.text_encoder(
- token_tensor.to(pipe.device), output_hidden_states=True
- )
- prompt_embeds_1_hidden_states = prompt_embeds_1.hidden_states[-2]
-
- # use second text encoder
- prompt_embeds_2 = pipe.text_encoder_2(
- token_tensor_2.to(pipe.device), output_hidden_states=True
- )
- prompt_embeds_2_hidden_states = prompt_embeds_2.hidden_states[-2]
- pooled_prompt_embeds = prompt_embeds_2[0]
-
- prompt_embeds_list = [
- prompt_embeds_1_hidden_states,
- prompt_embeds_2_hidden_states,
- ]
- token_embedding = torch.concat(prompt_embeds_list, dim=-1).squeeze(0)
-
- for j in range(len(weight_tensor)):
- if weight_tensor[j] != 1.0:
- token_embedding[j] = (
- token_embedding[-1]
- + (token_embedding[j] - token_embedding[-1]) * weight_tensor[j]
- )
-
- token_embedding = token_embedding.unsqueeze(0)
- embeds.append(token_embedding)
-
- # get negative prompt embeddings with weights
- neg_token_tensor = torch.tensor(
- [neg_prompt_token_groups[i]], dtype=torch.long, device=pipe.device
- )
- neg_token_tensor_2 = torch.tensor(
- [neg_prompt_token_groups_2[i]], dtype=torch.long, device=pipe.device
- )
- neg_weight_tensor = torch.tensor(
- neg_prompt_weight_groups[i], dtype=torch.float16, device=pipe.device
- )
-
- # use first text encoder
- neg_prompt_embeds_1 = pipe.text_encoder(
- neg_token_tensor.to(pipe.device), output_hidden_states=True
- )
- neg_prompt_embeds_1_hidden_states = neg_prompt_embeds_1.hidden_states[-2]
-
- # use second text encoder
- neg_prompt_embeds_2 = pipe.text_encoder_2(
- neg_token_tensor_2.to(pipe.device), output_hidden_states=True
- )
- neg_prompt_embeds_2_hidden_states = neg_prompt_embeds_2.hidden_states[-2]
- negative_pooled_prompt_embeds = neg_prompt_embeds_2[0]
-
- neg_prompt_embeds_list = [
- neg_prompt_embeds_1_hidden_states,
- neg_prompt_embeds_2_hidden_states,
- ]
- neg_token_embedding = torch.concat(neg_prompt_embeds_list, dim=-1).squeeze(0)
-
- for z in range(len(neg_weight_tensor)):
- if neg_weight_tensor[z] != 1.0:
- neg_token_embedding[z] = (
- neg_token_embedding[-1]
- + (neg_token_embedding[z] - neg_token_embedding[-1])
- * neg_weight_tensor[z]
- )
-
- neg_token_embedding = neg_token_embedding.unsqueeze(0)
- neg_embeds.append(neg_token_embedding)
-
- prompt_embeds = torch.cat(embeds, dim=1)
- negative_prompt_embeds = torch.cat(neg_embeds, dim=1)
-
- return (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- )
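Before chunking, the function pads the shorter of the prompt / negative-prompt token lists with eos so both sides produce the same number of 77-token groups. The padding step in isolation (49407 is CLIP's eos id; the token values are illustrative):

```python
# Equalize prompt and negative-prompt lengths by eos-padding the shorter one.
eos = 49407
prompt_tokens = [320, 1125, 539]
neg_tokens = [320]

diff = len(prompt_tokens) - len(neg_tokens)
if diff > 0:
    neg_tokens += [eos] * diff
else:
    prompt_tokens += [eos] * (-diff)
print(neg_tokens)  # [320, 49407, 49407]
```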
-
-
-# -------------------------------------------------------------------------------------------------------------------------------
-# reuse the backbone code from StableDiffusionXLPipeline
-# -------------------------------------------------------------------------------------------------------------------------------
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- from diffusers import DiffusionPipeline
- import torch
-
- pipe = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0"
- , torch_dtype = torch.float16
- , use_safetensors = True
- , variant = "fp16"
- , custom_pipeline = "lpw_stable_diffusion_xl",
- )
-
- prompt = "a white cat running on the grass"*20
- prompt2 = "play a football"*20
- prompt = f"{prompt},{prompt2}"
- neg_prompt = "blur, low quality"
-
- pipe.to("cuda")
- images = pipe(
- prompt = prompt
- , negative_prompt = neg_prompt
- ).images[0]
-
- pipe.to("cpu")
- torch.cuda.empty_cache()
- images
- ```
-"""
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
-def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
- """
- Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
- Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
- """
- std_text = noise_pred_text.std(
- dim=list(range(1, noise_pred_text.ndim)), keepdim=True
- )
- std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
- # rescale the results from guidance (fixes overexposure)
- noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
- # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
- noise_cfg = (
- guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
- )
- return noise_cfg
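The rescale step above can be mirrored in NumPy to see what it does: scale the guided noise so its per-sample standard deviation matches the text-conditioned prediction, then blend the rescaled and original guidance by `guidance_rescale`. This is a standalone sketch, not the pipeline's torch code.

```python
import numpy as np

def rescale(noise_cfg, noise_pred_text, guidance_rescale=0.7):
    # std over all non-batch dims, kept for broadcasting
    std_text = noise_pred_text.std(axis=tuple(range(1, noise_pred_text.ndim)), keepdims=True)
    std_cfg = noise_cfg.std(axis=tuple(range(1, noise_cfg.ndim)), keepdims=True)
    rescaled = noise_cfg * (std_text / std_cfg)  # fix overexposure
    return guidance_rescale * rescaled + (1 - guidance_rescale) * noise_cfg

rng = np.random.default_rng(0)
text = rng.normal(size=(1, 4, 8, 8))
cfg = 2.0 * text  # pretend guidance doubled the noise magnitude
out = rescale(cfg, text, guidance_rescale=1.0)
print(np.allclose(out, text))  # True: full rescale recovers the text std
```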
-
-
-class SDXLLongPromptWeightingPipeline(
- DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin
-):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion XL.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- In addition the pipeline inherits the following loading methods:
- - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`]
- - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
-
- as well as the following saving methods:
- - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`]
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion XL uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- text_encoder_2 ([` CLIPTextModelWithProjection`]):
- Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
- specifically the
- [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
- variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- tokenizer_2 (`CLIPTokenizer`):
- Second Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- text_encoder_2: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- tokenizer_2: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- force_zeros_for_empty_prompt: bool = True,
- add_watermarker: Optional[bool] = None,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer,
- tokenizer_2=tokenizer_2,
- unet=unet,
- scheduler=scheduler,
- )
- self.register_to_config(
- force_zeros_for_empty_prompt=force_zeros_for_empty_prompt
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.default_sample_size = self.unet.config.sample_size
-
- add_watermarker = (
- add_watermarker
- if add_watermarker is not None
- else is_invisible_watermark_available()
- )
-
- if add_watermarker:
- self.watermark = StableDiffusionXLWatermarker()
- else:
- self.watermark = None
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
- processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError(
- "`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher."
- )
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- model_sequence = (
- [self.text_encoder, self.text_encoder_2]
- if self.text_encoder is not None
- else [self.text_encoder_2]
- )
- model_sequence.extend([self.unet, self.vae])
-
- hook = None
- for cpu_offloaded_model in model_sequence:
- _, hook = cpu_offload_with_hook(
- cpu_offloaded_model, device, prev_module_hook=hook
- )
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- def encode_prompt(
- self,
- prompt: str,
- prompt_2: Optional[str] = None,
- device: Optional[torch.device] = None,
- num_images_per_prompt: int = 1,
- do_classifier_free_guidance: bool = True,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- device = device or self._execution_device
-
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- # Define tokenizers and text encoders
- tokenizers = (
- [self.tokenizer, self.tokenizer_2]
- if self.tokenizer is not None
- else [self.tokenizer_2]
- )
- text_encoders = (
- [self.text_encoder, self.text_encoder_2]
- if self.text_encoder is not None
- else [self.text_encoder_2]
- )
-
- if prompt_embeds is None:
- prompt_2 = prompt_2 or prompt
- # textual inversion: process multi-vector tokens if necessary
- prompt_embeds_list = []
- prompts = [prompt, prompt_2]
- for prompt, tokenizer, text_encoder in zip(
- prompts, tokenizers, text_encoders
- ):
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, tokenizer)
-
- text_inputs = tokenizer(
- prompt,
- padding="max_length",
- max_length=tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- text_input_ids = text_inputs.input_ids
- untruncated_ids = tokenizer(
- prompt, padding="longest", return_tensors="pt"
- ).input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[
- -1
- ] and not torch.equal(text_input_ids, untruncated_ids):
- removed_text = tokenizer.batch_decode(
- untruncated_ids[:, tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
-
- # We are always only interested in the pooled output of the final text encoder
- pooled_prompt_embeds = prompt_embeds[0]
- prompt_embeds = prompt_embeds.hidden_states[-2]
-
- prompt_embeds_list.append(prompt_embeds)
-
- prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
-
- # get unconditional embeddings for classifier free guidance
- zero_out_negative_prompt = (
- negative_prompt is None and self.config.force_zeros_for_empty_prompt
- )
- if (
- do_classifier_free_guidance
- and negative_prompt_embeds is None
- and zero_out_negative_prompt
- ):
- negative_prompt_embeds = torch.zeros_like(prompt_embeds)
- negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
- elif do_classifier_free_guidance and negative_prompt_embeds is None:
- negative_prompt = negative_prompt or ""
- negative_prompt_2 = negative_prompt_2 or negative_prompt
-
- uncond_tokens: List[str]
- if prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt, negative_prompt_2]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = [negative_prompt, negative_prompt_2]
-
- negative_prompt_embeds_list = []
- for negative_prompt, tokenizer, text_encoder in zip(
- uncond_tokens, tokenizers, text_encoders
- ):
- if isinstance(self, TextualInversionLoaderMixin):
- negative_prompt = self.maybe_convert_prompt(
- negative_prompt, tokenizer
- )
-
- max_length = prompt_embeds.shape[1]
- uncond_input = tokenizer(
- negative_prompt,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- negative_prompt_embeds = text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
- # We are always interested only in the pooled output of the final text encoder
- negative_pooled_prompt_embeds = negative_prompt_embeds[0]
- negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
-
- negative_prompt_embeds_list.append(negative_prompt_embeds)
-
- negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(
- bs_embed * num_images_per_prompt, seq_len, -1
- )
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.to(
- dtype=self.text_encoder_2.dtype, device=device
- )
- negative_prompt_embeds = negative_prompt_embeds.repeat(
- 1, num_images_per_prompt, 1
- )
- negative_prompt_embeds = negative_prompt_embeds.view(
- batch_size * num_images_per_prompt, seq_len, -1
- )
-
- pooled_prompt_embeds = pooled_prompt_embeds.repeat(
- 1, num_images_per_prompt
- ).view(bs_embed * num_images_per_prompt, -1)
- if do_classifier_free_guidance:
- negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(
- 1, num_images_per_prompt
- ).view(bs_embed * num_images_per_prompt, -1)
-
- return (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(
- inspect.signature(self.scheduler.step).parameters.keys()
- )
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(
- inspect.signature(self.scheduler.step).parameters.keys()
- )
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
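`prepare_extra_step_kwargs` above probes the scheduler's signature before forwarding optional arguments, since not every scheduler accepts `eta` or `generator`. The same pattern works for any callable; a minimal standalone sketch (the `step` stub is illustrative, not part of the deleted file):

```python
import inspect

def filter_kwargs(fn, **kwargs):
    # Keep only the keyword arguments that `fn` actually accepts,
    # mirroring the eta/generator checks in prepare_extra_step_kwargs.
    accepted = set(inspect.signature(fn).parameters)
    return {k: v for k, v in kwargs.items() if k in accepted}

def step(sample, generator=None):
    # Stand-in for a scheduler step that accepts `generator` but not `eta`.
    return sample

extra_step_kwargs = filter_kwargs(step, eta=0.0, generator=None)
# `eta` is dropped, `generator` survives.
```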
-
- def check_inputs(
- self,
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- negative_prompt_2=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- pooled_prompt_embeds=None,
- negative_pooled_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(
- f"`height` and `width` have to be divisible by 8 but are {height} and {width}."
- )
-
- if (callback_steps is None) or (
- callback_steps is not None
- and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt_2 is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (
- not isinstance(prompt, str) and not isinstance(prompt, list)
- ):
- raise ValueError(
- f"`prompt` has to be of type `str` or `list` but is {type(prompt)}"
- )
- elif prompt_2 is not None and (
- not isinstance(prompt_2, str) and not isinstance(prompt_2, list)
- ):
- raise ValueError(
- f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}"
- )
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
- elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- if prompt_embeds is not None and pooled_prompt_embeds is None:
- raise ValueError(
- "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
- )
-
- if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
- raise ValueError(
- "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(
- self,
- batch_size,
- num_channels_latents,
- height,
- width,
- dtype,
- device,
- generator,
- latents=None,
- ):
- shape = (
- batch_size,
- num_channels_latents,
- height // self.vae_scale_factor,
- width // self.vae_scale_factor,
- )
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(
- shape, generator=generator, device=device, dtype=dtype
- )
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- def _get_add_time_ids(
- self, original_size, crops_coords_top_left, target_size, dtype
- ):
- add_time_ids = list(original_size + crops_coords_top_left + target_size)
-
- passed_add_embed_dim = (
- self.unet.config.addition_time_embed_dim * len(add_time_ids)
- + self.text_encoder_2.config.projection_dim
- )
- expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
-
- if expected_add_embed_dim != passed_add_embed_dim:
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
- )
-
- add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
- return add_time_ids
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
- def upcast_vae(self):
- dtype = self.vae.dtype
- self.vae.to(dtype=torch.float32)
- use_torch_2_0_or_xformers = isinstance(
- self.vae.decoder.mid_block.attentions[0].processor,
- (
- AttnProcessor2_0,
- XFormersAttnProcessor,
- LoRAXFormersAttnProcessor,
- LoRAAttnProcessor2_0,
- ),
- )
- # if xformers or torch_2_0 is used attention block does not need
- # to be in float32 which can save lots of memory
- if use_torch_2_0_or_xformers:
- self.vae.post_quant_conv.to(dtype)
- self.vae.decoder.conv_in.to(dtype)
- self.vae.decoder.mid_block.to(dtype)
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Optional[str] = None,
- prompt_2: Optional[str] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- denoising_end: Optional[float] = None,
- guidance_scale: float = 5.0,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- guidance_rescale: float = 0.0,
- original_size: Optional[Tuple[int, int]] = None,
- crops_coords_top_left: Tuple[int, int] = (0, 0),
- target_size: Optional[Tuple[int, int]] = None,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str`):
- The prompt to guide the image generation. If not defined, one has to pass `prompt_embeds`
- instead.
- prompt_2 (`str`):
- The prompt to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- denoising_end (`float`, *optional*):
- When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
- completed before it is intentionally prematurely terminated. As a result, the returned sample will
- still retain a substantial amount of noise as determined by the discrete timesteps selected by the
- scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
- "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
- Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- guidance_scale (`float`, *optional*, defaults to 5.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` in equation 2 of the [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the
- text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str`):
- The prompt not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str`):
- The prompt not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
- of a plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- guidance_rescale (`float`, *optional*, defaults to 0.0):
- Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
- Flawed](https://arxiv.org/pdf/2305.08891.pdf), where `guidance_rescale` is defined as `φ` in
- equation 16. Guidance rescale should fix overexposure when using zero terminal SNR.
- original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
- `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
- explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
- `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
- `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
- `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- For most cases, `target_size` should be set to the desired height and width of the generated image. If
- not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
- section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is a list with the generated images.
- """
- # 0. Default height and width to unet
- height = height or self.default_sample_size * self.vae_scale_factor
- width = width or self.default_sample_size * self.vae_scale_factor
-
- original_size = original_size or (height, width)
- target_size = target_size or (height, width)
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt,
- negative_prompt_2,
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- # NOTE: the result of this expression (the LoRA scale) is unused here;
- # prompt weighting is delegated to get_weighted_text_embeddings_sdxl.
- (
- cross_attention_kwargs.get("scale", None)
- if cross_attention_kwargs is not None
- else None
- )
-
- negative_prompt = negative_prompt if negative_prompt is not None else ""
-
- (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- ) = get_weighted_text_embeddings_sdxl(
- pipe=self, prompt=prompt, neg_prompt=negative_prompt
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
-
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Prepare added time ids & embeddings
- add_text_embeds = pooled_prompt_embeds
- add_time_ids = self._get_add_time_ids(
- original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
- )
-
- if do_classifier_free_guidance:
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
- add_text_embeds = torch.cat(
- [negative_pooled_prompt_embeds, add_text_embeds], dim=0
- )
- add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0)
-
- prompt_embeds = prompt_embeds.to(device)
- add_text_embeds = add_text_embeds.to(device)
- add_time_ids = add_time_ids.to(device).repeat(
- batch_size * num_images_per_prompt, 1
- )
-
- # 8. Denoising loop
- num_warmup_steps = max(
- len(timesteps) - num_inference_steps * self.scheduler.order, 0
- )
-
- # 8.1 Apply denoising_end
- if (
- denoising_end is not None
- and isinstance(denoising_end, float)
- and denoising_end > 0
- and denoising_end < 1
- ):
- discrete_timestep_cutoff = int(
- round(
- self.scheduler.config.num_train_timesteps
- - (denoising_end * self.scheduler.config.num_train_timesteps)
- )
- )
- num_inference_steps = len(
- list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps))
- )
- timesteps = timesteps[:num_inference_steps]
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = (
- torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- )
-
- latent_model_input = self.scheduler.scale_model_input(
- latent_model_input, t
- )
-
- # predict the noise residual
- added_cond_kwargs = {
- "text_embeds": add_text_embeds,
- "time_ids": add_time_ids,
- }
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (
- noise_pred_text - noise_pred_uncond
- )
-
- if do_classifier_free_guidance and guidance_rescale > 0.0:
- # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
- noise_pred = rescale_noise_cfg(
- noise_pred, noise_pred_text, guidance_rescale=guidance_rescale
- )
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(
- noise_pred, t, latents, **extra_step_kwargs, return_dict=False
- )[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or (
- (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0
- ):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # make sure the VAE is in float32 mode, as it overflows in float16
- if self.vae.dtype == torch.float16 and self.vae.config.force_upcast:
- self.upcast_vae()
- latents = latents.to(
- next(iter(self.vae.post_quant_conv.parameters())).dtype
- )
-
- if not output_type == "latent":
- image = self.vae.decode(
- latents / self.vae.config.scaling_factor, return_dict=False
- )[0]
- else:
- image = latents
- return StableDiffusionXLPipelineOutput(images=image)
-
- # apply watermark if available
- if self.watermark is not None:
- image = self.watermark.apply_watermark(image)
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image,)
-
- return StableDiffusionXLPipelineOutput(images=image)
-
- # Override to properly handle the loading and unloading of the additional text encoder.
- def load_lora_weights(
- self,
- pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]],
- **kwargs,
- ):
- # We could have accessed the unet config from `lora_state_dict()` too. We pass
- # it here explicitly to be able to tell that it's coming from an SDXL
- # pipeline.
- state_dict, network_alphas = self.lora_state_dict(
- pretrained_model_name_or_path_or_dict,
- unet_config=self.unet.config,
- **kwargs,
- )
- self.load_lora_into_unet(
- state_dict, network_alphas=network_alphas, unet=self.unet
- )
-
- text_encoder_state_dict = {
- k: v for k, v in state_dict.items() if "text_encoder." in k
- }
- if len(text_encoder_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder,
- prefix="text_encoder",
- lora_scale=self.lora_scale,
- )
-
- text_encoder_2_state_dict = {
- k: v for k, v in state_dict.items() if "text_encoder_2." in k
- }
- if len(text_encoder_2_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_2_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder_2,
- prefix="text_encoder_2",
- lora_scale=self.lora_scale,
- )
-
- @classmethod
- def save_lora_weights(
- self,
- save_directory: Union[str, os.PathLike],
- unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- text_encoder_lora_layers: Dict[
- str, Union[torch.nn.Module, torch.Tensor]
- ] = None,
- text_encoder_2_lora_layers: Dict[
- str, Union[torch.nn.Module, torch.Tensor]
- ] = None,
- is_main_process: bool = True,
- weight_name: str = None,
- save_function: Callable = None,
- safe_serialization: bool = False,
- ):
- state_dict = {}
-
- def pack_weights(layers, prefix):
- layers_weights = (
- layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
- )
- layers_state_dict = {
- f"{prefix}.{module_name}": param
- for module_name, param in layers_weights.items()
- }
- return layers_state_dict
-
- state_dict.update(pack_weights(unet_lora_layers, "unet"))
-
- if text_encoder_lora_layers and text_encoder_2_lora_layers:
- state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
- state_dict.update(
- pack_weights(text_encoder_2_lora_layers, "text_encoder_2")
- )
-
- self.write_lora_layers(
- state_dict=state_dict,
- save_directory=save_directory,
- is_main_process=is_main_process,
- weight_name=weight_name,
- save_function=save_function,
- safe_serialization=safe_serialization,
- )
-
- def _remove_text_encoder_monkey_patch(self):
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
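The denoising loop in the pipeline above combines the two halves of the doubled batch with the classifier-free guidance formula from the Imagen paper. A dependency-light sketch of just that combination step (NumPy stands in for torch, and the shapes are illustrative):

```python
import numpy as np

def apply_cfg(noise_pred_uncond, noise_pred_text, guidance_scale):
    # Extrapolate from the unconditional prediction toward the
    # text-conditioned one; guidance_scale == 1 returns the
    # text-conditioned prediction unchanged.
    return noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

uncond = np.zeros((1, 4, 8, 8))  # stand-in for noise_pred_uncond
text = np.ones((1, 4, 8, 8))     # stand-in for noise_pred_text
guided = apply_cfg(uncond, text, guidance_scale=5.0)
```

With these toy inputs every element of `guided` is `0 + 5.0 * (1 - 0) = 5.0`, matching the `noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)` line in the loop.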
diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_modify_config.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_modify_config.py
deleted file mode 100644
index e7fd745c3dc2ee1cf260ac2ac97a053b2985d4c8..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_modify_config.py
+++ /dev/null
@@ -1,81 +0,0 @@
-from pydantic import BaseModel, Field
-from typing import List
-from toolbox import update_ui_lastest_msg, get_conf
-from request_llm.bridge_all import predict_no_ui_long_connection
-from crazy_functions.json_fns.pydantic_io import GptJsonIO
-import copy, json, pickle, os, sys
-
-
-def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
- ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
- if not ALLOW_RESET_CONFIG:
- yield from update_ui_lastest_msg(
- lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
- chatbot=chatbot, history=history, delay=2
- )
- return
-
- # ⭐ ⭐ ⭐ Read the configurable option entries
- names = {}
- from enum import Enum
- import config
- for k, v in config.__dict__.items():
- if k.startswith('__'): continue
- names.update({k:k})
- # if len(names) > 20: break # cap the number of options; too many entries make them hard for the model to parse
-
- ConfigOptions = Enum('ConfigOptions', names)
- class ModifyConfigurationIntention(BaseModel):
- which_config_to_modify: ConfigOptions = Field(description="the name of the configuration to modify, you must choose from one of the ConfigOptions enum.", default=None)
- new_option_value: str = Field(description="the new value of the option", default=None)
-
- # ⭐ ⭐ ⭐ Analyze the user's intention
- yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
- gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
- inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
- ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
- gpt_json_io.format_instructions
-
- run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
- inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
- user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
-
- explicit_conf = user_intention.which_config_to_modify.value
-
- ok = (explicit_conf in txt)
- if ok:
- yield from update_ui_lastest_msg(
- lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
- chatbot=chatbot, history=history, delay=1
- )
- yield from update_ui_lastest_msg(
- lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
- chatbot=chatbot, history=history, delay=2
- )
-
- # ⭐ ⭐ ⭐ Apply the configuration immediately
- from toolbox import set_conf
- set_conf(explicit_conf, user_intention.new_option_value)
-
- yield from update_ui_lastest_msg(
- lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,重新页面即可生效。", chatbot=chatbot, history=history, delay=1
- )
- else:
- yield from update_ui_lastest_msg(
- lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
- )
-
-def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
- ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
- if not ALLOW_RESET_CONFIG:
- yield from update_ui_lastest_msg(
- lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
- chatbot=chatbot, history=history, delay=2
- )
- return
-
- yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
- yield from update_ui_lastest_msg(
- lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
- )
- os.execl(sys.executable, sys.executable, *sys.argv)
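`modify_configuration_hot` constrains the LLM's choice by building an `Enum` of option names at runtime and handing it to the pydantic schema, so the model can only name a key that actually exists. A minimal sketch of that Enum-building trick, with a hypothetical stand-in for the real `config` module:

```python
from enum import Enum

class FakeConfig:
    # Hypothetical stand-in for config.py: public attributes are options.
    ALLOW_RESET_CONFIG = True
    LLM_MODEL = "gpt-3.5-turbo"

# Same filtering as the deleted plugin: skip dunder/private attributes.
names = {k: k for k in vars(FakeConfig) if not k.startswith("_")}
ConfigOptions = Enum("ConfigOptions", names)  # functional Enum API
```

Each member's name and value are both the option key, so `ConfigOptions["LLM_MODEL"].value` round-trips to the string the plugin later passes to `set_conf`.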
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cantonese.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cantonese.py
deleted file mode 100644
index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cantonese.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('jyutjyu')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ei˥'),
- ('B', 'biː˥'),
- ('C', 'siː˥'),
- ('D', 'tiː˥'),
- ('E', 'iː˥'),
- ('F', 'e˥fuː˨˩'),
- ('G', 'tsiː˥'),
- ('H', 'ɪk̚˥tsʰyː˨˩'),
- ('I', 'ɐi˥'),
- ('J', 'tsei˥'),
- ('K', 'kʰei˥'),
- ('L', 'e˥llou˨˩'),
- ('M', 'ɛːm˥'),
- ('N', 'ɛːn˥'),
- ('O', 'ou˥'),
- ('P', 'pʰiː˥'),
- ('Q', 'kʰiːu˥'),
- ('R', 'aː˥lou˨˩'),
- ('S', 'ɛː˥siː˨˩'),
- ('T', 'tʰiː˥'),
- ('U', 'juː˥'),
- ('V', 'wiː˥'),
- ('W', 'tʊk̚˥piː˥juː˥'),
- ('X', 'ɪk̚˥siː˨˩'),
- ('Y', 'waːi˥'),
- ('Z', 'iː˨sɛːt̚˥')
-]]
-
-
-def number_to_cantonese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def cantonese_to_ipa(text):
- text = number_to_cantonese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*?\s*', '? ', text)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/MarkMcCormack/Automated-Grading-Dashboard/studentDashboard.py b/spaces/MarkMcCormack/Automated-Grading-Dashboard/studentDashboard.py
deleted file mode 100644
index 5dd3e5f3d02f3e2d9e072fb27795391feb27fd0d..0000000000000000000000000000000000000000
--- a/spaces/MarkMcCormack/Automated-Grading-Dashboard/studentDashboard.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import streamlit as st
-from utils import createComponent
-from langchain.prompts import PromptTemplate
-import random
-import json
-
-data = ["~", "~", "~", "~", "~", "~", "~", "~", "~", "~", "~", "~"]
-llmOpenAI = None
-chain = None
-components = []
-
-promptTemplate = PromptTemplate(
- input_variables = ['corpus'],
- template='''Please analyse the corpus of text and rate it from 0-100% on the following properties: [
- 1: Text comparison effectiveness
- 2: Clarity in similarities and differences
- 3: Identifying common themes accurately
- 4: Thorough textual evidence examination
- 5: Analysis of writing styles or techniques
- 6: Cohesive idea connections
- 7: Structuring the comparative analysis
- 8: Insightful textual interpretation
- 9: Consistent referencing and citations
- 10: Understanding of contextual factors
- 11: Consideration of strengths and weaknesses
- 12: Overall depth of the analysis
- ]. Please format the results in the following format: ["Result1", "Result2", "Result3"...] where the square brackets are included. The following is the corpus of text in question.
- : {corpus}'''
-)
-
-corpus = '''Title: A Tale of Hubris and Heartbreak: A Comparative Analysis of "The Great Gatsby" and "King Lear"
-
- Introduction:
- The novels "The Great Gatsby" by F. Scott Fitzgerald and "King Lear" by William Shakespeare are two masterpieces of literature, exploring profound themes of human ambition, love, betrayal, and tragic consequences. While set in different eras and contexts, both novels delve into the complexities of human nature and the devastating outcomes of unchecked ambition and hubris. This essay aims to compare and contrast these timeless classics, examining their central themes, character development, and tragic endings to gain insight into the human condition.
-
- Themes:
- Both "The Great Gatsby" and "King Lear" revolve around themes of ambition, power, and the fragility of love. In "The Great Gatsby," Jay Gatsby's relentless pursuit of wealth and status leads him to lose sight of true love and genuine connections. Likewise, in "King Lear," the titular character's arrogance and desire for flattery result in a chain of tragic events leading to his downfall. Both novels illustrate how the quest for power and status can distort one's perception of love and loyalty, ultimately leading to catastrophic consequences.
-
- Character Development:
- In "The Great Gatsby," F. Scott Fitzgerald presents a diverse array of characters with intricate backgrounds. Jay Gatsby, a self-made millionaire, represents the embodiment of the American Dream. He yearns for the love of Daisy Buchanan, who is married to Tom Buchanan, a wealthy but arrogant man. Through Gatsby's character, the novel explores the price one pays for holding onto dreams too tightly, eventually losing touch with reality.
-
- In "King Lear," Shakespeare crafts a compelling tale centered on the aging King Lear's tragic character arc. At the outset, Lear's hubris and inability to see beyond appearances lead him to misjudge the loyalty of his daughters, Goneril and Regan, while rejecting the genuine love of his loyal daughter, Cordelia. As the story progresses, Lear's vulnerability and descent into madness emphasize the destructiveness of unchecked power and the tragedy of misplaced trust.
-
- Tragic Endings:
- Both novels conclude with heart-wrenching endings, portraying the consequences of the protagonists' actions. In "The Great Gatsby," Gatsby's illusions of love and status shatter when Daisy's true allegiance to her husband is revealed. In a fatal turn of events, Gatsby's refusal to let go of his dreams leads to his demise, victim to the consequences of his own ambition.
- Similarly, in "King Lear," the pursuit of power and flattery proves to be Lear's undoing. The loss of his sanity and the betrayal of those he trusted the most culminate in a tragic finale. Lear's realization of his mistakes and his eventual demise evoke sympathy from the readers, illustrating the catastrophic impact of hubris and the frailty of human relationships.
-
- Societal Reflection:
- "The Great Gatsby" offers a critical portrayal of the Jazz Age, examining the pursuit of wealth and pleasure in 1920s America. The novel explores the hollowness and moral decay hidden beneath the facade of prosperity and social extravagance. Similarly, "King Lear" delves into the intricacies of power and loyalty, revealing the dark side of human nature and the consequences of political machinations in Shakespearean England.
- Conclusion:
- "The Great Gatsby" and "King Lear" are timeless works of literature that explore the human condition through the lens of ambition, love, betrayal, and tragic consequences. While separated by time and cultural contexts, both novels share common themes that resonate with readers across generations. They serve as cautionary tales, reminding us of the dangers of unchecked ambition and hubris, and the importance of authentic connections and compassion in a world where power and wealth can blind us to what truly matters. Through their rich character development and poignant endings, these literary classics continue to captivate and enlighten readers, offering profound insights into the human psyche and the complexity of our existence.
-'''
-
-def run():
- st.title("🧑🎓 Individual Student Profile")
-
- st.header("Student Name")
- st.subheader("Assignment Transcript")
-
- st.code(corpus)
-
- if st.button("Retrieve Student Grades for Sample Piece!"):
- newResult = chain.run(corpus)
- newData = parse_array_string(newResult)
-
- for i in range(len(newData)):
- components[i].value = newData[i]
- data[i] = newData[i]
-
-
- st.subheader("Student Transcript Analysis")
-
- columnOne, columnTwo, columnThree, columnFour, columnFive, columnSix = st.columns(6)
-
- with columnOne:
- createComponent("Text Comparison Effectiveness", data[0], "Student", 0)
- createComponent("Clarity in Similarities and Differences", data[1], "Student", 1)
-
- with columnTwo:
- createComponent("Identifying Common Themes Accurately", data[2], "Student", 2)
- createComponent("Thorough Textual Evidence Examination", data[3], "Student", 3)
-
- with columnThree:
- createComponent("Analysis of Writing Styles or Techniques", data[4], "Student", 4)
- createComponent("Cohesive Idea Connections", data[5], "Student", 5)
-
- with columnFour:
- createComponent("Structuring the Comparative Analysis", data[6], "Student", 6)
- createComponent("Insightful Textual Interpretation", data[7], "Student", 7)
-
- with columnFive:
- createComponent("Consistent Referencing and Citations", data[8], "Student", 8)
- createComponent("Understanding of Contextual Factors", data[9], "Student", 9)
-
- with columnSix:
- createComponent("Consideration of Strengths/Weaknesses", data[10], "Student", 10)
- createComponent("Overall Depth of the Analysis", data[11], "Student", 11)
-
-def parse_array_string(array_string):
- try:
- # Strip surrounding whitespace and wrap the string in brackets
- cleaned_string = "[" + array_string.strip() + "]"
-
- # Check if the string starts with '[' and ends with ']'
- if cleaned_string.startswith('[') and cleaned_string.endswith(']'):
- # Use the json.loads() function to parse the string into a Python list
- array = json.loads(cleaned_string)
- return array[0]
- else:
- raise ValueError("Invalid array format. The string should start with '[' and end with ']'.")
-
- except Exception as e:
- raise ValueError("Error while parsing the array string: {}".format(str(e)))
\ No newline at end of file
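The `parse_array_string` helper removed above relies on a double-wrapping trick: the LLM reply is already a bracketed JSON array, so wrapping it again and indexing `[0]` unwraps it. A minimal standalone version (stdlib `json` only) behaves like this:

```python
import json

def parse_array_string(array_string):
    # The model reply is already "[...]"; wrapping gives "[[...]]",
    # so json.loads(...)[0] returns the inner list.
    cleaned = "[" + array_string.strip() + "]"
    if cleaned.startswith('[') and cleaned.endswith(']'):
        return json.loads(cleaned)[0]
    raise ValueError("Invalid array format.")

print(parse_array_string('["85%", "90%", "78%"]'))  # → ['85%', '90%', '78%']
```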
diff --git a/spaces/Marshalls/testmtd/analysis/__init__.py b/spaces/Marshalls/testmtd/analysis/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Mileena/nitrosocke-Arcane-Diffusion/app.py b/spaces/Mileena/nitrosocke-Arcane-Diffusion/app.py
deleted file mode 100644
index c2c4d7e899e43df9718fe37df2a8810c437f4b61..0000000000000000000000000000000000000000
--- a/spaces/Mileena/nitrosocke-Arcane-Diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/nitrosocke/Arcane-Diffusion").launch()
\ No newline at end of file
diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py
deleted file mode 100644
index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-"""
-Various positional encodings for the transformer.
-"""
-import math
-
-import torch
-from torch import nn
-
-from groundingdino.util.misc import NestedTensor
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- eps = 1e-6
- # if os.environ.get("SHILONG_AMP", None) == '1':
- # eps = 1e-4
- # else:
- # eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
-
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
-
-class PositionEmbeddingSineHW(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(
- self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None
- ):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperatureH = temperatureH
- self.temperatureW = temperatureW
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
-
- # import ipdb; ipdb.set_trace()
-
- if self.normalize:
- eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_x = x_embed[:, :, :, None] / dim_tx
-
- dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_y = y_embed[:, :, :, None] / dim_ty
-
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
-
- # import ipdb; ipdb.set_trace()
-
- return pos
-
-
-class PositionEmbeddingLearned(nn.Module):
- """
- Absolute pos embedding, learned.
- """
-
- def __init__(self, num_pos_feats=256):
- super().__init__()
- self.row_embed = nn.Embedding(50, num_pos_feats)
- self.col_embed = nn.Embedding(50, num_pos_feats)
- self.reset_parameters()
-
- def reset_parameters(self):
- nn.init.uniform_(self.row_embed.weight)
- nn.init.uniform_(self.col_embed.weight)
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- h, w = x.shape[-2:]
- i = torch.arange(w, device=x.device)
- j = torch.arange(h, device=x.device)
- x_emb = self.col_embed(i)
- y_emb = self.row_embed(j)
- pos = (
- torch.cat(
- [
- x_emb.unsqueeze(0).repeat(h, 1, 1),
- y_emb.unsqueeze(1).repeat(1, w, 1),
- ],
- dim=-1,
- )
- .permute(2, 0, 1)
- .unsqueeze(0)
- .repeat(x.shape[0], 1, 1, 1)
- )
- return pos
-
-
-def build_position_encoding(args):
- N_steps = args.hidden_dim // 2
- if args.position_embedding in ("v2", "sine"):
- # TODO find a better way of exposing other arguments
- position_embedding = PositionEmbeddingSineHW(
- N_steps,
- temperatureH=args.pe_temperatureH,
- temperatureW=args.pe_temperatureW,
- normalize=True,
- )
- elif args.position_embedding in ("v3", "learned"):
- position_embedding = PositionEmbeddingLearned(N_steps)
- else:
- raise ValueError(f"not supported {args.position_embedding}")
-
- return position_embedding
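The sine embeddings deleted above follow the standard interleaved sin/cos construction inherited from DETR. A 1-D, pure-Python analogue for a single already-scaled position (the function name and the 1-D simplification are ours; no torch assumed):

```python
import math

def sine_position_encoding(pos, num_pos_feats=64, temperature=10000):
    # Each feature pair (2i, 2i+1) shares a frequency temperature**(2i/d);
    # even indices take sin, odd indices take cos, as in PositionEmbeddingSine.
    feats = []
    for i in range(num_pos_feats):
        dim_t = temperature ** (2 * (i // 2) / num_pos_feats)
        feats.append(math.sin(pos / dim_t) if i % 2 == 0 else math.cos(pos / dim_t))
    return feats

enc = sine_position_encoding(0.0)
print(enc[0], enc[1])  # → 0.0 1.0 (sin(0), cos(0))
```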
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/encoders/modules.py b/spaces/MirageML/sjc/sd1/ldm/modules/encoders/modules.py
deleted file mode 100644
index 6a684e0efdaff06fff7c18bd2d733e4ad19ba03f..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/ldm/modules/encoders/modules.py
+++ /dev/null
@@ -1,406 +0,0 @@
-import torch
-import torch.nn as nn
-from functools import partial
-import clip
-from einops import rearrange, repeat
-from transformers import CLIPTokenizer, CLIPTextModel
-import kornia
-
-from ldm.modules.x_transformer import Encoder, TransformerWrapper  # TODO: can we directly rely on lucidrains code and simply add this as a requirement? --> test
-
-def _expand_mask(mask, dtype, tgt_len = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-def _build_causal_attention_mask(bsz, seq_len, dtype):
- # lazily create causal attention mask, with full attention between the vision tokens
- # pytorch uses additive attention mask; fill with -inf
- mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
- mask.fill_(torch.tensor(torch.finfo(dtype).min))
- mask.triu_(1) # zero out the lower diagonal
- mask = mask.unsqueeze(1) # expand mask
- return mask
-
-class AbstractEncoder(nn.Module):
- def __init__(self):
- super().__init__()
-
- def encode(self, *args, **kwargs):
- raise NotImplementedError
-
-
-
-class ClassEmbedder(nn.Module):
- def __init__(self, embed_dim, n_classes=1000, key='class'):
- super().__init__()
- self.key = key
- self.embedding = nn.Embedding(n_classes, embed_dim)
-
- def forward(self, batch, key=None):
- if key is None:
- key = self.key
- # this is for use in crossattn
- c = batch[key][:, None]
- c = self.embedding(c)
- return c
-
-
-class TransformerEmbedder(AbstractEncoder):
- """Some transformer encoder layers"""
- def __init__(self, n_embed, n_layer, vocab_size, max_seq_len=77, device="cuda"):
- super().__init__()
- self.device = device
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
- attn_layers=Encoder(dim=n_embed, depth=n_layer))
-
- def forward(self, tokens):
- tokens = tokens.to(self.device) # meh
- z = self.transformer(tokens, return_embeddings=True)
- return z
-
- def encode(self, x):
- return self(x)
-
-
-class BERTTokenizer(AbstractEncoder):
- """ Uses a pretrained BERT tokenizer by huggingface. Vocab size: 30522 (?)"""
- def __init__(self, device="cuda", vq_interface=True, max_length=77):
- super().__init__()
- from transformers import BertTokenizerFast # TODO: add to requirements
- self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
- self.device = device
- self.vq_interface = vq_interface
- self.max_length = max_length
-
- def forward(self, text):
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"].to(self.device)
- return tokens
-
- @torch.no_grad()
- def encode(self, text):
- tokens = self(text)
- if not self.vq_interface:
- return tokens
- return None, None, [None, None, tokens]
-
- def decode(self, text):
- return text
-
-
-class BERTEmbedder(AbstractEncoder):
-"""Uses the BERT tokenizer and adds some transformer encoder layers"""
- def __init__(self, n_embed, n_layer, vocab_size=30522, max_seq_len=77,
- device="cuda",use_tokenizer=True, embedding_dropout=0.0):
- super().__init__()
- self.use_tknz_fn = use_tokenizer
- if self.use_tknz_fn:
- self.tknz_fn = BERTTokenizer(vq_interface=False, max_length=max_seq_len)
- self.device = device
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
- attn_layers=Encoder(dim=n_embed, depth=n_layer),
- emb_dropout=embedding_dropout)
-
- def forward(self, text, embedding_manager=None):
- if self.use_tknz_fn:
- tokens = self.tknz_fn(text)#.to(self.device)
- else:
- tokens = text
- z = self.transformer(tokens, return_embeddings=True, embedding_manager=embedding_manager)
- return z
-
- def encode(self, text, **kwargs):
- # output of length 77
- return self(text, **kwargs)
-
-class SpatialRescaler(nn.Module):
- def __init__(self,
- n_stages=1,
- method='bilinear',
- multiplier=0.5,
- in_channels=3,
- out_channels=None,
- bias=False):
- super().__init__()
- self.n_stages = n_stages
- assert self.n_stages >= 0
- assert method in ['nearest','linear','bilinear','trilinear','bicubic','area']
- self.multiplier = multiplier
- self.interpolator = partial(torch.nn.functional.interpolate, mode=method)
- self.remap_output = out_channels is not None
- if self.remap_output:
- print(f'Spatial Rescaler mapping from {in_channels} to {out_channels} channels after resizing.')
- self.channel_mapper = nn.Conv2d(in_channels,out_channels,1,bias=bias)
-
- def forward(self,x):
- for stage in range(self.n_stages):
- x = self.interpolator(x, scale_factor=self.multiplier)
-
-
- if self.remap_output:
- x = self.channel_mapper(x)
- return x
-
- def encode(self, x):
- return self(x)
-
-class FrozenCLIPEmbedder(AbstractEncoder):
- """Uses the CLIP transformer encoder for text (from Hugging Face)"""
- def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77):
- super().__init__()
- self.tokenizer = CLIPTokenizer.from_pretrained(version)
- self.transformer = CLIPTextModel.from_pretrained(version)
- self.device = device
- self.max_length = max_length
- self.freeze()
-
- def embedding_forward(
- self,
- input_ids = None,
- position_ids = None,
- inputs_embeds = None,
- embedding_manager = None,
- ) -> torch.Tensor:
-
- seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]
-
- if position_ids is None:
- position_ids = self.position_ids[:, :seq_length]
-
- if inputs_embeds is None:
- inputs_embeds = self.token_embedding(input_ids)
-
- if embedding_manager is not None:
- inputs_embeds = embedding_manager(input_ids, inputs_embeds)
-
-
- position_embeddings = self.position_embedding(position_ids)
- embeddings = inputs_embeds + position_embeddings
-
- return embeddings
-
- self.transformer.text_model.embeddings.forward = embedding_forward.__get__(self.transformer.text_model.embeddings)
-
- def encoder_forward(
- self,
- inputs_embeds,
- attention_mask = None,
- causal_attention_mask = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- encoder_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- hidden_states = inputs_embeds
- for idx, encoder_layer in enumerate(self.layers):
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- layer_outputs = encoder_layer(
- hidden_states,
- attention_mask,
- causal_attention_mask,
- output_attentions=output_attentions,
- )
-
- hidden_states = layer_outputs[0]
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[1],)
-
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- return hidden_states
-
- self.transformer.text_model.encoder.forward = encoder_forward.__get__(self.transformer.text_model.encoder)
-
-
- def text_encoder_forward(
- self,
- input_ids = None,
- attention_mask = None,
- position_ids = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- embedding_manager = None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is None:
- raise ValueError("You have to specify either input_ids")
-
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
-
- hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids, embedding_manager=embedding_manager)
-
- bsz, seq_len = input_shape
- # CLIP's text model uses causal mask, prepare it here.
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324
- causal_attention_mask = _build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to(
- hidden_states.device
- )
-
- # expand attention_mask
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
-
- last_hidden_state = self.encoder(
- inputs_embeds=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- last_hidden_state = self.final_layer_norm(last_hidden_state)
-
- # pooled_output = last_hidden_state[torch.arange(last_hidden_state.shape[0]), input_ids.argmax(dim=-1)]
-
- return last_hidden_state
-
- self.transformer.text_model.forward = text_encoder_forward.__get__(self.transformer.text_model)
-
- def transformer_forward(
- self,
- input_ids = None,
- attention_mask = None,
- position_ids = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- embedding_manager = None,
- ):
- return self.text_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- embedding_manager = embedding_manager
- )
-
- self.transformer.forward = transformer_forward.__get__(self.transformer)
-
-
- def freeze(self):
- self.transformer = self.transformer.eval()
- # self.vit = self.vit.eval()
- for param in self.parameters():
- param.requires_grad = False
-
-
-
- def forward(self, text, **kwargs):
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"].to(self.device)
- z = self.transformer(input_ids=tokens, **kwargs)
- # from pdb import set_trace
- # set_trace()
- if kwargs.get('return_pooled', False):
- return z, z[torch.arange(z.shape[0]), tokens.argmax(dim=-1)]
- return z
-
- def encode(self, text, **kwargs):
- return self(text, **kwargs)
-
-
-
-class FrozenCLIPTextEmbedder(nn.Module):
- """
- Uses the CLIP transformer encoder for text.
- """
- def __init__(self, version='ViT-L/14', device="cuda", max_length=77, n_repeat=1, normalize=True):
- super().__init__()
- self.model, _ = clip.load(version, jit=False, device="cpu")
- self.device = device
- self.max_length = max_length
- self.n_repeat = n_repeat
- self.normalize = normalize
-
- def freeze(self):
- self.model = self.model.eval()
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, text):
- tokens = clip.tokenize(text).to(self.device)
- z = self.model.encode_text(tokens)
- if self.normalize:
- z = z / torch.linalg.norm(z, dim=1, keepdim=True)
- return z
-
- def encode(self, text):
- z = self(text)
- if z.ndim==2:
- z = z[:, None, :]
- z = repeat(z, 'b 1 d -> b k d', k=self.n_repeat)
- return z
-
-
-class FrozenClipImageEmbedder(nn.Module):
- """
- Uses the CLIP image encoder.
- """
- def __init__(
- self,
- model,
- jit=False,
- device='cuda' if torch.cuda.is_available() else 'cpu',
- antialias=False,
- ):
- super().__init__()
- self.model, _ = clip.load(name=model, device=device, jit=jit)
-
- self.antialias = antialias
-
- self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False)
- self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False)
-
- def preprocess(self, x):
- # normalize to [0,1]
- x = kornia.geometry.resize(x, (224, 224),
- interpolation='bicubic',align_corners=True,
- antialias=self.antialias)
- x = (x + 1.) / 2.
- # renormalize according to clip
- x = kornia.enhance.normalize(x, self.mean, self.std)
- return x
-
- def forward(self, x):
- # x is assumed to be in range [-1,1]
- return self.model.encode_image(self.preprocess(x))
-
-
-if __name__ == "__main__":
- from ldm.util import count_params
- model = FrozenCLIPEmbedder()
- count_params(model, verbose=True)
\ No newline at end of file
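`_build_causal_attention_mask` in the deleted `modules.py` fills the upper triangle with the dtype's minimum so that softmax effectively zeroes out future tokens. A pure-Python sketch of the same additive mask (no torch assumed; `-1e9` stands in for `finfo.min`):

```python
def build_causal_mask(seq_len, neg=-1e9):
    # Additive attention mask: 0 on/below the diagonal (attend),
    # a large negative above it (blocked future positions).
    return [[0.0 if j <= i else neg for j in range(seq_len)]
            for i in range(seq_len)]

for row in build_causal_mask(3):
    print(row)
```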
diff --git a/spaces/Miuzarte/SUI-svc-3.0/attentions.py b/spaces/Miuzarte/SUI-svc-3.0/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/Miuzarte/SUI-svc-3.0/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
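The `_causal_padding` / `_same_padding` pair above can be summarized as a pair of (left, right) pad amounts; `pad_amounts` is a hypothetical helper written for illustration, not part of the module:

```python
def pad_amounts(kernel_size: int, causal: bool) -> tuple:
    """Return (left, right) padding so a stride-1 Conv1d preserves length.

    Causal padding puts all kernel_size - 1 zeros on the left, so output
    frame t depends only on input frames <= t; "same" padding splits them
    (left gets the smaller half when kernel_size is even).
    """
    if kernel_size == 1:
        return (0, 0)
    if causal:
        return (kernel_size - 1, 0)
    return ((kernel_size - 1) // 2, kernel_size // 2)

# Both schemes preserve sequence length: L + left + right - (k - 1) == L
for k in (1, 3, 5):
    for causal in (False, True):
        left, right = pad_amounts(k, causal)
        assert left + right == k - 1
```

This is why the decoder's FFN layers are built with `causal=True`: each position's output never peeks at future frames.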
diff --git a/spaces/MoonQiu/LongerCrafter/lvdm/ema.py b/spaces/MoonQiu/LongerCrafter/lvdm/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/MoonQiu/LongerCrafter/lvdm/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_updates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_updates
- else torch.tensor(-1,dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- # remove '.' since it is not allowed in buffer names
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self,model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
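The update rule in `LitEma.forward` can be sketched in plain Python; `ema_update` is a hypothetical standalone version for illustration (lists instead of tensor buffers):

```python
def ema_update(shadow, params, decay=0.9999, num_updates=None):
    """One EMA step: shadow <- shadow - (1 - d) * (shadow - param).

    Mirrors LitEma.forward: when num_updates is tracked, the effective
    decay is warmed up as min(decay, (1 + n) / (10 + n)), so the shadow
    tracks the raw weights closely early in training.
    """
    if num_updates is not None:
        decay = min(decay, (1 + num_updates) / (10 + num_updates))
    one_minus = 1.0 - decay
    return [s - one_minus * (s - p) for s, p in zip(shadow, params)]

shadow = [0.0]
# With num_updates=0 the effective decay is 1/10, so the shadow moves fast.
shadow = ema_update(shadow, [1.0], num_updates=0)
assert abs(shadow[0] - 0.9) < 1e-12
```

With a large `num_updates`, the effective decay saturates at `decay` and the shadow becomes a slow-moving average, which is what `copy_to` swaps in for validation.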
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/datasets/ctw1500.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/datasets/ctw1500.py
deleted file mode 100644
index 3361f734d0d92752336d13b60f293b785a92e927..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/datasets/ctw1500.py
+++ /dev/null
@@ -1,15 +0,0 @@
-ctw1500_textdet_data_root = 'data/ctw1500'
-
-ctw1500_textdet_train = dict(
- type='OCRDataset',
- data_root=ctw1500_textdet_data_root,
- ann_file='textdet_train.json',
- filter_cfg=dict(filter_empty_gt=True, min_size=32),
- pipeline=None)
-
-ctw1500_textdet_test = dict(
- type='OCRDataset',
- data_root=ctw1500_textdet_data_root,
- ann_file='textdet_test.json',
- test_mode=True,
- pipeline=None)
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/ctc_postprocessor.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/ctc_postprocessor.py
deleted file mode 100644
index 0fa28779abaf64e1d964ae05b4296e81308aab13..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/ctc_postprocessor.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-from typing import Sequence, Tuple
-
-import torch
-
-from mmocr.registry import MODELS
-from mmocr.structures import TextRecogDataSample
-from .base import BaseTextRecogPostprocessor
-
-
-# TODO support beam search
-@MODELS.register_module()
-class CTCPostProcessor(BaseTextRecogPostprocessor):
- """PostProcessor for CTC."""
-
- def get_single_prediction(self, probs: torch.Tensor,
- data_sample: TextRecogDataSample
- ) -> Tuple[Sequence[int], Sequence[float]]:
- """Convert the output probabilities of a single image to index and
- score.
-
- Args:
- probs (torch.Tensor): Character probabilities with shape
- :math:`(T, C)`.
- data_sample (TextRecogDataSample): Datasample of an image.
-
- Returns:
- tuple(list[int], list[float]): index and score.
- """
- feat_len = probs.size(0)
- max_value, max_idx = torch.max(probs, -1)
- valid_ratio = data_sample.get('valid_ratio', 1)
- decode_len = min(feat_len, math.ceil(feat_len * valid_ratio))
- index = []
- score = []
-
- prev_idx = self.dictionary.padding_idx
- for t in range(decode_len):
- tmp_value = max_idx[t].item()
- if tmp_value not in (prev_idx, *self.ignore_indexes):
- index.append(tmp_value)
- score.append(max_value[t].item())
- prev_idx = tmp_value
- return index, score
-
- def __call__(
- self, outputs: torch.Tensor,
- data_samples: Sequence[TextRecogDataSample]
- ) -> Sequence[TextRecogDataSample]:
- outputs = outputs.cpu().detach()
- return super().__call__(outputs, data_samples)
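The frame loop in `get_single_prediction` is the standard greedy CTC collapse. A minimal sketch, assuming a single blank/padding index rather than the dictionary's `ignore_indexes` set (`ctc_greedy_decode` is a hypothetical name):

```python
def ctc_greedy_decode(max_idx, blank=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks.

    Mirrors the postprocessor's loop: a frame's symbol is kept only when
    it differs from the previous frame's symbol and is not the blank.
    """
    out = []
    prev = blank
    for idx in max_idx:
        if idx != prev and idx != blank:
            out.append(idx)
        prev = idx  # prev is updated every frame, kept or not
    return out

# Frames 1 1 0 2 2 2 0 2 decode to [1, 2, 2]: the blank (0) separates
# the second and third runs of symbol 2, so 2 appears twice.
assert ctc_greedy_decode([1, 1, 0, 2, 2, 2, 0, 2]) == [1, 2, 2]
```

The real class also truncates the frame axis by `valid_ratio` first, so padded frames beyond the valid region never contribute symbols.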
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_config.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_config.py
deleted file mode 100644
index a746257f02b85eddfc72192b9474638b92378644..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_config.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Lint as: python3
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Configuration definitions for ResNet losses, learning rates, and optimizers."""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from typing import Any, Mapping
-
-import dataclasses
-
-from official.modeling.hyperparams import base_config
-from official.vision.image_classification.configs import base_configs
-
-
-_RESNET_LR_SCHEDULE = [ # (multiplier, epoch to start) tuples
- (1.0, 5), (0.1, 30), (0.01, 60), (0.001, 80)
-]
-_RESNET_LR_BOUNDARIES = list(p[1] for p in _RESNET_LR_SCHEDULE[1:])
-_RESNET_LR_MULTIPLIERS = list(p[0] for p in _RESNET_LR_SCHEDULE)
-_RESNET_LR_WARMUP_EPOCHS = _RESNET_LR_SCHEDULE[0][1]
-
-
-@dataclasses.dataclass
-class ResNetModelConfig(base_configs.ModelConfig):
- """Configuration for the ResNet model."""
- name: str = 'ResNet'
- num_classes: int = 1000
- model_params: base_config.Config = dataclasses.field(
- default_factory=lambda: {
- 'num_classes': 1000,
- 'batch_size': None,
- 'use_l2_regularizer': True,
- 'rescale_inputs': False,
- })
- loss: base_configs.LossConfig = base_configs.LossConfig(
- name='sparse_categorical_crossentropy')
- optimizer: base_configs.OptimizerConfig = base_configs.OptimizerConfig(
- name='momentum',
- decay=0.9,
- epsilon=0.001,
- momentum=0.9,
- moving_average_decay=None)
- learning_rate: base_configs.LearningRateConfig = (
- base_configs.LearningRateConfig(
- name='piecewise_constant_with_warmup',
- examples_per_epoch=1281167,
- warmup_epochs=_RESNET_LR_WARMUP_EPOCHS,
- boundaries=_RESNET_LR_BOUNDARIES,
- multipliers=_RESNET_LR_MULTIPLIERS))
diff --git a/spaces/Naszirs397/rvc-models/config.py b/spaces/Naszirs397/rvc-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/Naszirs397/rvc-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set to cuda:x, cpu, or mps (x is the GPU index); only NVIDIA GPUs / Apple Silicon are supported for acceleration
-device = "cuda:0"
-
-# For 9/10/20/30/40-series NVIDIA GPUs, just leave this True; it does not affect quality, and 20-series or newer GPUs get a speedup
-is_half = True
-
-# 0 (default) uses all threads; set a number to limit CPU usage
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below; do not modify ##################
-
-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
- "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly PyTorch (for now) and on macOS 12.3+.
-# Check via `getattr` and try a tensor op for compatibility.
-def has_mps() -> bool:
- if sys.platform != "darwin":
- return False
- else:
- if not getattr(torch, "has_mps", False):
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
-
-if not torch.cuda.is_available():
-    if has_mps():
-        print("No supported NVIDIA GPU found; using MPS for inference")
-        device = "mps"
-    else:
-        print("No supported NVIDIA GPU found; using CPU for inference")
-        device = "cpu"
-        is_half = False
-
-if device not in ["cpu", "mps"]:
-    gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
-    if "16" in gpu_name or "MX" in gpu_name:
-        print("16-series / MX-series GPUs are forced to single precision")
-        is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
- n_cpu = cpu_count()
-if is_half:
- # Settings for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
-else:
- # Settings for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
diff --git a/spaces/Nee001/bing0/src/pages/api/proxy.ts b/spaces/Nee001/bing0/src/pages/api/proxy.ts
deleted file mode 100644
index 240b5fb5561d993c6381649bf4544ce12f3cdab2..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/pages/api/proxy.ts
+++ /dev/null
@@ -1,24 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { fetch } from '@/lib/isomorphic'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { url, headers, method = 'GET', body } = req.body
- if (!url) {
- return res.end('ok')
- }
- const response = await fetch(url, { headers, method, body, redirect: 'manual' })
- const text = await response.text()
- res.writeHead(200, {
- 'Content-Type': 'text/plain',
- 'x-url': response.url,
- 'x-status': response.status,
- })
- res.end(text)
- } catch (e) {
- console.log(e)
- return res.end(String(e))
- }
-}
diff --git a/spaces/NeuralInternet/Text-Generation_Playground/modules/deepspeed_parameters.py b/spaces/NeuralInternet/Text-Generation_Playground/modules/deepspeed_parameters.py
deleted file mode 100644
index 3dbed437f5b5196d0b1fcbc582085319fb8d40d1..0000000000000000000000000000000000000000
--- a/spaces/NeuralInternet/Text-Generation_Playground/modules/deepspeed_parameters.py
+++ /dev/null
@@ -1,75 +0,0 @@
-def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir):
-
- '''
- DeepSpeed configuration
- https://huggingface.co/docs/transformers/main_classes/deepspeed
- '''
-
- if nvme_offload_dir:
- ds_config = {
- "fp16": {
- "enabled": not ds_bf16,
- },
- "bf16": {
- "enabled": ds_bf16,
- },
- "zero_optimization": {
- "stage": 3,
- "offload_param": {
- "device": "nvme",
- "nvme_path": nvme_offload_dir,
- "pin_memory": True,
- "buffer_count": 5,
- "buffer_size": 1e9,
- "max_in_cpu": 1e9
- },
- "overlap_comm": True,
- "reduce_bucket_size": "auto",
- "contiguous_gradients": True,
- "sub_group_size": 1e8,
- "stage3_prefetch_bucket_size": "auto",
- "stage3_param_persistence_threshold": "auto",
- "stage3_max_live_parameters": "auto",
- "stage3_max_reuse_distance": "auto",
- },
- "aio": {
- "block_size": 262144,
- "queue_depth": 32,
- "thread_count": 1,
- "single_submit": False,
- "overlap_events": True
- },
- "steps_per_print": 2000,
- "train_batch_size": train_batch_size,
- "train_micro_batch_size_per_gpu": 1,
- "wall_clock_breakdown": False
- }
- else:
- ds_config = {
- "fp16": {
- "enabled": not ds_bf16,
- },
- "bf16": {
- "enabled": ds_bf16,
- },
- "zero_optimization": {
- "stage": 3,
- "offload_param": {
- "device": "cpu",
- "pin_memory": True
- },
- "overlap_comm": True,
- "contiguous_gradients": True,
- "reduce_bucket_size": "auto",
- "stage3_prefetch_bucket_size": "auto",
- "stage3_param_persistence_threshold": "auto",
- "stage3_max_live_parameters": "auto",
- "stage3_max_reuse_distance": "auto",
- },
- "steps_per_print": 2000,
- "train_batch_size": train_batch_size,
- "train_micro_batch_size_per_gpu": 1,
- "wall_clock_breakdown": False
- }
-
- return ds_config
diff --git a/spaces/OAOA/DifFace/models/respace.py b/spaces/OAOA/DifFace/models/respace.py
deleted file mode 100644
index 10acf220cae0510e40b2621228816ad83b02f0f7..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/models/respace.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import numpy as np
-import torch as th
-
-from .gaussian_diffusion import GaussianDiffusion
-
-
-def space_timesteps(num_timesteps, section_counts):
- """
- Create a list of timesteps to use from an original diffusion process,
- given the number of timesteps we want to take from equally-sized portions
- of the original process.
-
- For example, if there are 300 timesteps and the section counts are [10, 15, 20],
- then the first 100 timesteps are strided to be 10 timesteps, the second 100
- are strided to be 15 timesteps, and the final 100 are strided to be 20.
-
- If the stride is a string starting with "ddim", then the fixed striding
- from the DDIM paper is used, and only one section is allowed.
-
- :param num_timesteps: the number of diffusion steps in the original
- process to divide up.
- :param section_counts: either a list of numbers, or a string containing
- comma-separated numbers, indicating the step count
- per section. As a special case, use "ddimN" where N
- is a number of steps to use the striding from the
- DDIM paper.
- :return: a set of diffusion steps from the original process to use.
- """
- if isinstance(section_counts, str):
- if section_counts.startswith("ddim"):
- desired_count = int(section_counts[len("ddim"):])
- for i in range(1, num_timesteps):
- if len(range(0, num_timesteps, i)) == desired_count:
- return set(range(0, num_timesteps, i))
- raise ValueError(
- f"cannot create exactly {num_timesteps} steps with an integer stride"
- )
- section_counts = [int(x) for x in section_counts.split(",")] #[250,]
- size_per = num_timesteps // len(section_counts)
- extra = num_timesteps % len(section_counts)
- start_idx = 0
- all_steps = []
- for i, section_count in enumerate(section_counts):
- size = size_per + (1 if i < extra else 0)
- if size < section_count:
- raise ValueError(
- f"cannot divide section of {size} steps into {section_count}"
- )
- if section_count <= 1:
- frac_stride = 1
- else:
- frac_stride = (size - 1) / (section_count - 1)
- cur_idx = 0.0
- taken_steps = []
- for _ in range(section_count):
- taken_steps.append(start_idx + round(cur_idx))
- cur_idx += frac_stride
- all_steps += taken_steps
- start_idx += size
- return set(all_steps)
-
-class SpacedDiffusion(GaussianDiffusion):
- """
- A diffusion process which can skip steps in a base diffusion process.
-
- :param use_timesteps: a collection (sequence or set) of timesteps from the
- original diffusion process to retain.
- :param kwargs: the kwargs to create the base diffusion process.
- """
-
- def __init__(self, use_timesteps, **kwargs):
- self.use_timesteps = set(use_timesteps)
- self.timestep_map = []
- self.original_num_steps = len(kwargs["betas"])
-
- base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
- last_alpha_cumprod = 1.0
- new_betas = []
- for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
- if i in self.use_timesteps:
- new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
- last_alpha_cumprod = alpha_cumprod
- self.timestep_map.append(i)
- kwargs["betas"] = np.array(new_betas)
- super().__init__(**kwargs)
-
- def p_mean_variance(self, model, *args, **kwargs): # pylint: disable=signature-differs
- return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
-
- def training_losses(self, model, *args, **kwargs): # pylint: disable=signature-differs
- return super().training_losses(self._wrap_model(model), *args, **kwargs)
-
- def _wrap_model(self, model):
- if isinstance(model, _WrappedModel):
- return model
- return _WrappedModel(
- model, self.timestep_map, self.rescale_timesteps, self.original_num_steps
- )
-
- def _scale_timesteps(self, t):
- # Scaling is done by the wrapped model.
- return t
-
-class _WrappedModel:
- def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps):
- self.model = model
- self.timestep_map = timestep_map
- self.rescale_timesteps = rescale_timesteps
- self.original_num_steps = original_num_steps
-
- def __call__(self, x, ts, **kwargs):
- map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
- new_ts = map_tensor[ts]
- if self.rescale_timesteps:
- new_ts = new_ts.float() * (1000.0 / self.original_num_steps)
- return self.model(x, new_ts, **kwargs)
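The "ddimN" branch of `space_timesteps` can be exercised in isolation. A minimal sketch of just that branch (`ddim_steps` is a hypothetical name for this extract):

```python
def ddim_steps(num_timesteps, desired_count):
    """Find an integer stride i such that range(0, num_timesteps, i) has
    exactly desired_count elements -- the fixed DDIM striding branch of
    space_timesteps, isolated for illustration.
    """
    for i in range(1, num_timesteps):
        if len(range(0, num_timesteps, i)) == desired_count:
            return set(range(0, num_timesteps, i))
    raise ValueError(
        f"cannot create exactly {desired_count} steps with an integer stride")

# For a 1000-step base process, "ddim250" resolves to stride 4.
steps = ddim_steps(1000, 250)
assert len(steps) == 250 and min(steps) == 0 and max(steps) == 996
```

`SpacedDiffusion` then folds the skipped steps into new betas via `1 - alpha_cumprod / last_alpha_cumprod`, so the shortened chain keeps the same cumulative noise levels as the retained timesteps of the base process.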
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py
deleted file mode 100644
index 734d047f1bb8e3aa98c88e152eee7f91fea3d814..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import functools
-import logging
-from contextlib import contextmanager
-import inspect
-import time
-
-logger = logging.getLogger(__name__)
-
-EPS = 1e-8
-
-
-def capture_init(init):
- """capture_init.
-
- Decorate `__init__` with this, and you can then
- recover the *args and **kwargs passed to it in `self._init_args_kwargs`
- """
- @functools.wraps(init)
- def __init__(self, *args, **kwargs):
- self._init_args_kwargs = (args, kwargs)
- init(self, *args, **kwargs)
-
- return __init__
-
-
-def deserialize_model(package, strict=False):
- """Instantiate a model from a package created by `serialize_model`.
-
- When `strict` is False, kwargs the class no longer accepts are dropped
- with a warning instead of raising.
- """
- klass = package['class']
- if strict:
- model = klass(*package['args'], **package['kwargs'])
- else:
- sig = inspect.signature(klass)
- kw = package['kwargs']
- for key in list(kw):
- if key not in sig.parameters:
- logger.warning("Dropping nonexistent parameter %s", key)
- del kw[key]
- model = klass(*package['args'], **kw)
- model.load_state_dict(package['state'])
- return model
-
-
-def copy_state(state):
- return {k: v.cpu().clone() for k, v in state.items()}
-
-
-def serialize_model(model):
- args, kwargs = model._init_args_kwargs
- state = copy_state(model.state_dict())
- return {"class": model.__class__, "args": args, "kwargs": kwargs, "state": state}
-
-
-@contextmanager
-def swap_state(model, state):
- """
- Context manager that swaps the state of a model, e.g:
-
- # model is in old state
- with swap_state(model, new_state):
- # model in new state
- # model back to old state
- """
- old_state = copy_state(model.state_dict())
- model.load_state_dict(state)
- try:
- yield
- finally:
- model.load_state_dict(old_state)
-
-
-def pull_metric(history, name):
- out = []
- for metrics in history:
- if name in metrics:
- out.append(metrics[name])
- return out
-
-
-class LogProgress:
- """
- Sort of like tqdm, but emitting log lines rather than updating in real time.
- Args:
- - logger: logger obtained from `logging.getLogger`,
- - iterable: iterable object to wrap
- - updates (int): number of lines that will be printed, e.g.
- if `updates=5`, log every 1/5th of the total length.
- - total (int): length of the iterable, in case it does not support
- `len`.
- - name (str): prefix to use in the log.
- - level: logging level (like `logging.INFO`).
- """
- def __init__(self,
- logger,
- iterable,
- updates=5,
- total=None,
- name="LogProgress",
- level=logging.INFO):
- self.iterable = iterable
- self.total = total or len(iterable)
- self.updates = updates
- self.name = name
- self.logger = logger
- self.level = level
-
- def update(self, **infos):
- self._infos = infos
-
- def __iter__(self):
- self._iterator = iter(self.iterable)
- self._index = -1
- self._infos = {}
- self._begin = time.time()
- return self
-
- def __next__(self):
- self._index += 1
- try:
- value = next(self._iterator)
- except StopIteration:
- raise
- else:
- return value
- finally:
- log_every = max(1, self.total // self.updates)
- # logging is delayed by 1 it, in order to have the metrics from update
- if self._index >= 1 and self._index % log_every == 0:
- self._log()
-
- def _log(self):
- self._speed = (1 + self._index) / (time.time() - self._begin)
- infos = " | ".join(f"{k.capitalize()} {v}" for k, v in self._infos.items())
- if self._speed < 1e-4:
- speed = "oo sec/it"
- elif self._speed < 0.1:
- speed = f"{1/self._speed:.1f} sec/it"
- else:
- speed = f"{self._speed:.1f} it/sec"
- out = f"{self.name} | {self._index}/{self.total} | {speed}"
- if infos:
- out += " | " + infos
- self.logger.log(self.level, out)
-
-
-def colorize(text, color):
- """
- Display text with some ANSI color in the terminal.
- """
- code = f"\033[{color}m"
- restore = "\033[0m"
- return "".join([code, text, restore])
-
-
-def bold(text):
- """
- Display text in bold in the terminal.
- """
- return colorize(text, "1")
-
-
-def cal_snr(lbl, est):
- import torch
- y = 10.0 * torch.log10(
- torch.sum(lbl**2, dim=-1) / (torch.sum((est-lbl)**2, dim=-1) + EPS) +
- EPS
- )
- return y
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/mtedx_example.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/mtedx_example.md
deleted file mode 100644
index 25b4556affbf5bc141b103095d15fffef6225c0e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/docs/mtedx_example.md
+++ /dev/null
@@ -1,200 +0,0 @@
-[[Back]](..)
-
-# S2T Example: Speech Translation (ST) on Multilingual TEDx
-
-[Multilingual TEDx](https://arxiv.org/abs/2102.01757) is a multilingual corpus for speech recognition and
-speech translation. The data is derived from TEDx talks in 8 source languages
-with translations to a subset of 5 target languages.
-
-## Data Preparation
-[Download](http://openslr.org/100/) and unpack Multilingual TEDx data to a path
-`${MTEDX_ROOT}/${LANG_PAIR}`, then preprocess it with
-```bash
-# additional Python packages for S2T data processing/model training
-pip install pandas torchaudio soundfile sentencepiece
-
-# Generate TSV manifests, features, vocabulary
-# and configuration for each language
-python examples/speech_to_text/prep_mtedx_data.py \
- --data-root ${MTEDX_ROOT} --task asr \
- --vocab-type unigram --vocab-size 1000
-python examples/speech_to_text/prep_mtedx_data.py \
- --data-root ${MTEDX_ROOT} --task st \
- --vocab-type unigram --vocab-size 1000
-
-# Add vocabulary and configuration for joint data
-# (based on the manifests and features generated above)
-python examples/speech_to_text/prep_mtedx_data.py \
- --data-root ${MTEDX_ROOT} --task asr --joint \
- --vocab-type unigram --vocab-size 8000
-python examples/speech_to_text/prep_mtedx_data.py \
- --data-root ${MTEDX_ROOT} --task st --joint \
- --vocab-type unigram --vocab-size 8000
-```
-The generated files (manifest, features, vocabulary and data configuration) will be added to
-`${MTEDX_ROOT}/${LANG_PAIR}` (per-language data) and `MTEDX_ROOT` (joint data).
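As a sanity check after preparation, one can verify that the per-language outputs exist. The helper below is hypothetical (not part of fairseq), and the filenames are inferred from the `fairseq-train` arguments used later in this doc (`config_asr.yaml`, `--train-subset train_asr`, etc.), so treat them as assumptions:

```python
from pathlib import Path

def missing_prep_outputs(mtedx_root, lang_pair="es-es", task="asr"):
    """Return expected preparation outputs (names inferred from the
    fairseq-train commands in this document) that are not present yet."""
    base = Path(mtedx_root) / lang_pair
    expected = [f"config_{task}.yaml",
                f"train_{task}.tsv", f"valid_{task}.tsv", f"test_{task}.tsv"]
    return [name for name in expected if not (base / name).exists()]
```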
-
-
-## ASR
-#### Training
-Spanish as an example:
-```bash
-fairseq-train ${MTEDX_ROOT}/es-es \
- --config-yaml config_asr.yaml --train-subset train_asr --valid-subset valid_asr \
- --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \
- --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \
- --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \
- --skip-invalid-size-inputs-valid-test \
- --keep-last-epochs 10 --update-freq 8 --patience 10
-```
-For a joint model (using ASR data from all 8 languages):
-```bash
-fairseq-train ${MTEDX_ROOT} \
- --config-yaml config_asr.yaml \
- --train-subset train_es-es_asr,train_fr-fr_asr,train_pt-pt_asr,train_it-it_asr,train_ru-ru_asr,train_el-el_asr,train_ar-ar_asr,train_de-de_asr \
- --valid-subset valid_es-es_asr,valid_fr-fr_asr,valid_pt-pt_asr,valid_it-it_asr,valid_ru-ru_asr,valid_el-el_asr,valid_ar-ar_asr,valid_de-de_asr \
- --save-dir ${MULTILINGUAL_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \
- --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \
- --skip-invalid-size-inputs-valid-test \
- --keep-last-epochs 10 --update-freq 8 --patience 10 \
- --ignore-prefix-size 1
-```
-where `MULTILINGUAL_ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs
-with 1 GPU; reduce it proportionally when training on more GPUs.
-For multilingual models, we prepend target language ID token as target BOS, which should be excluded from
-the training loss via `--ignore-prefix-size 1`.
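The effect of `--ignore-prefix-size 1` can be sketched in plain Python. This is a simplified stand-in for fairseq's label-smoothed criterion, not its actual implementation: the first target position, which holds the prepended language-ID token, contributes nothing to the loss.

```python
def nll_ignoring_prefix(lprobs, targets, ignore_prefix_size=1):
    """Mean negative log-likelihood over target positions, skipping the
    first `ignore_prefix_size` positions (the prepended language-ID token).
    lprobs: per-position lists of log-probabilities; targets: token IDs."""
    total, count = 0.0, 0
    for pos in range(ignore_prefix_size, len(targets)):
        total -= lprobs[pos][targets[pos]]
        count += 1
    return total / count

# The token at position 0 (the language ID) is never scored, so
# changing it leaves the loss unchanged:
lp = [[-0.1, -2.4], [-1.0, -0.5], [-0.7, -0.9]]
loss_a = nll_ignoring_prefix(lp, [0, 1, 0])
loss_b = nll_ignoring_prefix(lp, [1, 1, 0])  # different language ID
```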
-
-#### Inference & Evaluation
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-
-fairseq-generate ${MTEDX_ROOT}/es-es \
- --config-yaml config_asr.yaml --gen-subset test --task speech_to_text \
- --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \
- --skip-invalid-size-inputs-valid-test \
- --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe
-
-# For models trained on joint data
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${MULTILINGUAL_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-
-for LANG in es fr pt it ru el ar de; do
- fairseq-generate ${MTEDX_ROOT} \
- --config-yaml config_asr.yaml --gen-subset test_${LANG}-${LANG}_asr --task speech_to_text \
- --prefix-size 1 --path ${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 40000 --beam 5 \
- --skip-invalid-size-inputs-valid-test \
- --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe
-done
-```
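For reference, the metric that `--scoring wer` reports is word-level edit distance divided by reference length; the flags `--wer-tokenizer 13a --wer-lowercase --wer-remove-punct` normalize text before scoring. The sketch below is not fairseq's scorer and skips that normalization; it shows only the core metric:

```python
def word_error_rate(ref, hyp):
    """Levenshtein distance over words, normalized by reference length."""
    r, h = ref.split(), hyp.split()
    # prev[j] = edit distance between the processed ref prefix and h[:j]
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (rw != hw)))    # substitution
        prev = cur
    return prev[-1] / len(r)
```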
-#### Results
-| Data | --arch | Params | Es | Fr | Pt | It | Ru | El | Ar | De |
-|--------------|--------------------|--------|------|------|------|------|------|-------|-------|-------|
-| Monolingual | s2t_transformer_xs | 10M | 46.4 | 45.6 | 54.8 | 48.0 | 74.7 | 109.5 | 104.4 | 111.1 |
-
-
-## ST
-#### Training
-Es-En as an example:
-```bash
-fairseq-train ${MTEDX_ROOT}/es-en \
- --config-yaml config_st.yaml --train-subset train_st --valid-subset valid_st \
- --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \
- --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \
- --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \
- --skip-invalid-size-inputs-valid-test \
- --keep-last-epochs 10 --update-freq 8 --patience 10
-```
-For a multilingual model (all 12 directions):
-```bash
-fairseq-train ${MTEDX_ROOT} \
- --config-yaml config_st.yaml \
- --train-subset train_el-en_st,train_es-en_st,train_es-fr_st,train_es-it_st,train_es-pt_st,train_fr-en_st,train_fr-es_st,train_fr-pt_st,train_it-en_st,train_it-es_st,train_pt-en_st,train_pt-es_st,train_ru-en_st \
- --valid-subset valid_el-en_st,valid_es-en_st,valid_es-fr_st,valid_es-it_st,valid_es-pt_st,valid_fr-en_st,valid_fr-es_st,valid_fr-pt_st,valid_it-en_st,valid_it-es_st,valid_pt-en_st,valid_pt-es_st,valid_ru-en_st \
- --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \
- --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \
- --skip-invalid-size-inputs-valid-test \
- --keep-last-epochs 10 --update-freq 8 --patience 10 \
- --ignore-prefix-size 1 \
- --load-pretrained-encoder-from ${PRETRAINED_ENCODER}
-```
-where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the checkpoint root path. The ST encoder is pre-trained on ASR
-for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. We set
-`--update-freq 8` to simulate 8 GPUs with 1 GPU; reduce it proportionally when training on more GPUs.
-For multilingual models, we prepend target language ID token as target BOS, which should be excluded from
-the training loss via `--ignore-prefix-size 1`.
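As a back-of-the-envelope check on the gradient-accumulation claim above (the helper name is hypothetical, not a fairseq API), the token budget per optimizer step is the per-GPU `--max-tokens` cap times the number of GPUs times `--update-freq`:

```python
def effective_batch_tokens(max_tokens, num_gpus, update_freq):
    # Upper bound on tokens contributing to one optimizer step:
    # per-GPU batch cap x data-parallel replicas x accumulation steps.
    return max_tokens * num_gpus * update_freq

# 1 GPU with --update-freq 8 matches 8 GPUs with --update-freq 1:
single = effective_batch_tokens(40000, 1, 8)
multi = effective_batch_tokens(40000, 8, 1)
```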
-
-#### Inference & Evaluation
-Average the last 10 checkpoints and evaluate on the `test` split:
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-
-fairseq-generate ${MTEDX_ROOT}/es-en \
- --config-yaml config_st.yaml --gen-subset test --task speech_to_text \
- --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 50000 --beam 5 --scoring sacrebleu --remove-bpe
-
-# For multilingual models
-python scripts/average_checkpoints.py \
- --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-
-for LANGPAIR in es-en es-fr es-pt fr-en fr-es fr-pt pt-en pt-es it-en it-es ru-en el-en; do
- fairseq-generate ${MTEDX_ROOT} \
- --config-yaml config_st.yaml --gen-subset test_${LANGPAIR}_st --task speech_to_text \
- --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 40000 --beam 5 \
- --skip-invalid-size-inputs-valid-test \
- --scoring sacrebleu --remove-bpe
-done
-```
-For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`.
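A minimal greedy sketch of what `--prefix-size 1` does (a simplification of `SequenceGenerator._prefix_tokens` in the source below, with hypothetical names): while the step index is still covered by the prefix, the forced token, here the target language ID, is emitted regardless of model scores.

```python
def next_token(step, scores, prefix):
    """Return the token for this decoding step: the forced prefix token
    while the step is covered by the prefix, otherwise the argmax."""
    if step < len(prefix):
        return prefix[step]
    return max(range(len(scores)), key=scores.__getitem__)

# Step 0 is forced to the language-ID token (id 2) even though the
# model prefers token 1; step 1 falls back to the model's argmax.
forced = next_token(0, [0.1, 5.0, 0.2], prefix=[2])
free = next_token(1, [0.1, 5.0, 0.2], prefix=[2])
```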
-
-#### Results
-| Data | --arch | Params | Es-En | Es-Pt | Es-Fr | Fr-En | Fr-Es | Fr-Pt | Pt-En | Pt-Es | It-En | It-Es | Ru-En | El-En |
-|--------------|--------------------|-----|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
-| Bilingual | s2t_transformer_xs | 10M | 7.0 | 12.2 | 1.7 | 8.9 | 10.6 | 7.9 | 8.1 | 8.7 | 6.4 | 1.0 | 0.7 | 0.6 |
-| Multilingual | s2t_transformer_s | 31M | 12.3 | 17.4 | 6.1 | 12.0 | 13.6 | 13.2 | 12.0 | 13.7 | 10.7 | 13.1 | 0.6 | 0.8 |
-
-
-## Citation
-Please cite as:
-```
-@misc{salesky2021mtedx,
- title={Multilingual TEDx Corpus for Speech Recognition and Translation},
- author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post},
- year={2021},
-}
-
-@inproceedings{wang2020fairseqs2t,
- title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
- author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
- booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
- year = {2020},
-}
-
-@inproceedings{ott2019fairseq,
- title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
- author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
- booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
- year = {2019},
-}
-```
-
-[[Back]](..)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_generator.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_generator.py
deleted file mode 100644
index 2e61140dd834210cfd7ecc14808951f4709c3519..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_generator.py
+++ /dev/null
@@ -1,973 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from typing import Dict, List, Optional
-import sys
-
-import torch
-import torch.nn as nn
-from fairseq import search, utils
-from fairseq.data import data_utils
-from fairseq.models import FairseqIncrementalDecoder
-from torch import Tensor
-from fairseq.ngram_repeat_block import NGramRepeatBlock
-
-
-class SequenceGenerator(nn.Module):
- def __init__(
- self,
- models,
- tgt_dict,
- beam_size=1,
- max_len_a=0,
- max_len_b=200,
- max_len=0,
- min_len=1,
- normalize_scores=True,
- len_penalty=1.0,
- unk_penalty=0.0,
- temperature=1.0,
- match_source_len=False,
- no_repeat_ngram_size=0,
- search_strategy=None,
- eos=None,
- symbols_to_strip_from_output=None,
- lm_model=None,
- lm_weight=1.0,
- ):
- """Generates translations of a given source sentence.
-
- Args:
- models (List[~fairseq.models.FairseqModel]): ensemble of models,
- currently support fairseq.models.TransformerModel for scripting
- beam_size (int, optional): beam width (default: 1)
- max_len_a/b (int, optional): generate sequences of maximum length
- ax + b, where x is the source length
- max_len (int, optional): the maximum length of the generated output
- (not including end-of-sentence)
- min_len (int, optional): the minimum length of the generated output
- (not including end-of-sentence)
- normalize_scores (bool, optional): normalize scores by the length
- of the output (default: True)
- len_penalty (float, optional): length penalty, where <1.0 favors
- shorter, >1.0 favors longer sentences (default: 1.0)
- unk_penalty (float, optional): unknown word penalty, where <0
- produces more unks, >0 produces fewer (default: 0.0)
- temperature (float, optional): temperature, where values
- >1.0 produce more uniform samples and values <1.0 produce
- sharper samples (default: 1.0)
- match_source_len (bool, optional): outputs should match the source
- length (default: False)
- """
- super().__init__()
- if isinstance(models, EnsembleModel):
- self.model = models
- else:
- self.model = EnsembleModel(models)
- self.tgt_dict = tgt_dict
- self.pad = tgt_dict.pad()
- self.unk = tgt_dict.unk()
- self.eos = tgt_dict.eos() if eos is None else eos
- self.symbols_to_strip_from_output = (
- symbols_to_strip_from_output.union({self.eos})
- if symbols_to_strip_from_output is not None
- else {self.eos}
- )
- self.vocab_size = len(tgt_dict)
- self.beam_size = beam_size
- # the max beam size is the dictionary size - 1, since we never select pad
- self.beam_size = min(beam_size, self.vocab_size - 1)
- self.max_len_a = max_len_a
- self.max_len_b = max_len_b
- self.min_len = min_len
- self.max_len = max_len or self.model.max_decoder_positions()
-
- self.normalize_scores = normalize_scores
- self.len_penalty = len_penalty
- self.unk_penalty = unk_penalty
- self.temperature = temperature
- self.match_source_len = match_source_len
-
- if no_repeat_ngram_size > 0:
- self.repeat_ngram_blocker = NGramRepeatBlock(no_repeat_ngram_size)
- else:
- self.repeat_ngram_blocker = None
-
- assert temperature > 0, "--temperature must be greater than 0"
-
- self.search = (
- search.BeamSearch(tgt_dict) if search_strategy is None else search_strategy
- )
- # We only need to set src_lengths in LengthConstrainedBeamSearch.
- # As a module attribute, setting it would break in multithread
- # settings when the model is shared.
- self.should_set_src_lengths = (
- hasattr(self.search, "needs_src_lengths") and self.search.needs_src_lengths
- )
-
- self.model.eval()
-
- self.lm_model = lm_model
- self.lm_weight = lm_weight
- if self.lm_model is not None:
- self.lm_model.eval()
-
- def cuda(self):
- self.model.cuda()
- return self
-
- @torch.no_grad()
- def forward(
- self,
- sample: Dict[str, Dict[str, Tensor]],
- prefix_tokens: Optional[Tensor] = None,
- bos_token: Optional[int] = None,
- ):
- """Generate a batch of translations.
-
- Args:
- sample (dict): batch
- prefix_tokens (torch.LongTensor, optional): force decoder to begin
- with these tokens
- bos_token (int, optional): beginning of sentence token
- (default: self.eos)
- """
- return self._generate(sample, prefix_tokens, bos_token=bos_token)
-
- # TODO(myleott): unused, deprecate after pytorch-translate migration
- def generate_batched_itr(self, data_itr, beam_size=None, cuda=False, timer=None):
- """Iterate over a batched dataset and yield individual translations.
- Args:
- cuda (bool, optional): use GPU for generation
- timer (StopwatchMeter, optional): time generations
- """
- for sample in data_itr:
- s = utils.move_to_cuda(sample) if cuda else sample
- if "net_input" not in s:
- continue
- input = s["net_input"]
- # model.forward normally channels prev_output_tokens into the decoder
- # separately, but SequenceGenerator directly calls model.encoder
- encoder_input = {
- k: v for k, v in input.items() if k != "prev_output_tokens"
- }
- if timer is not None:
- timer.start()
- with torch.no_grad():
- hypos = self.generate(encoder_input)
- if timer is not None:
- timer.stop(sum(len(h[0]["tokens"]) for h in hypos))
- for i, id in enumerate(s["id"].data):
- # remove padding
- src = utils.strip_pad(input["src_tokens"].data[i, :], self.pad)
- ref = (
- utils.strip_pad(s["target"].data[i, :], self.pad)
- if s["target"] is not None
- else None
- )
- yield id, src, ref, hypos[i]
-
- @torch.no_grad()
- def generate(self, models, sample: Dict[str, Dict[str, Tensor]], **kwargs) -> List[List[Dict[str, Tensor]]]:
- """Generate translations. Match the api of other fairseq generators.
-
- Args:
- models (List[~fairseq.models.FairseqModel]): ensemble of models
- sample (dict): batch
- prefix_tokens (torch.LongTensor, optional): force decoder to begin
- with these tokens
- constraints (torch.LongTensor, optional): force decoder to include
- the list of constraints
- bos_token (int, optional): beginning of sentence token
- (default: self.eos)
- """
- return self._generate(sample, **kwargs)
-
- def _generate(
- self,
- sample: Dict[str, Dict[str, Tensor]],
- prefix_tokens: Optional[Tensor] = None,
- constraints: Optional[Tensor] = None,
- bos_token: Optional[int] = None,
- ):
- incremental_states = torch.jit.annotate(
- List[Dict[str, Dict[str, Optional[Tensor]]]],
- [
- torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {})
- for i in range(self.model.models_size)
- ],
- )
- net_input = sample["net_input"]
-
- if "src_tokens" in net_input:
- src_tokens = net_input["src_tokens"]
- # length of the source text being the character length except EndOfSentence and pad
- src_lengths = (
- (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1)
- )
- elif "source" in net_input:
- src_tokens = net_input["source"]
- src_lengths = (
- net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1)
- if net_input["padding_mask"] is not None
- else torch.tensor(src_tokens.size(-1)).to(src_tokens)
- )
- elif "features" in net_input:
- src_tokens = net_input["features"]
- src_lengths = (
- net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1)
- if net_input["padding_mask"] is not None
- else torch.tensor(src_tokens.size(-1)).to(src_tokens)
- )
- else:
- raise Exception("expected src_tokens or source in net input. input keys: " + str(net_input.keys()))
-
- # bsz: total number of sentences in beam
- # Note that src_tokens may have more than 2 dimensions (i.e. audio features)
- bsz, src_len = src_tokens.size()[:2]
- beam_size = self.beam_size
-
- if constraints is not None and not self.search.supports_constraints:
- raise NotImplementedError(
- "Target-side constraints were provided, but search method doesn't support them"
- )
-
- # Initialize constraints, when active
- self.search.init_constraints(constraints, beam_size)
-
- max_len: int = -1
- if self.match_source_len:
- max_len = src_lengths.max().item()
- else:
- max_len = min(
- int(self.max_len_a * src_len + self.max_len_b),
- self.max_len - 1,
- )
- assert (
- self.min_len <= max_len
- ), "min_len cannot be larger than max_len, please adjust these!"
- # compute the encoder output for each beam
- with torch.autograd.profiler.record_function("EnsembleModel: forward_encoder"):
- encoder_outs = self.model.forward_encoder(net_input)
-
- # placeholder of indices for bsz * beam_size to hold tokens and accumulative scores
- new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1)
- new_order = new_order.to(src_tokens.device).long()
- encoder_outs = self.model.reorder_encoder_out(encoder_outs, new_order)
- # ensure encoder_outs is a List.
- assert encoder_outs is not None
-
- # initialize buffers
- scores = (
- torch.zeros(bsz * beam_size, max_len + 1).to(src_tokens).float()
- ) # +1 for eos; pad is never chosen for scoring
- tokens = (
- torch.zeros(bsz * beam_size, max_len + 2)
- .to(src_tokens)
- .long()
- .fill_(self.pad)
- ) # +2 for eos and pad
- tokens[:, 0] = self.eos if bos_token is None else bos_token
- attn: Optional[Tensor] = None
-
- # A list that indicates candidates that should be ignored.
- # For example, suppose we're sampling and have already finalized 2/5
- # samples. Then cands_to_ignore would mark 2 positions as being ignored,
- # so that we only finalize the remaining 3 samples.
- cands_to_ignore = (
- torch.zeros(bsz, beam_size).to(src_tokens).eq(-1)
- ) # forward and backward-compatible False mask
-
- # list of completed sentences
- finalized = torch.jit.annotate(
- List[List[Dict[str, Tensor]]],
- [torch.jit.annotate(List[Dict[str, Tensor]], []) for i in range(bsz)],
-        ) # contains lists of dictionaries of information about the hypothesis being finalized at each step
-
- # a boolean array indicating if the sentence at the index is finished or not
- finished = [False for i in range(bsz)]
- num_remaining_sent = bsz # number of sentences remaining
-
- # number of candidate hypos per step
- cand_size = 2 * beam_size # 2 x beam size in case half are EOS
-
- # offset arrays for converting between different indexing schemes
- bbsz_offsets = (
- (torch.arange(0, bsz) * beam_size)
- .unsqueeze(1)
- .type_as(tokens)
- .to(src_tokens.device)
- )
- cand_offsets = torch.arange(0, cand_size).type_as(tokens).to(src_tokens.device)
-
- reorder_state: Optional[Tensor] = None
- batch_idxs: Optional[Tensor] = None
-
- original_batch_idxs: Optional[Tensor] = None
- if "id" in sample and isinstance(sample["id"], Tensor):
- original_batch_idxs = sample["id"]
- else:
- original_batch_idxs = torch.arange(0, bsz).type_as(tokens)
-
- for step in range(max_len + 1): # one extra step for EOS marker
- # reorder decoder internal states based on the prev choice of beams
- if reorder_state is not None:
- if batch_idxs is not None:
- # update beam indices to take into account removed sentences
- corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as(
- batch_idxs
- )
- reorder_state.view(-1, beam_size).add_(
- corr.unsqueeze(-1) * beam_size
- )
- original_batch_idxs = original_batch_idxs[batch_idxs]
- self.model.reorder_incremental_state(incremental_states, reorder_state)
- encoder_outs = self.model.reorder_encoder_out(
- encoder_outs, reorder_state
- )
- with torch.autograd.profiler.record_function("EnsembleModel: forward_decoder"):
- lprobs, avg_attn_scores = self.model.forward_decoder(
- tokens[:, : step + 1],
- encoder_outs,
- incremental_states,
- self.temperature,
- )
-
- if self.lm_model is not None:
- lm_out = self.lm_model(tokens[:, : step + 1])
- probs = self.lm_model.get_normalized_probs(
- lm_out, log_probs=True, sample=None
- )
- probs = probs[:, -1, :] * self.lm_weight
- lprobs += probs
- # handle prefix tokens (possibly with different lengths)
- if (
- prefix_tokens is not None
- and step < prefix_tokens.size(1)
- and step < max_len
- ):
- lprobs, tokens, scores = self._prefix_tokens(
- step, lprobs, scores, tokens, prefix_tokens, beam_size
- )
- elif step < self.min_len:
- # minimum length constraint (does not apply if using prefix_tokens)
- lprobs[:, self.eos] = -math.inf
-
- lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs)
-
- lprobs[:, self.pad] = -math.inf # never select pad
- lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty
-
- # handle max length constraint
- if step >= max_len:
- lprobs[:, : self.eos] = -math.inf
- lprobs[:, self.eos + 1 :] = -math.inf
-
-            # Record attention scores (only supported when avg_attn_scores is a Tensor)
- if avg_attn_scores is not None:
- if attn is None:
- attn = torch.empty(
- bsz * beam_size, avg_attn_scores.size(1), max_len + 2
- ).to(scores)
- attn[:, :, step + 1].copy_(avg_attn_scores)
-
- scores = scores.type_as(lprobs)
- eos_bbsz_idx = torch.empty(0).to(
- tokens
- ) # indices of hypothesis ending with eos (finished sentences)
- eos_scores = torch.empty(0).to(
- scores
- ) # scores of hypothesis ending with eos (finished sentences)
-
- if self.should_set_src_lengths:
- self.search.set_src_lengths(src_lengths)
-
- if self.repeat_ngram_blocker is not None:
- lprobs = self.repeat_ngram_blocker(tokens, lprobs, bsz, beam_size, step)
-
- # Shape: (batch, cand_size)
- cand_scores, cand_indices, cand_beams = self.search.step(
- step,
- lprobs.view(bsz, -1, self.vocab_size),
- scores.view(bsz, beam_size, -1)[:, :, :step],
- tokens[:, : step + 1],
- original_batch_idxs,
- )
-
- # cand_bbsz_idx contains beam indices for the top candidate
- # hypotheses, with a range of values: [0, bsz*beam_size),
- # and dimensions: [bsz, cand_size]
- cand_bbsz_idx = cand_beams.add(bbsz_offsets)
-
- # finalize hypotheses that end in eos
- # Shape of eos_mask: (batch size, beam size)
- eos_mask = cand_indices.eq(self.eos) & cand_scores.ne(-math.inf)
- eos_mask[:, :beam_size][cands_to_ignore] = torch.tensor(0).to(eos_mask)
-
- # only consider eos when it's among the top beam_size indices
- # Now we know what beam item(s) to finish
- # Shape: 1d list of absolute-numbered
- eos_bbsz_idx = torch.masked_select(
- cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size]
- )
-
- finalized_sents: List[int] = []
- if eos_bbsz_idx.numel() > 0:
- eos_scores = torch.masked_select(
- cand_scores[:, :beam_size], mask=eos_mask[:, :beam_size]
- )
-
- finalized_sents = self.finalize_hypos(
- step,
- eos_bbsz_idx,
- eos_scores,
- tokens,
- scores,
- finalized,
- finished,
- beam_size,
- attn,
- src_lengths,
- max_len,
- )
- num_remaining_sent -= len(finalized_sents)
-
- assert num_remaining_sent >= 0
- if num_remaining_sent == 0:
- break
- if self.search.stop_on_max_len and step >= max_len:
- break
-            assert step < max_len, f"{step} < {max_len}"
-
- # Remove finalized sentences (ones for which {beam_size}
- # finished hypotheses have been generated) from the batch.
- if len(finalized_sents) > 0:
- new_bsz = bsz - len(finalized_sents)
-
- # construct batch_idxs which holds indices of batches to keep for the next pass
- batch_mask = torch.ones(
- bsz, dtype=torch.bool, device=cand_indices.device
- )
- batch_mask[finalized_sents] = False
- # TODO replace `nonzero(as_tuple=False)` after TorchScript supports it
- batch_idxs = torch.arange(
- bsz, device=cand_indices.device
- ).masked_select(batch_mask)
-
- # Choose the subset of the hypothesized constraints that will continue
- self.search.prune_sentences(batch_idxs)
-
- eos_mask = eos_mask[batch_idxs]
- cand_beams = cand_beams[batch_idxs]
- bbsz_offsets.resize_(new_bsz, 1)
- cand_bbsz_idx = cand_beams.add(bbsz_offsets)
- cand_scores = cand_scores[batch_idxs]
- cand_indices = cand_indices[batch_idxs]
-
- if prefix_tokens is not None:
- prefix_tokens = prefix_tokens[batch_idxs]
- src_lengths = src_lengths[batch_idxs]
- cands_to_ignore = cands_to_ignore[batch_idxs]
-
- scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
- tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
- if attn is not None:
- attn = attn.view(bsz, -1)[batch_idxs].view(
- new_bsz * beam_size, attn.size(1), -1
- )
- bsz = new_bsz
- else:
- batch_idxs = None
-
- # Set active_mask so that values > cand_size indicate eos hypos
- # and values < cand_size indicate candidate active hypos.
- # After, the min values per row are the top candidate active hypos
-
-            # Rewrite the operator since element-wise or is not supported in TorchScript.
-
- eos_mask[:, :beam_size] = ~((~cands_to_ignore) & (~eos_mask[:, :beam_size]))
- active_mask = torch.add(
- eos_mask.type_as(cand_offsets) * cand_size,
- cand_offsets[: eos_mask.size(1)],
- )
-
- # get the top beam_size active hypotheses, which are just
- # the hypos with the smallest values in active_mask.
- # {active_hypos} indicates which {beam_size} hypotheses
- # from the list of {2 * beam_size} candidates were
- # selected. Shapes: (batch size, beam size)
- new_cands_to_ignore, active_hypos = torch.topk(
- active_mask, k=beam_size, dim=1, largest=False
- )
-
- # update cands_to_ignore to ignore any finalized hypos.
- cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size]
- # Make sure there is at least one active item for each sentence in the batch.
- assert (~cands_to_ignore).any(dim=1).all()
-
- # update cands_to_ignore to ignore any finalized hypos
-
- # {active_bbsz_idx} denotes which beam number is continued for each new hypothesis (a beam
- # can be selected more than once).
- active_bbsz_idx = torch.gather(cand_bbsz_idx, dim=1, index=active_hypos)
- active_scores = torch.gather(cand_scores, dim=1, index=active_hypos)
-
- active_bbsz_idx = active_bbsz_idx.view(-1)
- active_scores = active_scores.view(-1)
-
- # copy tokens and scores for active hypotheses
-
- # Set the tokens for each beam (can select the same row more than once)
- tokens[:, : step + 1] = torch.index_select(
- tokens[:, : step + 1], dim=0, index=active_bbsz_idx
- )
- # Select the next token for each of them
- tokens.view(bsz, beam_size, -1)[:, :, step + 1] = torch.gather(
- cand_indices, dim=1, index=active_hypos
- )
- if step > 0:
- scores[:, :step] = torch.index_select(
- scores[:, :step], dim=0, index=active_bbsz_idx
- )
- scores.view(bsz, beam_size, -1)[:, :, step] = torch.gather(
- cand_scores, dim=1, index=active_hypos
- )
-
- # Update constraints based on which candidates were selected for the next beam
- self.search.update_constraints(active_hypos)
-
- # copy attention for active hypotheses
- if attn is not None:
- attn[:, :, : step + 2] = torch.index_select(
- attn[:, :, : step + 2], dim=0, index=active_bbsz_idx
- )
-
- # reorder incremental state in decoder
- reorder_state = active_bbsz_idx
-
- # sort by score descending
- for sent in range(len(finalized)):
- scores = torch.tensor(
- [float(elem["score"].item()) for elem in finalized[sent]]
- )
- _, sorted_scores_indices = torch.sort(scores, descending=True)
- finalized[sent] = [finalized[sent][ssi] for ssi in sorted_scores_indices]
- finalized[sent] = torch.jit.annotate(
- List[Dict[str, Tensor]], finalized[sent]
- )
- return finalized
-
- def _prefix_tokens(
- self, step: int, lprobs, scores, tokens, prefix_tokens, beam_size: int
- ):
- """Handle prefix tokens"""
- prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1)
- prefix_lprobs = lprobs.gather(-1, prefix_toks.unsqueeze(-1))
- prefix_mask = prefix_toks.ne(self.pad)
- lprobs[prefix_mask] = torch.min(prefix_lprobs) - 1
- lprobs[prefix_mask] = lprobs[prefix_mask].scatter(
- -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lprobs[prefix_mask]
- )
- # if prefix includes eos, then we should make sure tokens and
- # scores are the same across all beams
- eos_mask = prefix_toks.eq(self.eos)
- if eos_mask.any():
- # validate that the first beam matches the prefix
- first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[
- :, 0, 1 : step + 1
- ]
- eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0]
- target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step]
- assert (first_beam == target_prefix).all()
-
- # copy tokens, scores and lprobs from the first beam to all beams
- tokens = self.replicate_first_beam(tokens, eos_mask_batch_dim, beam_size)
- scores = self.replicate_first_beam(scores, eos_mask_batch_dim, beam_size)
- lprobs = self.replicate_first_beam(lprobs, eos_mask_batch_dim, beam_size)
- return lprobs, tokens, scores
-
- def replicate_first_beam(self, tensor, mask, beam_size: int):
- tensor = tensor.view(-1, beam_size, tensor.size(-1))
- tensor[mask] = tensor[mask][:, :1, :]
- return tensor.view(-1, tensor.size(-1))
-
- def finalize_hypos(
- self,
- step: int,
- bbsz_idx,
- eos_scores,
- tokens,
- scores,
- finalized: List[List[Dict[str, Tensor]]],
- finished: List[bool],
- beam_size: int,
- attn: Optional[Tensor],
- src_lengths,
- max_len: int,
- ):
- """Finalize hypothesis, store finalized information in `finalized`, and change `finished` accordingly.
- A sentence is finalized when {beam_size} finished items have been collected for it.
-
- Returns number of sentences (not beam items) being finalized.
- These will be removed from the batch and not processed further.
- Args:
- bbsz_idx (Tensor):
- """
- assert bbsz_idx.numel() == eos_scores.numel()
-
- # clone relevant token and attention tensors.
- # tokens is (batch * beam, max_len). So the index_select
- # gets the newly EOS rows, then selects cols 1..{step + 2}
- tokens_clone = tokens.index_select(0, bbsz_idx)[
- :, 1 : step + 2
- ] # skip the first index, which is EOS
-
- tokens_clone[:, step] = self.eos
- attn_clone = (
- attn.index_select(0, bbsz_idx)[:, :, 1 : step + 2]
- if attn is not None
- else None
- )
-
- # compute scores per token position
- pos_scores = scores.index_select(0, bbsz_idx)[:, : step + 1]
- pos_scores[:, step] = eos_scores
- # convert from cumulative to per-position scores
- pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1]
-
- # normalize sentence-level scores
- if self.normalize_scores:
- eos_scores /= (step + 1) ** self.len_penalty
-
- # cum_unfin records which sentences in the batch are finished.
- # It helps match indexing between (a) the original sentences
- # in the batch and (b) the current, possibly-reduced set of
- # sentences.
- cum_unfin: List[int] = []
- prev = 0
- for f in finished:
- if f:
- prev += 1
- else:
- cum_unfin.append(prev)
- cum_fin_tensor = torch.tensor(cum_unfin, dtype=torch.int).to(bbsz_idx)
-
- unfin_idx = bbsz_idx // beam_size
- sent = unfin_idx + torch.index_select(cum_fin_tensor, 0, unfin_idx)
-
- # Create a set of "{sent}{unfin_idx}", where
- # "unfin_idx" is the index in the current (possibly reduced)
- # list of sentences, and "sent" is the index in the original,
- # unreduced batch
- # For every finished beam item
- # sentence index in the current (possibly reduced) batch
- seen = (sent << 32) + unfin_idx
- unique_seen: List[int] = torch.unique(seen).tolist()
-
- if self.match_source_len:
- condition = step > torch.index_select(src_lengths, 0, unfin_idx)
- eos_scores = torch.where(condition, torch.tensor(-math.inf), eos_scores)
- sent_list: List[int] = sent.tolist()
- for i in range(bbsz_idx.size()[0]):
- # An input sentence (among those in a batch) is finished when
- # beam_size hypotheses have been collected for it
- if len(finalized[sent_list[i]]) < beam_size:
- if attn_clone is not None:
- # remove padding tokens from attn scores
- hypo_attn = attn_clone[i]
- else:
- hypo_attn = torch.empty(0)
-
- finalized[sent_list[i]].append(
- {
- "tokens": tokens_clone[i],
- "score": eos_scores[i],
- "attention": hypo_attn, # src_len x tgt_len
- "alignment": torch.empty(0),
- "positional_scores": pos_scores[i],
- }
- )
-
- newly_finished: List[int] = []
- for unique_s in unique_seen:
- # check termination conditions for this sentence
- unique_sent: int = unique_s >> 32
- unique_unfin_idx: int = unique_s - (unique_sent << 32)
-
- if not finished[unique_sent] and self.is_finished(
- step, unique_unfin_idx, max_len, len(finalized[unique_sent]), beam_size
- ):
- finished[unique_sent] = True
- newly_finished.append(unique_unfin_idx)
-
- return newly_finished
-
- def is_finished(
- self,
- step: int,
- unfin_idx: int,
- max_len: int,
- finalized_sent_len: int,
- beam_size: int,
- ):
- """
- Check whether decoding for a sentence is finished, which
- occurs when the list of finalized sentences has reached the
- beam size, or when we reach the maximum length.
- """
- assert finalized_sent_len <= beam_size
- if finalized_sent_len == beam_size or step == max_len:
- return True
- return False
-
-
-class EnsembleModel(nn.Module):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__()
- self.models_size = len(models)
- # method '__len__' is not supported in ModuleList for torch script
- self.single_model = models[0]
- self.models = nn.ModuleList(models)
-
- self.has_incremental: bool = False
- if all(
- hasattr(m, "decoder") and isinstance(m.decoder, FairseqIncrementalDecoder)
- for m in models
- ):
- self.has_incremental = True
-
- def forward(self):
- pass
-
- def has_encoder(self):
- return hasattr(self.single_model, "encoder")
-
- def has_incremental_states(self):
- return self.has_incremental
-
- def max_decoder_positions(self):
- return min([m.max_decoder_positions() for m in self.models if hasattr(m, "max_decoder_positions")] + [sys.maxsize])
-
- @torch.jit.export
- def forward_encoder(self, net_input: Dict[str, Tensor]):
- if not self.has_encoder():
- return None
- return [model.encoder.forward_torchscript(net_input) for model in self.models]
-
- @torch.jit.export
- def forward_decoder(
- self,
- tokens,
- encoder_outs: List[Dict[str, List[Tensor]]],
- incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]],
- temperature: float = 1.0,
- ):
- log_probs = []
- avg_attn: Optional[Tensor] = None
- encoder_out: Optional[Dict[str, List[Tensor]]] = None
- for i, model in enumerate(self.models):
- if self.has_encoder():
- encoder_out = encoder_outs[i]
- # decode each model
- if self.has_incremental_states():
- decoder_out = model.decoder.forward(
- tokens,
- encoder_out=encoder_out,
- incremental_state=incremental_states[i],
- )
- else:
- if hasattr(model, "decoder"):
- decoder_out = model.decoder.forward(tokens, encoder_out=encoder_out)
- else:
- decoder_out = model.forward(tokens)
-
- attn: Optional[Tensor] = None
- decoder_len = len(decoder_out)
- if decoder_len > 1 and decoder_out[1] is not None:
- if isinstance(decoder_out[1], Tensor):
- attn = decoder_out[1]
- else:
- attn_holder = decoder_out[1]["attn"]
- if isinstance(attn_holder, Tensor):
- attn = attn_holder
- elif attn_holder is not None:
- attn = attn_holder[0]
- if attn is not None:
- attn = attn[:, -1, :]
-
- decoder_out_tuple = (
- decoder_out[0][:, -1:, :].div_(temperature),
- None if decoder_len <= 1 else decoder_out[1],
- )
- probs = model.get_normalized_probs(
- decoder_out_tuple, log_probs=True, sample=None
- )
- probs = probs[:, -1, :]
- if self.models_size == 1:
- return probs, attn
-
- log_probs.append(probs)
- if attn is not None:
- if avg_attn is None:
- avg_attn = attn
- else:
- avg_attn.add_(attn)
-
- avg_probs = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log(
- self.models_size
- )
-
- if avg_attn is not None:
- avg_attn.div_(self.models_size)
- return avg_probs, avg_attn
-
- @torch.jit.export
- def reorder_encoder_out(
- self, encoder_outs: Optional[List[Dict[str, List[Tensor]]]], new_order
- ):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- new_outs: List[Dict[str, List[Tensor]]] = []
- if not self.has_encoder():
- return new_outs
- for i, model in enumerate(self.models):
- assert encoder_outs is not None
- new_outs.append(
- model.encoder.reorder_encoder_out(encoder_outs[i], new_order)
- )
- return new_outs
-
- @torch.jit.export
- def reorder_incremental_state(
- self,
- incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]],
- new_order,
- ):
- if not self.has_incremental_states():
- return
- for i, model in enumerate(self.models):
- model.decoder.reorder_incremental_state_scripting(
- incremental_states[i], new_order
- )
-
-
-class SequenceGeneratorWithAlignment(SequenceGenerator):
- def __init__(
- self, models, tgt_dict, left_pad_target=False, print_alignment="hard", **kwargs
- ):
- """Generates translations of a given source sentence.
-
- Produces alignments following "Jointly Learning to Align and
- Translate with Transformer Models" (Garg et al., EMNLP 2019).
-
- Args:
- left_pad_target (bool, optional): Whether or not the
- hypothesis should be left padded or not when they are
- teacher forced for generating alignments.
- """
- super().__init__(EnsembleModelWithAlignment(models), tgt_dict, **kwargs)
- self.left_pad_target = left_pad_target
-
- if print_alignment == "hard":
- self.extract_alignment = utils.extract_hard_alignment
- elif print_alignment == "soft":
- self.extract_alignment = utils.extract_soft_alignment
-
- @torch.no_grad()
- def generate(self, models, sample, **kwargs):
- finalized = super()._generate(sample, **kwargs)
-
- src_tokens = sample["net_input"]["src_tokens"]
- bsz = src_tokens.shape[0]
- beam_size = self.beam_size
- (
- src_tokens,
- src_lengths,
- prev_output_tokens,
- tgt_tokens,
- ) = self._prepare_batch_for_alignment(sample, finalized)
- if any(getattr(m, "full_context_alignment", False) for m in self.model.models):
- attn = self.model.forward_align(src_tokens, src_lengths, prev_output_tokens)
- else:
- attn = [
- finalized[i // beam_size][i % beam_size]["attention"].transpose(1, 0)
- for i in range(bsz * beam_size)
- ]
-
- if src_tokens.device != "cpu":
- src_tokens = src_tokens.to("cpu")
- tgt_tokens = tgt_tokens.to("cpu")
- attn = [i.to("cpu") for i in attn]
-
- # Process the attn matrix to extract hard alignments.
- for i in range(bsz * beam_size):
- alignment = self.extract_alignment(
- attn[i], src_tokens[i], tgt_tokens[i], self.pad, self.eos
- )
- finalized[i // beam_size][i % beam_size]["alignment"] = alignment
- return finalized
-
- def _prepare_batch_for_alignment(self, sample, hypothesis):
- src_tokens = sample["net_input"]["src_tokens"]
- bsz = src_tokens.shape[0]
- src_tokens = (
- src_tokens[:, None, :]
- .expand(-1, self.beam_size, -1)
- .contiguous()
- .view(bsz * self.beam_size, -1)
- )
- src_lengths = sample["net_input"]["src_lengths"]
- src_lengths = (
- src_lengths[:, None]
- .expand(-1, self.beam_size)
- .contiguous()
- .view(bsz * self.beam_size)
- )
- prev_output_tokens = data_utils.collate_tokens(
- [beam["tokens"] for example in hypothesis for beam in example],
- self.pad,
- self.eos,
- self.left_pad_target,
- move_eos_to_beginning=True,
- )
- tgt_tokens = data_utils.collate_tokens(
- [beam["tokens"] for example in hypothesis for beam in example],
- self.pad,
- self.eos,
- self.left_pad_target,
- move_eos_to_beginning=False,
- )
- return src_tokens, src_lengths, prev_output_tokens, tgt_tokens
-
-
-class EnsembleModelWithAlignment(EnsembleModel):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__(models)
-
- def forward_align(self, src_tokens, src_lengths, prev_output_tokens):
- avg_attn = None
- for model in self.models:
- decoder_out = model(src_tokens, src_lengths, prev_output_tokens)
- attn = decoder_out[1]["attn"][0]
- if avg_attn is None:
- avg_attn = attn
- else:
- avg_attn.add_(attn)
- if len(self.models) > 1:
- avg_attn.div_(len(self.models))
- return avg_attn
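The cumulative-to-per-position score conversion and length normalization in `finalize_hypos` above can be illustrated standalone. A minimal pure-Python sketch (the real code operates on torch tensors; the numeric values here are made up for illustration):

```python
# Sketch of the score bookkeeping in finalize_hypos: `scores` holds
# cumulative log-probs along each hypothesis, so per-position scores are
# recovered by differencing, and the sentence-level score is divided by
# (length ** len_penalty). Values are illustrative only.
len_penalty = 1.0
cum_scores = [-0.5, -1.2, -2.0, -2.6]   # cumulative log-prob up to EOS
step = len(cum_scores) - 1              # EOS emitted at this position

eos_score = cum_scores[step]

# pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1] in tensor form
pos_scores = [cum_scores[0]] + [
    cum_scores[i] - cum_scores[i - 1] for i in range(1, len(cum_scores))
]

# eos_scores /= (step + 1) ** self.len_penalty in tensor form
normalized = eos_score / (step + 1) ** len_penalty
```

With `len_penalty = 1.0` this is plain averaging by length; values above 1 increasingly favor shorter hypotheses.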
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/data/file_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/data/file_dataset.py
deleted file mode 100644
index 0dcbe9a3e02cb5503d02e2511c75a2871fcc70f6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/data/file_dataset.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import os
-import torch
-import pickle
-
-
-class FileDataset:
- def __init__(self, file_path, selected_col_ids=None, dtypes=None, separator="\t", cached_index=False):
- self.file_path = file_path
- assert os.path.exists(self.file_path), "Error: the local datafile {} does not exist!".format(self.file_path)
-
- self.separator = separator
- if selected_col_ids is None:
- # default to all fields
- self.selected_col_ids = list(
- range(len(open(self.file_path).readline().rstrip("\n").split(self.separator))))
- else:
- self.selected_col_ids = [int(col_id) for col_id in selected_col_ids.split(",")]
- if dtypes is None:
- # default to str
- self.dtypes = [str for col_id in self.selected_col_ids]
- else:
- self.dtypes = [eval(col_dtype) for col_dtype in dtypes.split(",")]
- assert len(self.dtypes) == len(self.selected_col_ids)
-
- self.data_cnt = 0
- try:
- self.slice_id = torch.distributed.get_rank()
- self.slice_count = torch.distributed.get_world_size()
- except Exception:
- self.slice_id = 0
- self.slice_count = 1
- self.cached_index = cached_index
- self._init_seek_index()
- self._reader = self._get_reader()
- print("file {} slice_id {} row count {} total row count {}".format(
- self.file_path, self.slice_id, self.row_count, self.total_row_count)
- )
-
- def _init_seek_index(self):
- if self.cached_index:
- cache_path = "{}.index".format(self.file_path)
- assert os.path.exists(cache_path), "cache file {} does not exist!".format(cache_path)
- self.total_row_count, self.lineid_to_offset = pickle.load(open(cache_path, "rb"))
- print("local datafile {} slice_id {} use cached row_count and line_idx-to-offset mapping".format(
- self.file_path, self.slice_id))
- else:
- # make an iteration over the file to get row_count and line_idx-to-offset mapping
- fp = open(self.file_path, "r")
- print("local datafile {} slice_id {} begin to initialize row_count and line_idx-to-offset mapping".format(
- self.file_path, self.slice_id))
- self.total_row_count = 0
- offset = 0
- self.lineid_to_offset = []
- for line in fp:
- self.lineid_to_offset.append(offset)
- self.total_row_count += 1
- offset += len(line)
- self._compute_start_pos_and_row_count()
- print("local datafile {} slice_id {} finished initializing row_count and line_idx-to-offset mapping".format(
- self.file_path, self.slice_id))
-
- def _compute_start_pos_and_row_count(self):
- self.row_count = self.total_row_count // self.slice_count
- if self.slice_id < self.total_row_count - self.row_count * self.slice_count:
- self.row_count += 1
- self.start_pos = self.row_count * self.slice_id
- else:
- self.start_pos = self.row_count * self.slice_id + (self.total_row_count - self.row_count * self.slice_count)
-
- def _get_reader(self):
- fp = open(self.file_path, "r")
- fp.seek(self.lineid_to_offset[self.start_pos])
- return fp
-
- def _seek(self, offset=0):
- try:
- print("slice_id {} seek offset {}".format(self.slice_id, self.start_pos + offset))
- self._reader.seek(self.lineid_to_offset[self.start_pos + offset])
- self.data_cnt = offset
- except Exception:
- print("slice_id {} seek offset {}".format(self.slice_id, offset))
- self._reader.seek(self.lineid_to_offset[offset])
- self.data_cnt = offset
-
- def __del__(self):
- self._reader.close()
-
- def __len__(self):
- return self.row_count
-
- def get_total_row_count(self):
- return self.total_row_count
-
- def __getitem__(self, index):
- if self.data_cnt == self.row_count:
- print("reach the end of datafile, start a new reader")
- self.data_cnt = 0
- self._reader = self._get_reader()
- column_l = self._reader.readline().rstrip("\n").split(self.separator)
- self.data_cnt += 1
- column_l = [dtype(column_l[col_id]) for col_id, dtype in zip(self.selected_col_ids, self.dtypes)]
- return column_l
\ No newline at end of file
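The shard arithmetic in `_compute_start_pos_and_row_count` above distributes `total_row_count` rows over `slice_count` workers, giving the first `total mod count` workers one extra row so that the slices tile the file exactly. A standalone sketch (the function name is illustrative, not part of the original class):

```python
def shard_bounds(total_row_count: int, slice_count: int, slice_id: int):
    # Mirrors FileDataset._compute_start_pos_and_row_count: each worker gets
    # total // count rows, and the first (total % count) workers get one
    # extra row, so every row belongs to exactly one slice.
    row_count = total_row_count // slice_count
    remainder = total_row_count - row_count * slice_count
    if slice_id < remainder:
        row_count += 1
        start_pos = row_count * slice_id
    else:
        start_pos = row_count * slice_id + remainder
    return start_pos, row_count

# 10 rows over 3 workers -> slices of 4, 3 and 3 rows, no gaps or overlap
bounds = [shard_bounds(10, 3, i) for i in range(3)]
```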
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/wsc_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/wsc_utils.py
deleted file mode 100644
index da6ba74383a2490e1108609f315f44ad4b3bf002..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/wsc_utils.py
+++ /dev/null
@@ -1,241 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import json
-from functools import lru_cache
-
-
-def convert_sentence_to_json(sentence):
- if "_" in sentence:
- prefix, rest = sentence.split("_", 1)
- query, rest = rest.split("_", 1)
- query_index = len(prefix.rstrip().split(" "))
- else:
- query, query_index = None, None
-
- prefix, rest = sentence.split("[", 1)
- pronoun, rest = rest.split("]", 1)
- pronoun_index = len(prefix.rstrip().split(" "))
-
- sentence = sentence.replace("_", "").replace("[", "").replace("]", "")
-
- return {
- "idx": 0,
- "text": sentence,
- "target": {
- "span1_index": query_index,
- "span1_text": query,
- "span2_index": pronoun_index,
- "span2_text": pronoun,
- },
- }
-
-
-def extended_noun_chunks(sentence):
- noun_chunks = {(np.start, np.end) for np in sentence.noun_chunks}
- np_start, cur_np = 0, "NONE"
- for i, token in enumerate(sentence):
- np_type = token.pos_ if token.pos_ in {"NOUN", "PROPN"} else "NONE"
- if np_type != cur_np:
- if cur_np != "NONE":
- noun_chunks.add((np_start, i))
- if np_type != "NONE":
- np_start = i
- cur_np = np_type
- if cur_np != "NONE":
- noun_chunks.add((np_start, len(sentence)))
- return [sentence[s:e] for (s, e) in sorted(noun_chunks)]
-
-
-def find_token(sentence, start_pos):
- found_tok = None
- for tok in sentence:
- if tok.idx == start_pos:
- found_tok = tok
- break
- return found_tok
-
-
-def find_span(sentence, search_text, start=0):
- search_text = search_text.lower()
- for tok in sentence[start:]:
- remainder = sentence[tok.i :].text.lower()
- if remainder.startswith(search_text):
- len_to_consume = len(search_text)
- start_idx = tok.idx
- for next_tok in sentence[tok.i :]:
- end_idx = next_tok.idx + len(next_tok.text)
- if end_idx - start_idx == len_to_consume:
- span = sentence[tok.i : next_tok.i + 1]
- return span
- return None
-
-
-@lru_cache(maxsize=1)
-def get_detokenizer():
- from sacremoses import MosesDetokenizer
-
- detok = MosesDetokenizer(lang="en")
- return detok
-
-
-@lru_cache(maxsize=1)
-def get_spacy_nlp():
- import en_core_web_lg
-
- nlp = en_core_web_lg.load()
- return nlp
-
-
-def jsonl_iterator(input_fname, positive_only=False, ngram_order=3, eval=False):
- detok = get_detokenizer()
- nlp = get_spacy_nlp()
-
- with open(input_fname) as fin:
- for line in fin:
- sample = json.loads(line.strip())
-
- if positive_only and "label" in sample and not sample["label"]:
- # only consider examples where the query is correct
- continue
-
- target = sample["target"]
-
- # clean up the query
- query = target["span1_text"]
- if query is not None:
- if "\n" in query:
- continue
- if query.endswith(".") or query.endswith(","):
- query = query[:-1]
-
- # split tokens
- tokens = sample["text"].split(" ")
-
- def strip_pronoun(x):
- return x.rstrip('.,"')
-
- # find the pronoun
- pronoun_idx = target["span2_index"]
- pronoun = strip_pronoun(target["span2_text"])
- if strip_pronoun(tokens[pronoun_idx]) != pronoun:
- # hack: sometimes the index is misaligned
- if strip_pronoun(tokens[pronoun_idx + 1]) == pronoun:
- pronoun_idx += 1
- else:
- raise Exception("Misaligned pronoun!")
- assert strip_pronoun(tokens[pronoun_idx]) == pronoun
-
- # split tokens before and after the pronoun
- before = tokens[:pronoun_idx]
- after = tokens[pronoun_idx + 1 :]
-
- # the GPT BPE attaches leading spaces to tokens, so we keep track
- # of whether we need spaces before or after the pronoun
- leading_space = " " if pronoun_idx > 0 else ""
- trailing_space = " " if len(after) > 0 else ""
-
- # detokenize
- before = detok.detokenize(before, return_str=True)
- pronoun = detok.detokenize([pronoun], return_str=True)
- after = detok.detokenize(after, return_str=True)
-
- # hack: when the pronoun ends in a period (or comma), move the
- # punctuation to the "after" part
- if pronoun.endswith(".") or pronoun.endswith(","):
- after = pronoun[-1] + trailing_space + after
- pronoun = pronoun[:-1]
-
- # hack: when the "after" part begins with a comma or period, remove
- # the trailing space
- if after.startswith(".") or after.startswith(","):
- trailing_space = ""
-
- # parse sentence with spacy
- sentence = nlp(before + leading_space + pronoun + trailing_space + after)
-
- # find pronoun span
- start = len(before + leading_space)
- first_pronoun_tok = find_token(sentence, start_pos=start)
- pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i)
- assert pronoun_span.text == pronoun
-
- if eval:
- # convert to format where pronoun is surrounded by "[]" and
- # query is surrounded by "_"
- query_span = find_span(sentence, query)
- query_with_ws = "_{}_{}".format(
- query_span.text,
- (" " if query_span.text_with_ws.endswith(" ") else ""),
- )
- pronoun_with_ws = "[{}]{}".format(
- pronoun_span.text,
- (" " if pronoun_span.text_with_ws.endswith(" ") else ""),
- )
- if query_span.start < pronoun_span.start:
- first = (query_span, query_with_ws)
- second = (pronoun_span, pronoun_with_ws)
- else:
- first = (pronoun_span, pronoun_with_ws)
- second = (query_span, query_with_ws)
- sentence = (
- sentence[: first[0].start].text_with_ws
- + first[1]
- + sentence[first[0].end : second[0].start].text_with_ws
- + second[1]
- + sentence[second[0].end :].text
- )
- yield sentence, sample.get("label", None)
- else:
- yield sentence, pronoun_span, query, sample.get("label", None)
-
-
-def winogrande_jsonl_iterator(input_fname, eval=False):
- with open(input_fname) as fin:
- for line in fin:
- sample = json.loads(line.strip())
- sentence, option1, option2 = (
- sample["sentence"],
- sample["option1"],
- sample["option2"],
- )
-
- pronoun_span = (sentence.index("_"), sentence.index("_") + 1)
-
- if eval:
- query, cand = option1, option2
- else:
- query = option1 if sample["answer"] == "1" else option2
- cand = option2 if sample["answer"] == "1" else option1
- yield sentence, pronoun_span, query, cand
-
-
-def filter_noun_chunks(
- chunks, exclude_pronouns=False, exclude_query=None, exact_match=False
-):
- if exclude_pronouns:
- chunks = [
- np
- for np in chunks
- if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np))
- ]
-
- if exclude_query is not None:
- excl_txt = [exclude_query.lower()]
- filtered_chunks = []
- for chunk in chunks:
- lower_chunk = chunk.text.lower()
- found = False
- for excl in excl_txt:
- if (
- not exact_match and (lower_chunk in excl or excl in lower_chunk)
- ) or lower_chunk == excl:
- found = True
- break
- if not found:
- filtered_chunks.append(chunk)
- chunks = filtered_chunks
-
- return chunks
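The span-index recovery in `convert_sentence_to_json` above (query wrapped in underscores, pronoun in square brackets, word index computed as the number of space-separated tokens before the marker) can be sketched standalone; the example sentence is illustrative:

```python
def span_indices(sentence: str):
    # Mirrors convert_sentence_to_json: the query span is wrapped in
    # underscores and the pronoun in square brackets; each word index is the
    # number of space-separated tokens in the text preceding the marker.
    prefix, rest = sentence.split("_", 1)
    query, _ = rest.split("_", 1)
    query_index = len(prefix.rstrip().split(" "))

    prefix, rest = sentence.split("[", 1)
    pronoun, _ = rest.split("]", 1)
    pronoun_index = len(prefix.rstrip().split(" "))
    return query, query_index, pronoun, pronoun_index

# Winograd-style example (illustrative)
result = span_indices(
    "The _city councilmen_ refused the demonstrators a permit "
    "because [they] feared violence."
)
```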
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py
deleted file mode 100644
index c361ff6bd616512fe2521387665de1ad1aff66d0..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import transformer_pg # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
deleted file mode 100644
index e18fb62df52ab85d7802615d8619b0fd94a08f8c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <iostream>
-#include "fstext/fstext-lib.h" // @manual
-#include "util/common-utils.h" // @manual
-
-/*
- * This program modifies an FST that has no self-loops:
- * for each incoming arc with a non-eps input symbol, add a self-loop arc
- * with that non-eps symbol as input and eps as output.
- *
- * This ensures the resulting FST can deduplicate repeated
- * symbols, which are very common in acoustic model output.
- *
- */
-namespace {
-int32 AddSelfLoopsSimple(fst::StdVectorFst* fst) {
- typedef fst::MutableArcIterator<fst::StdVectorFst> IterType;
-
- int32 num_states_before = fst->NumStates();
- fst::MakePrecedingInputSymbolsSame(false, fst);
- int32 num_states_after = fst->NumStates();
- KALDI_LOG << "There are " << num_states_before
- << " states in the original FST; "
- << " after MakePrecedingInputSymbolsSame, there are "
- << num_states_after << " states " << std::endl;
-
- auto weight_one = fst::StdArc::Weight::One();
-
- int32 num_arc_added = 0;
-
- fst::StdArc self_loop_arc;
- self_loop_arc.weight = weight_one;
-
- int32 num_states = fst->NumStates();
- std::vector<std::set<int32>> incoming_non_eps_label_per_state(num_states);
-
- for (int32 state = 0; state < num_states; state++) {
- for (IterType aiter(fst, state); !aiter.Done(); aiter.Next()) {
- fst::StdArc arc(aiter.Value());
- if (arc.ilabel != 0) {
- incoming_non_eps_label_per_state[arc.nextstate].insert(arc.ilabel);
- }
- }
- }
-
- for (int32 state = 0; state < num_states; state++) {
- if (!incoming_non_eps_label_per_state[state].empty()) {
- auto& ilabel_set = incoming_non_eps_label_per_state[state];
- for (auto it = ilabel_set.begin(); it != ilabel_set.end(); it++) {
- self_loop_arc.ilabel = *it;
- self_loop_arc.olabel = 0;
- self_loop_arc.nextstate = state;
- fst->AddArc(state, self_loop_arc);
- num_arc_added++;
- }
- }
- }
- return num_arc_added;
-}
-
-void print_usage() {
- std::cout << "add-self-loop-simple usage:\n"
- "\tadd-self-loop-simple <input-fst> <output-fst>\n";
-}
-} // namespace
-
-int main(int argc, char** argv) {
- if (argc != 3) {
- print_usage();
- exit(1);
- }
-
- auto input = argv[1];
- auto output = argv[2];
-
- auto fst = fst::ReadFstKaldi(input);
- auto num_states = fst->NumStates();
- KALDI_LOG << "Loading FST from " << input << " with " << num_states
- << " states." << std::endl;
-
- int32 num_arc_added = AddSelfLoopsSimple(fst);
- KALDI_LOG << "Adding " << num_arc_added << " self-loop arcs " << std::endl;
-
- fst::WriteFstKaldi(*fst, std::string(output));
- KALDI_LOG << "Writing FST to " << output << std::endl;
-
- delete fst;
-}
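The core of `AddSelfLoopsSimple` above, independent of Kaldi/OpenFst: for every state, collect the non-epsilon input labels of its incoming arcs, then add one self-loop per label with epsilon output, so repeated acoustic-model symbols collapse during composition. A pure-Python sketch over `(src, dst, ilabel, olabel)` arc tuples (label 0 = epsilon; this arc representation is an assumption for illustration):

```python
def add_self_loops(num_states, arcs):
    # Mirrors AddSelfLoopsSimple: first pass records, per state, the set of
    # non-epsilon input labels on incoming arcs; second pass emits one
    # self-loop (ilabel in, epsilon out) per recorded label.
    incoming = [set() for _ in range(num_states)]
    for src, dst, ilabel, olabel in arcs:
        if ilabel != 0:
            incoming[dst].add(ilabel)

    added = []
    for state, labels in enumerate(incoming):
        for ilabel in sorted(labels):
            added.append((state, state, ilabel, 0))
    return added

# States 1 and 2 each have one incoming arc with input label 5,
# so each gets a self-loop consuming 5 and emitting epsilon.
loops = add_self_loops(3, [(0, 1, 5, 5), (1, 2, 5, 0), (0, 2, 0, 0)])
```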
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py
deleted file mode 100644
index 6d2a2a4b6b809ba1106f9a57cb6f241dc083e670..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py
+++ /dev/null
@@ -1,698 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-import hydra
-from hydra.core.config_store import ConfigStore
-import logging
-from omegaconf import MISSING, OmegaConf
-import os
-import os.path as osp
-from pathlib import Path
-import subprocess
-from typing import Optional
-
-from fairseq.data.dictionary import Dictionary
-from fairseq.dataclass import FairseqDataclass
-
-script_dir = Path(__file__).resolve().parent
-config_path = script_dir / "config"
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class KaldiInitializerConfig(FairseqDataclass):
- data_dir: str = MISSING
- fst_dir: Optional[str] = None
- in_labels: str = MISSING
- out_labels: Optional[str] = None
- wav2letter_lexicon: Optional[str] = None
- lm_arpa: str = MISSING
- kaldi_root: str = MISSING
- blank_symbol: str = "<s>"
- silence_symbol: Optional[str] = None
-
-
-def create_units(fst_dir: Path, in_labels: str, vocab: Dictionary) -> Path:
- in_units_file = fst_dir / f"kaldi_dict.{in_labels}.txt"
- if not in_units_file.exists():
-
- logger.info(f"Creating {in_units_file}")
-
- with open(in_units_file, "w") as f:
- print("<eps> 0", file=f)
- i = 1
- for symb in vocab.symbols[vocab.nspecial :]:
- if not symb.startswith("madeupword"):
- print(f"{symb} {i}", file=f)
- i += 1
- return in_units_file
-
-
-def create_lexicon(
- cfg: KaldiInitializerConfig,
- fst_dir: Path,
- unique_label: str,
- in_units_file: Path,
- out_words_file: Path,
-) -> (Path, Path):
-
- disambig_in_units_file = fst_dir / f"kaldi_dict.{cfg.in_labels}_disambig.txt"
- lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}.txt"
- disambig_lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}_disambig.txt"
- if (
- not lexicon_file.exists()
- or not disambig_lexicon_file.exists()
- or not disambig_in_units_file.exists()
- ):
- logger.info(f"Creating {lexicon_file} (in units file: {in_units_file})")
-
- assert cfg.wav2letter_lexicon is not None or cfg.in_labels == cfg.out_labels
-
- if cfg.wav2letter_lexicon is not None:
- lm_words = set()
- with open(out_words_file, "r") as lm_dict_f:
- for line in lm_dict_f:
- lm_words.add(line.split()[0])
-
- num_skipped = 0
- total = 0
- with open(cfg.wav2letter_lexicon, "r") as w2l_lex_f, open(
- lexicon_file, "w"
- ) as out_f:
- for line in w2l_lex_f:
- items = line.rstrip().split("\t")
- assert len(items) == 2, items
- if items[0] in lm_words:
- print(items[0], items[1], file=out_f)
- else:
- num_skipped += 1
- logger.debug(
- f"Skipping word {items[0]} as it was not found in LM"
- )
- total += 1
- if num_skipped > 0:
- logger.warning(
- f"Skipped {num_skipped} out of {total} words as they were not found in LM"
- )
- else:
- with open(in_units_file, "r") as in_f, open(lexicon_file, "w") as out_f:
- for line in in_f:
- symb = line.split()[0]
- if symb != "<eps>" and symb != "<s>" and symb != "</s>":
- print(symb, symb, file=out_f)
-
- lex_disambig_path = (
- Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_lex_disambig.pl"
- )
- res = subprocess.run(
- [lex_disambig_path, lexicon_file, disambig_lexicon_file],
- check=True,
- capture_output=True,
- )
- ndisambig = int(res.stdout)
- disamib_path = Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_disambig.pl"
- res = subprocess.run(
- [disamib_path, "--include-zero", in_units_file, str(ndisambig)],
- check=True,
- capture_output=True,
- )
- with open(disambig_in_units_file, "wb") as f:
- f.write(res.stdout)
-
- return disambig_lexicon_file, disambig_in_units_file
-
-
-def create_G(
- kaldi_root: Path, fst_dir: Path, lm_arpa: Path, arpa_base: str
-) -> (Path, Path):
-
- out_words_file = fst_dir / f"kaldi_dict.{arpa_base}.txt"
- grammar_graph = fst_dir / f"G_{arpa_base}.fst"
- if not grammar_graph.exists() or not out_words_file.exists():
- logger.info(f"Creating {grammar_graph}")
- arpa2fst = kaldi_root / "src/lmbin/arpa2fst"
- subprocess.run(
- [
- arpa2fst,
- "--disambig-symbol=#0",
- f"--write-symbol-table={out_words_file}",
- lm_arpa,
- grammar_graph,
- ],
- check=True,
- )
- return grammar_graph, out_words_file
-
-
-def create_L(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- lexicon_file: Path,
- in_units_file: Path,
- out_words_file: Path,
-) -> Path:
- lexicon_graph = fst_dir / f"L.{unique_label}.fst"
-
- if not lexicon_graph.exists():
- logger.info(f"Creating {lexicon_graph} (in units: {in_units_file})")
- make_lex = kaldi_root / "egs/wsj/s5/utils/make_lexicon_fst.pl"
- fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile"
- fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- def write_disambig_symbol(file):
- with open(file, "r") as f:
- for line in f:
- items = line.rstrip().split()
- if items[0] == "#0":
- out_path = str(file) + "_disamig"
- with open(out_path, "w") as out_f:
- print(items[1], file=out_f)
- return out_path
-
- return None
-
- in_disambig_sym = write_disambig_symbol(in_units_file)
- assert in_disambig_sym is not None
- out_disambig_sym = write_disambig_symbol(out_words_file)
- assert out_disambig_sym is not None
-
- try:
- with open(lexicon_graph, "wb") as out_f:
- res = subprocess.run(
- [make_lex, lexicon_file], capture_output=True, check=True
- )
- assert len(res.stderr) == 0, res.stderr.decode("utf-8")
- res = subprocess.run(
- [
- fstcompile,
- f"--isymbols={in_units_file}",
- f"--osymbols={out_words_file}",
- "--keep_isymbols=false",
- "--keep_osymbols=false",
- ],
- input=res.stdout,
- capture_output=True,
- )
- assert len(res.stderr) == 0, res.stderr.decode("utf-8")
- res = subprocess.run(
- [fstaddselfloops, in_disambig_sym, out_disambig_sym],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=olabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(lexicon_graph)
- raise
- except AssertionError:
- os.remove(lexicon_graph)
- raise
-
- return lexicon_graph
-
-
-def create_LG(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- lexicon_graph: Path,
- grammar_graph: Path,
-) -> Path:
- lg_graph = fst_dir / f"LG.{unique_label}.fst"
-
- if not lg_graph.exists():
- logger.info(f"Creating {lg_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
- fstpushspecial = kaldi_root / "src/fstbin/fstpushspecial"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- try:
- with open(lg_graph, "wb") as out_f:
- res = subprocess.run(
- [fsttablecompose, lexicon_graph, grammar_graph],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [
- fstdeterminizestar,
- "--use-log=true",
- ],
- input=res.stdout,
- capture_output=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstpushspecial],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=ilabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(lg_graph)
- raise
-
- return lg_graph
-
-
-def create_H(
- kaldi_root: Path,
- fst_dir: Path,
- disambig_out_units_file: Path,
- in_labels: str,
- vocab: Dictionary,
- blk_sym: str,
- silence_symbol: Optional[str],
-) -> (Path, Path, Path):
- h_graph = (
- fst_dir / f"H.{in_labels}{'_' + silence_symbol if silence_symbol else ''}.fst"
- )
- h_out_units_file = fst_dir / f"kaldi_dict.h_out.{in_labels}.txt"
- disambig_in_units_file_int = Path(str(h_graph) + "isym_disambig.int")
- disambig_out_units_file_int = Path(str(disambig_out_units_file) + ".int")
- if (
- not h_graph.exists()
- or not h_out_units_file.exists()
- or not disambig_in_units_file_int.exists()
- ):
- logger.info(f"Creating {h_graph}")
- eps_sym = "<eps>"  # epsilon; first entry (id 0) in Kaldi symbol tables
-
- num_disambig = 0
- osymbols = []
-
- with open(disambig_out_units_file, "r") as f, open(
- disambig_out_units_file_int, "w"
- ) as out_f:
- for line in f:
- symb, id = line.rstrip().split()
- if line.startswith("#"):
- num_disambig += 1
- print(id, file=out_f)
- else:
- if len(osymbols) == 0:
- assert symb == eps_sym, symb
- osymbols.append((symb, id))
-
- i_idx = 0
- isymbols = [(eps_sym, 0)]
-
- imap = {}
-
- for i, s in enumerate(vocab.symbols):
- i_idx += 1
- isymbols.append((s, i_idx))
- imap[s] = i_idx
-
- fst_str = []
-
- node_idx = 0
- root_node = node_idx
-
- special_symbols = [blk_sym]
- if silence_symbol is not None:
- special_symbols.append(silence_symbol)
-
- for ss in special_symbols:
- fst_str.append("{} {} {} {}".format(root_node, root_node, ss, eps_sym))
-
- for symbol, _ in osymbols:
- if symbol == eps_sym or symbol.startswith("#"):
- continue
-
- node_idx += 1
- # 1. from root to emitting state
- fst_str.append("{} {} {} {}".format(root_node, node_idx, symbol, symbol))
- # 2. from emitting state back to root
- fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym))
- # 3. from emitting state to optional blank state
- pre_node = node_idx
- node_idx += 1
- for ss in special_symbols:
- fst_str.append("{} {} {} {}".format(pre_node, node_idx, ss, eps_sym))
- # 4. from blank state back to root
- fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym))
-
- fst_str.append("{}".format(root_node))
-
- fst_str = "\n".join(fst_str)
- h_str = str(h_graph)
- isym_file = h_str + ".isym"
-
- with open(isym_file, "w") as f:
- for sym, id in isymbols:
- f.write("{} {}\n".format(sym, id))
-
- with open(h_out_units_file, "w") as f:
- for sym, id in osymbols:
- f.write("{} {}\n".format(sym, id))
-
- with open(disambig_in_units_file_int, "w") as f:
- disam_sym_id = len(isymbols)
- for _ in range(num_disambig):
- f.write("{}\n".format(disam_sym_id))
- disam_sym_id += 1
-
- fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile"
- fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- try:
- with open(h_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fstcompile,
- f"--isymbols={isym_file}",
- f"--osymbols={h_out_units_file}",
- "--keep_isymbols=false",
- "--keep_osymbols=false",
- ],
- input=str.encode(fst_str),
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [
- fstaddselfloops,
- disambig_in_units_file_int,
- disambig_out_units_file_int,
- ],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=olabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(h_graph)
- raise
- return h_graph, h_out_units_file, disambig_in_units_file_int
-
-
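The `create_H` function above emits a text-format FST with a CTC-style topology: blank (and optional silence) self-loops at the root, one emitting state per output unit, an epsilon arc back to the root, and an optional post-emission blank state. A simplified sketch of that arc-generation loop (single blank symbol only, no silence or disambiguation handling; symbol names here are illustrative):

```python
def ctc_h_arcs(symbols, blank="<blk>", eps="<eps>"):
    """Text-format FST arcs mirroring the loop in create_H above:
    blank self-loop at the root, an emit arc per symbol, an eps return
    arc, and an optional post-emission blank state per symbol."""
    arcs = [f"0 0 {blank} {eps}"]  # blank self-loop at the root
    node = 0
    for sym in symbols:
        node += 1
        arcs.append(f"0 {node} {sym} {sym}")        # 1. root -> emitting state
        arcs.append(f"{node} 0 {eps} {eps}")        # 2. emitting state -> root
        pre, node = node, node + 1
        arcs.append(f"{pre} {node} {blank} {eps}")  # 3. emitting -> optional blank
        arcs.append(f"{node} 0 {eps} {eps}")        # 4. blank state -> root
    arcs.append("0")  # root is the (only) final state
    return arcs

arcs = ctc_h_arcs(["a", "b"])
print(arcs[1])  # 0 1 a a
```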
-def create_HLGa(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- h_graph: Path,
- lg_graph: Path,
- disambig_in_words_file_int: Path,
-) -> Path:
- hlga_graph = fst_dir / f"HLGa.{unique_label}.fst"
-
- if not hlga_graph.exists():
- logger.info(f"Creating {hlga_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols"
- fstrmepslocal = kaldi_root / "src/fstbin/fstrmepslocal"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
-
- try:
- with open(hlga_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fsttablecompose,
- h_graph,
- lg_graph,
- ],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstdeterminizestar, "--use-log=true"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmsymbols, disambig_in_words_file_int],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmepslocal],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(hlga_graph)
- raise
-
- return hlga_graph
-
-
-def create_HLa(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- h_graph: Path,
- l_graph: Path,
- disambig_in_words_file_int: Path,
-) -> Path:
- hla_graph = fst_dir / f"HLa.{unique_label}.fst"
-
- if not hla_graph.exists():
- logger.info(f"Creating {hla_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols"
- fstrmepslocal = kaldi_root / "src/fstbin/fstrmepslocal"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
-
- try:
- with open(hla_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fsttablecompose,
- h_graph,
- l_graph,
- ],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstdeterminizestar, "--use-log=true"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmsymbols, disambig_in_words_file_int],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmepslocal],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(hla_graph)
- raise
-
- return hla_graph
-
-
-def create_HLG(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- hlga_graph: Path,
- prefix: str = "HLG",
-) -> Path:
- hlg_graph = fst_dir / f"{prefix}.{unique_label}.fst"
-
- if not hlg_graph.exists():
- logger.info(f"Creating {hlg_graph}")
-
- add_self_loop = script_dir / "add-self-loop-simple"
- kaldi_src = kaldi_root / "src"
- kaldi_lib = kaldi_src / "lib"
-
- try:
- if not add_self_loop.exists():
- fst_include = kaldi_root / "tools/openfst-1.6.7/include"
- add_self_loop_src = script_dir / "add-self-loop-simple.cc"
-
- subprocess.run(
- [
- "c++",
- f"-I{kaldi_src}",
- f"-I{fst_include}",
- f"-L{kaldi_lib}",
- add_self_loop_src,
- "-lkaldi-base",
- "-lkaldi-fstext",
- "-o",
- add_self_loop,
- ],
- check=True,
- )
-
- my_env = os.environ.copy()
- my_env["LD_LIBRARY_PATH"] = f"{kaldi_lib}:{my_env.get('LD_LIBRARY_PATH', '')}"
-
- subprocess.run(
- [
- add_self_loop,
- hlga_graph,
- hlg_graph,
- ],
- check=True,
- capture_output=True,
- env=my_env,
- )
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- raise
-
- return hlg_graph
-
-
-def initalize_kaldi(cfg: KaldiInitializerConfig) -> Path:
- if cfg.fst_dir is None:
- cfg.fst_dir = osp.join(cfg.data_dir, "kaldi")
- if cfg.out_labels is None:
- cfg.out_labels = cfg.in_labels
-
- kaldi_root = Path(cfg.kaldi_root)
- data_dir = Path(cfg.data_dir)
- fst_dir = Path(cfg.fst_dir)
- fst_dir.mkdir(parents=True, exist_ok=True)
-
- arpa_base = osp.splitext(osp.basename(cfg.lm_arpa))[0]
- unique_label = f"{cfg.in_labels}.{arpa_base}"
-
- with open(data_dir / f"dict.{cfg.in_labels}.txt", "r") as f:
- vocab = Dictionary.load(f)
-
- in_units_file = create_units(fst_dir, cfg.in_labels, vocab)
-
- grammar_graph, out_words_file = create_G(
- kaldi_root, fst_dir, Path(cfg.lm_arpa), arpa_base
- )
-
- disambig_lexicon_file, disambig_L_in_units_file = create_lexicon(
- cfg, fst_dir, unique_label, in_units_file, out_words_file
- )
-
- h_graph, h_out_units_file, disambig_in_units_file_int = create_H(
- kaldi_root,
- fst_dir,
- disambig_L_in_units_file,
- cfg.in_labels,
- vocab,
- cfg.blank_symbol,
- cfg.silence_symbol,
- )
- lexicon_graph = create_L(
- kaldi_root,
- fst_dir,
- unique_label,
- disambig_lexicon_file,
- disambig_L_in_units_file,
- out_words_file,
- )
- lg_graph = create_LG(
- kaldi_root, fst_dir, unique_label, lexicon_graph, grammar_graph
- )
- hlga_graph = create_HLGa(
- kaldi_root, fst_dir, unique_label, h_graph, lg_graph, disambig_in_units_file_int
- )
- hlg_graph = create_HLG(kaldi_root, fst_dir, unique_label, hlga_graph)
-
- # for debugging
- # hla_graph = create_HLa(kaldi_root, fst_dir, unique_label, h_graph, lexicon_graph, disambig_in_units_file_int)
- # hl_graph = create_HLG(kaldi_root, fst_dir, unique_label, hla_graph, prefix="HL_looped")
- # create_HLG(kaldi_root, fst_dir, "phnc", h_graph, prefix="H_looped")
-
- return hlg_graph
-
-
-@hydra.main(config_path=config_path, config_name="kaldi_initializer")
-def cli_main(cfg: KaldiInitializerConfig) -> None:
- container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True)
- cfg = OmegaConf.create(container)
- OmegaConf.set_struct(cfg, True)
- initalize_kaldi(cfg)
-
-
-if __name__ == "__main__":
-
- logging.root.setLevel(logging.INFO)
- logging.basicConfig(level=logging.INFO)
-
- try:
- from hydra._internal.utils import (
- get_args,
- ) # pylint: disable=import-outside-toplevel
-
- cfg_name = get_args().config_name or "kaldi_initializer"
- except ImportError:
- logger.warning("Failed to get config name from hydra args")
- cfg_name = "kaldi_initializer"
-
- cs = ConfigStore.instance()
- cs.store(name=cfg_name, node=KaldiInitializerConfig)
-
- cli_main()
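The graph-building routines above (`create_LG`, `create_H`, `create_HLGa`, ...) all follow one pattern: chain Kaldi/OpenFst binaries by feeding each `subprocess.run` call's stdout into the next call's stdin. A minimal sketch of that piping pattern, with `tr` standing in for the FST binaries purely for illustration:

```python
import subprocess

def run_pipeline(stages, data=b""):
    # Feed each stage's stdout into the next stage's stdin, as create_LG does
    # with fsttablecompose | fstdeterminizestar | fstminimizeencoded | ...
    # check=True makes a failing stage raise CalledProcessError immediately,
    # which the callers above catch to log stderr and delete the partial graph.
    for cmd in stages:
        data = subprocess.run(cmd, input=data, capture_output=True, check=True).stdout
    return data

out = run_pipeline([["tr", "a-z", "A-Z"], ["tr", "L", "X"]], b"hello\n")
print(out)  # b'HEXXO\n'
```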
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/bytes.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/bytes.py
deleted file mode 100644
index f88f8f6929f5b6bdb0db470be9ebedf8fe1f752d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/bytes.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from fairseq.data.encoders import register_bpe
-from fairseq.data.encoders.byte_utils import (
- SPACE,
- SPACE_ESCAPE,
- byte_encode,
- smart_byte_decode,
-)
-
-
-@register_bpe("bytes")
-class Bytes(object):
- def __init__(self, *unused):
- pass
-
- @staticmethod
- def add_args(parser):
- pass
-
- @staticmethod
- def encode(x: str) -> str:
- encoded = byte_encode(x)
- escaped = encoded.replace(SPACE, SPACE_ESCAPE)
- return SPACE.join(list(escaped))
-
- @staticmethod
- def decode(x: str) -> str:
- unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
- return smart_byte_decode(unescaped)
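The `Bytes` BPE above escapes spaces, splits the byte-encoded string into space-separated single characters, and inverts both steps on decode. A self-contained sketch of that round trip (the `byte_encode`/`smart_byte_decode` bodies here are assumptions standing in for fairseq's `byte_utils`, whose exact mapping may differ):

```python
SPACE = chr(32)
SPACE_ESCAPE = chr(9601)  # '▁', the usual space placeholder in subword vocabularies

def byte_encode(x: str) -> str:
    # One character per UTF-8 byte (illustrative stand-in for byte_utils.byte_encode).
    return "".join(chr(b) for b in x.encode("utf-8"))

def smart_byte_decode(x: str) -> str:
    # Invert byte_encode, silently dropping invalid byte sequences.
    return bytes(ord(c) for c in x).decode("utf-8", errors="ignore")

def encode(x: str) -> str:
    escaped = byte_encode(x).replace(SPACE, SPACE_ESCAPE)
    return SPACE.join(list(escaped))  # one token per byte

def decode(x: str) -> str:
    unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
    return smart_byte_decode(unescaped)

print(encode("ab c"))  # a b ▁ c
```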
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/hubert_pretraining.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/hubert_pretraining.py
deleted file mode 100644
index f756080dd17b380d004420c045a8744411c0e93d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/hubert_pretraining.py
+++ /dev/null
@@ -1,195 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-import os
-import sys
-from typing import Dict, List, Optional, Tuple
-
-import numpy as np
-
-from dataclasses import dataclass, field
-from fairseq.data import Dictionary, HubertDataset
-from fairseq.dataclass.configs import FairseqDataclass
-from fairseq.tasks import register_task
-from fairseq.tasks.fairseq_task import FairseqTask
-from omegaconf import MISSING
-
-logger = logging.getLogger(__name__)
-
-
-class LabelEncoder(object):
- def __init__(self, dictionary: Dictionary) -> None:
- self.dictionary = dictionary
-
- def __call__(self, label: str) -> List[str]:
- return self.dictionary.encode_line(
- label, append_eos=False, add_if_not_exist=False,
- )
-
-
-@dataclass
-class HubertPretrainingConfig(FairseqDataclass):
- data: str = field(
- default=MISSING, metadata={"help": "path to data directory"}
- )
- fine_tuning: bool = field(
- default=False, metadata={"help": "set to true if fine-tuning Hubert"}
- )
- labels: List[str] = field(
- default_factory=lambda: ["ltr"],
- metadata={
- "help": (
- "extension of the label files to load, frame-level labels for"
- " pre-training, and sequence-level label for fine-tuning"
- )
- },
- )
- label_dir: Optional[str] = field(
- default=None,
- metadata={
- "help": "if set, looks for labels in this directory instead",
- },
- )
- label_rate: int = field(
- default=-1,
- metadata={"help": "label frame rate. -1 for sequence label"},
- )
- sample_rate: int = field(
- default=16_000,
- metadata={
- "help": "target sample rate. audio files will be up/down "
- "sampled to this rate"
- },
- )
- normalize: bool = field(
- default=False,
- metadata={
- "help": "if set, normalizes input to have 0 mean and unit variance"
- },
- )
- enable_padding: bool = field(
- default=False,
- metadata={"help": "pad shorter samples instead of cropping"},
- )
- max_keep_size: Optional[int] = field(
- default=None,
- metadata={"help": "exclude sample longer than this"},
- )
- max_sample_size: Optional[int] = field(
- default=None,
- metadata={"help": "max sample size to crop to for batching"},
- )
- min_sample_size: Optional[int] = field(
- default=None,
- metadata={"help": "min sample size to crop to for batching"},
- )
- single_target: Optional[bool] = field(
- default=False,
- metadata={
- "help": "if set, AddTargetDatasets outputs same keys "
- "as AddTargetDataset"
- },
- )
- random_crop: Optional[bool] = field(
- default=True,
- metadata={"help": "always crop from the beginning if false"},
- )
- pad_audio: Optional[bool] = field(
- default=False,
- metadata={"help": "pad audio to the longest one in the batch if true"},
- )
-
-
-@register_task("hubert_pretraining", dataclass=HubertPretrainingConfig)
-class HubertPretrainingTask(FairseqTask):
-
- cfg: HubertPretrainingConfig
-
- def __init__(
- self,
- cfg: HubertPretrainingConfig,
- ) -> None:
- super().__init__(cfg)
-
- logger.info(f"current directory is {os.getcwd()}")
- logger.info(f"HubertPretrainingTask Config {cfg}")
-
- self.cfg = cfg
- self.fine_tuning = cfg.fine_tuning
-
- if cfg.fine_tuning:
- self.state.add_factory("target_dictionary", self.load_dictionaries)
- else:
- self.state.add_factory("dictionaries", self.load_dictionaries)
-
- self.blank_symbol = "<s>"
-
- @property
- def source_dictionary(self) -> Optional[Dictionary]:
- return None
-
- @property
- def target_dictionary(self) -> Optional[Dictionary]:
- return self.state.target_dictionary
-
- @property
- def dictionaries(self) -> List[Dictionary]:
- return self.state.dictionaries
-
- @classmethod
- def setup_task(
- cls, cfg: HubertPretrainingConfig, **kwargs
- ) -> "HubertPretrainingTask":
- return cls(cfg)
-
- def load_dictionaries(self):
- label_dir = self.cfg.data if self.cfg.label_dir is None else self.cfg.label_dir
- dictionaries = [Dictionary.load(f"{label_dir}/dict.{label}.txt") for label in self.cfg.labels]
- return dictionaries[0] if self.cfg.fine_tuning else dictionaries
-
- def get_label_dir(self) -> str:
- if self.cfg.label_dir is None:
- return self.cfg.data
- return self.cfg.label_dir
-
- def load_dataset(self, split: str, **kwargs) -> None:
- manifest = f"{self.cfg.data}/{split}.tsv"
- dicts = [self.target_dictionary] if self.cfg.fine_tuning else self.dictionaries
- pad_list = [dict.pad() for dict in dicts]
- eos_list = [dict.eos() for dict in dicts]
- procs = [LabelEncoder(dict) for dict in dicts]
- paths = [
- f"{self.get_label_dir()}/{split}.{l}" for l in self.cfg.labels
- ]
-
- # hubert v1: pad_audio=True, random_crop=False;
- self.datasets[split] = HubertDataset(
- manifest,
- sample_rate=self.cfg.sample_rate,
- label_paths=paths,
- label_rates=self.cfg.label_rate,
- pad_list=pad_list,
- eos_list=eos_list,
- label_processors=procs,
- max_keep_sample_size=self.cfg.max_keep_size,
- min_keep_sample_size=self.cfg.min_sample_size,
- max_sample_size=self.cfg.max_sample_size,
- pad_audio=self.cfg.pad_audio,
- normalize=self.cfg.normalize,
- store_labels=False,
- random_crop=self.cfg.random_crop,
- single_target=self.cfg.single_target,
- )
-
- def max_positions(self) -> Tuple[int, int]:
- return (sys.maxsize, sys.maxsize)
-
- def filter_indices_by_size(
- self, indices: np.array, *args, **kwargs
- ) -> np.array:
- return indices
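`HubertPretrainingConfig` above relies on dataclass `field(default_factory=...)` for its mutable `labels` default, and `get_label_dir`/`load_dataset` then derive one label path per extension. A condensed sketch of that pattern (`MiniConfig` and the `/data` path are illustrative, not from the source):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MiniConfig:
    # Mutable defaults need default_factory, as `labels` does above.
    labels: List[str] = field(default_factory=lambda: ["ltr"])
    label_rate: int = -1          # -1 means sequence-level labels
    sample_rate: int = 16_000
    label_dir: Optional[str] = None

cfg = MiniConfig()
# get_label_dir: fall back to the data directory when label_dir is unset.
label_dir = cfg.label_dir if cfg.label_dir is not None else "/data"
paths = [f"{label_dir}/train.{l}" for l in cfg.labels]
print(paths)  # ['/data/train.ltr']
```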
diff --git a/spaces/OFA-Sys/small-stable-diffusion-v0/README.md b/spaces/OFA-Sys/small-stable-diffusion-v0/README.md
deleted file mode 100644
index 3592a944fafdd018b4c40fc4a6f27144b3bb0436..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/small-stable-diffusion-v0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Small Stable Diffusion V0
-emoji: 💻
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/small-stable-diffusion-v0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ORI-Muchim/RaidenTTS/README.md b/spaces/ORI-Muchim/RaidenTTS/README.md
deleted file mode 100644
index 28d249a83cc87b2a5250cc52966ac8caec739f73..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/RaidenTTS/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: RaidenTTS
-emoji: ⚡
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/mobilenet.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/mobilenet.py
deleted file mode 100644
index f501266e56ee71cdf455744020f8fc1a58ec9fff..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/mobilenet.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""
-This MobileNetV2 implementation is modified from the following repository:
-https://github.com/tonylins/pytorch-mobilenet-v2
-"""
-
-import torch.nn as nn
-import math
-from .utils import load_url
-from .segm_lib.nn import SynchronizedBatchNorm2d
-
-BatchNorm2d = SynchronizedBatchNorm2d
-
-
-__all__ = ['mobilenetv2']
-
-
-model_urls = {
- 'mobilenetv2': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/mobilenet_v2.pth.tar',
-}
-
-
-def conv_bn(inp, oup, stride):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- BatchNorm2d(oup),
- nn.ReLU6(inplace=True)
- )
-
-
-def conv_1x1_bn(inp, oup):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
- BatchNorm2d(oup),
- nn.ReLU6(inplace=True)
- )
-
-
-class InvertedResidual(nn.Module):
- def __init__(self, inp, oup, stride, expand_ratio):
- super(InvertedResidual, self).__init__()
- self.stride = stride
- assert stride in [1, 2]
-
- hidden_dim = round(inp * expand_ratio)
- self.use_res_connect = self.stride == 1 and inp == oup
-
- if expand_ratio == 1:
- self.conv = nn.Sequential(
- # dw
- nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
- BatchNorm2d(hidden_dim),
- nn.ReLU6(inplace=True),
- # pw-linear
- nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
- BatchNorm2d(oup),
- )
- else:
- self.conv = nn.Sequential(
- # pw
- nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
- BatchNorm2d(hidden_dim),
- nn.ReLU6(inplace=True),
- # dw
- nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
- BatchNorm2d(hidden_dim),
- nn.ReLU6(inplace=True),
- # pw-linear
- nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
- BatchNorm2d(oup),
- )
-
- def forward(self, x):
- if self.use_res_connect:
- return x + self.conv(x)
- else:
- return self.conv(x)
-
-
-class MobileNetV2(nn.Module):
- def __init__(self, n_class=1000, input_size=224, width_mult=1.):
- super(MobileNetV2, self).__init__()
- block = InvertedResidual
- input_channel = 32
- last_channel = 1280
- interverted_residual_setting = [
- # t, c, n, s
- [1, 16, 1, 1],
- [6, 24, 2, 2],
- [6, 32, 3, 2],
- [6, 64, 4, 2],
- [6, 96, 3, 1],
- [6, 160, 3, 2],
- [6, 320, 1, 1],
- ]
-
- # building first layer
- assert input_size % 32 == 0
- input_channel = int(input_channel * width_mult)
- self.last_channel = int(last_channel * width_mult) if width_mult > 1.0 else last_channel
- self.features = [conv_bn(3, input_channel, 2)]
- # building inverted residual blocks
- for t, c, n, s in interverted_residual_setting:
- output_channel = int(c * width_mult)
- for i in range(n):
- if i == 0:
- self.features.append(block(input_channel, output_channel, s, expand_ratio=t))
- else:
- self.features.append(block(input_channel, output_channel, 1, expand_ratio=t))
- input_channel = output_channel
- # building last several layers
- self.features.append(conv_1x1_bn(input_channel, self.last_channel))
- # make it nn.Sequential
- self.features = nn.Sequential(*self.features)
-
- # building classifier
- self.classifier = nn.Sequential(
- nn.Dropout(0.2),
- nn.Linear(self.last_channel, n_class),
- )
-
- self._initialize_weights()
-
- def forward(self, x):
- x = self.features(x)
- x = x.mean(3).mean(2)
- x = self.classifier(x)
- return x
-
- def _initialize_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- if m.bias is not None:
- m.bias.data.zero_()
- elif isinstance(m, BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
- elif isinstance(m, nn.Linear):
- n = m.weight.size(1)
- m.weight.data.normal_(0, 0.01)
- m.bias.data.zero_()
-
-
-def mobilenetv2(pretrained=False, **kwargs):
- """Constructs a MobileNet_V2 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = MobileNetV2(n_class=1000, **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['mobilenetv2']), strict=False)
- return model
\ No newline at end of file
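The MobileNetV2 constructor above expands its `(t, c, n, s)` setting table into a flat list of inverted-residual blocks, downsampling only in the first block of each stage. A small sketch that computes the resulting block plan without building any modules:

```python
def mobilenetv2_plan(width_mult=1.0):
    # (t, c, n, s) rows from the table above:
    # expansion ratio, output channels, repeat count, first-block stride.
    setting = [
        [1, 16, 1, 1], [6, 24, 2, 2], [6, 32, 3, 2], [6, 64, 4, 2],
        [6, 96, 3, 1], [6, 160, 3, 2], [6, 320, 1, 1],
    ]
    input_channel = int(32 * width_mult)
    blocks = []
    for t, c, n, s in setting:
        output_channel = int(c * width_mult)
        for i in range(n):
            stride = s if i == 0 else 1  # only the first block in a stage downsamples
            blocks.append((input_channel, output_channel, stride, t))
            input_channel = output_channel
    last_channel = int(1280 * width_mult) if width_mult > 1.0 else 1280
    return blocks, last_channel

blocks, last = mobilenetv2_plan()
print(len(blocks), last)  # 17 1280
```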
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/scene.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/scene.py
deleted file mode 100644
index 5b35e6c64dc0e0cd7a0168286cbd868c5936573d..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/scene.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import bpy
-from .materials import plane_mat # noqa
-
-
-def setup_renderer(denoising=True, oldrender=True, accelerator="gpu", device=[0]):
- bpy.context.scene.render.engine = "CYCLES"
- bpy.data.scenes[0].render.engine = "CYCLES"
- if accelerator.lower() == "gpu":
- bpy.context.preferences.addons[
- "cycles"
- ].preferences.compute_device_type = "CUDA"
- bpy.context.scene.cycles.device = "GPU"
- i = 0
- bpy.context.preferences.addons["cycles"].preferences.get_devices()
- for d in bpy.context.preferences.addons["cycles"].preferences.devices:
- if i in device: # gpu id
- d["use"] = 1
- print(d["name"], "".join(str(i) for i in device))
- else:
- d["use"] = 0
- i += 1
-
- if denoising:
- bpy.context.scene.cycles.use_denoising = True
-
- bpy.context.scene.render.tile_x = 256
- bpy.context.scene.render.tile_y = 256
- bpy.context.scene.cycles.samples = 64
- # bpy.context.scene.cycles.denoiser = 'OPTIX'
-
- if not oldrender:
- bpy.context.scene.view_settings.view_transform = "Standard"
- bpy.context.scene.render.film_transparent = True
- bpy.context.scene.display_settings.display_device = "sRGB"
- bpy.context.scene.view_settings.gamma = 1.2
- bpy.context.scene.view_settings.exposure = -0.75
-
-
-# Setup scene
-def setup_scene(
- res="high", denoising=True, oldrender=True, accelerator="gpu", device=[0]
-):
- scene = bpy.data.scenes["Scene"]
- assert res in ["ultra", "high", "med", "low"]
- if res == "high":
- scene.render.resolution_x = 1280
- scene.render.resolution_y = 1024
- elif res == "med":
- scene.render.resolution_x = 1280 // 2
- scene.render.resolution_y = 1024 // 2
- elif res == "low":
- scene.render.resolution_x = 1280 // 4
- scene.render.resolution_y = 1024 // 4
- elif res == "ultra":
- scene.render.resolution_x = 1280 * 2
- scene.render.resolution_y = 1024 * 2
-
- scene.render.film_transparent = True
- world = bpy.data.worlds["World"]
- world.use_nodes = True
- bg = world.node_tree.nodes["Background"]
- bg.inputs[0].default_value[:3] = (1.0, 1.0, 1.0)
- bg.inputs[1].default_value = 1.0
-
- # Remove default cube
- if "Cube" in bpy.data.objects:
- bpy.data.objects["Cube"].select_set(True)
- bpy.ops.object.delete()
-
- bpy.ops.object.light_add(
- type="SUN", align="WORLD", location=(0, 0, 0), scale=(1, 1, 1)
- )
- bpy.data.objects["Sun"].data.energy = 1.5
-
- # rotate camera
- bpy.ops.object.empty_add(
- type="PLAIN_AXES", align="WORLD", location=(0, 0, 0), scale=(1, 1, 1)
- )
- bpy.ops.transform.resize(
- value=(10, 10, 10),
- orient_type="GLOBAL",
- orient_matrix=((1, 0, 0), (0, 1, 0), (0, 0, 1)),
- orient_matrix_type="GLOBAL",
- mirror=True,
- use_proportional_edit=False,
- proportional_edit_falloff="SMOOTH",
- proportional_size=1,
- use_proportional_connected=False,
- use_proportional_projected=False,
- )
- bpy.ops.object.select_all(action="DESELECT")
-
- setup_renderer(
- denoising=denoising, oldrender=oldrender, accelerator=accelerator, device=device
- )
- return scene
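The resolution branches in `setup_scene` above all scale the same 1280x1024 base by a per-quality factor; that mapping can be sketched compactly (a refactor for illustration, not the module's API):

```python
def resolution_for(res: str):
    # Mirrors setup_scene's branches: base 1280x1024 scaled per quality level.
    assert res in ["ultra", "high", "med", "low"]
    scale = {"ultra": 2, "high": 1, "med": 0.5, "low": 0.25}[res]
    return int(1280 * scale), int(1024 * scale)

print(resolution_for("med"))  # (640, 512)
```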
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/focal_loss.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/focal_loss.py
deleted file mode 100644
index 763bc93bd2575c49ca8ccf20996bbd92d1e0d1a4..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/focal_loss.py
+++ /dev/null
@@ -1,212 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward',
- 'softmax_focal_loss_forward', 'softmax_focal_loss_backward'
-])
-
-
-class SigmoidFocalLossFunction(Function):
-
- @staticmethod
- def symbolic(g, input, target, gamma, alpha, weight, reduction):
- return g.op(
- 'mmcv::MMCVSigmoidFocalLoss',
- input,
- target,
- gamma_f=gamma,
- alpha_f=alpha,
- weight_f=weight,
- reduction_s=reduction)
-
- @staticmethod
- def forward(ctx,
- input,
- target,
- gamma=2.0,
- alpha=0.25,
- weight=None,
- reduction='mean'):
-
- assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor))
- assert input.dim() == 2
- assert target.dim() == 1
- assert input.size(0) == target.size(0)
- if weight is None:
- weight = input.new_empty(0)
- else:
- assert weight.dim() == 1
- assert input.size(1) == weight.size(0)
- ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2}
- assert reduction in ctx.reduction_dict.keys()
-
- ctx.gamma = float(gamma)
- ctx.alpha = float(alpha)
- ctx.reduction = ctx.reduction_dict[reduction]
-
- output = input.new_zeros(input.size())
-
- ext_module.sigmoid_focal_loss_forward(
- input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha)
- if ctx.reduction == ctx.reduction_dict['mean']:
- output = output.sum() / input.size(0)
- elif ctx.reduction == ctx.reduction_dict['sum']:
- output = output.sum()
- ctx.save_for_backward(input, target, weight)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, target, weight = ctx.saved_tensors
-
- grad_input = input.new_zeros(input.size())
-
- ext_module.sigmoid_focal_loss_backward(
- input,
- target,
- weight,
- grad_input,
- gamma=ctx.gamma,
- alpha=ctx.alpha)
-
- grad_input *= grad_output
- if ctx.reduction == ctx.reduction_dict['mean']:
- grad_input /= input.size(0)
- return grad_input, None, None, None, None, None
-
-
-sigmoid_focal_loss = SigmoidFocalLossFunction.apply
-
-
-class SigmoidFocalLoss(nn.Module):
-
- def __init__(self, gamma, alpha, weight=None, reduction='mean'):
- super(SigmoidFocalLoss, self).__init__()
- self.gamma = gamma
- self.alpha = alpha
- self.register_buffer('weight', weight)
- self.reduction = reduction
-
- def forward(self, input, target):
- return sigmoid_focal_loss(input, target, self.gamma, self.alpha,
- self.weight, self.reduction)
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(gamma={self.gamma}, '
- s += f'alpha={self.alpha}, '
- s += f'reduction={self.reduction})'
- return s
-
-
-class SoftmaxFocalLossFunction(Function):
-
- @staticmethod
- def symbolic(g, input, target, gamma, alpha, weight, reduction):
- return g.op(
- 'mmcv::MMCVSoftmaxFocalLoss',
- input,
- target,
- gamma_f=gamma,
- alpha_f=alpha,
- weight_f=weight,
- reduction_s=reduction)
-
- @staticmethod
- def forward(ctx,
- input,
- target,
- gamma=2.0,
- alpha=0.25,
- weight=None,
- reduction='mean'):
-
- assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor))
- assert input.dim() == 2
- assert target.dim() == 1
- assert input.size(0) == target.size(0)
- if weight is None:
- weight = input.new_empty(0)
- else:
- assert weight.dim() == 1
- assert input.size(1) == weight.size(0)
- ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2}
- assert reduction in ctx.reduction_dict.keys()
-
- ctx.gamma = float(gamma)
- ctx.alpha = float(alpha)
- ctx.reduction = ctx.reduction_dict[reduction]
-
- channel_stats, _ = torch.max(input, dim=1)
- input_softmax = input - channel_stats.unsqueeze(1).expand_as(input)
- input_softmax.exp_()
-
- channel_stats = input_softmax.sum(dim=1)
- input_softmax /= channel_stats.unsqueeze(1).expand_as(input)
-
- output = input.new_zeros(input.size(0))
- ext_module.softmax_focal_loss_forward(
- input_softmax,
- target,
- weight,
- output,
- gamma=ctx.gamma,
- alpha=ctx.alpha)
-
- if ctx.reduction == ctx.reduction_dict['mean']:
- output = output.sum() / input.size(0)
- elif ctx.reduction == ctx.reduction_dict['sum']:
- output = output.sum()
- ctx.save_for_backward(input_softmax, target, weight)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- input_softmax, target, weight = ctx.saved_tensors
- buff = input_softmax.new_zeros(input_softmax.size(0))
- grad_input = input_softmax.new_zeros(input_softmax.size())
-
- ext_module.softmax_focal_loss_backward(
- input_softmax,
- target,
- weight,
- buff,
- grad_input,
- gamma=ctx.gamma,
- alpha=ctx.alpha)
-
- grad_input *= grad_output
- if ctx.reduction == ctx.reduction_dict['mean']:
- grad_input /= input_softmax.size(0)
- return grad_input, None, None, None, None, None
-
-
-softmax_focal_loss = SoftmaxFocalLossFunction.apply
-
-
-class SoftmaxFocalLoss(nn.Module):
-
- def __init__(self, gamma, alpha, weight=None, reduction='mean'):
- super(SoftmaxFocalLoss, self).__init__()
- self.gamma = gamma
- self.alpha = alpha
- self.register_buffer('weight', weight)
- self.reduction = reduction
-
- def forward(self, input, target):
- return softmax_focal_loss(input, target, self.gamma, self.alpha,
- self.weight, self.reduction)
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(gamma={self.gamma}, '
- s += f'alpha={self.alpha}, '
- s += f'reduction={self.reduction})'
- return s
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/segmentors/base.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/segmentors/base.py
deleted file mode 100644
index 172fc63b736c4f13be1cd909433bc260760a1eaa..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/segmentors/base.py
+++ /dev/null
@@ -1,273 +0,0 @@
-import logging
-import warnings
-from abc import ABCMeta, abstractmethod
-from collections import OrderedDict
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-from annotator.uniformer.mmcv.runner import auto_fp16
-
-
-class BaseSegmentor(nn.Module):
- """Base class for segmentors."""
-
- __metaclass__ = ABCMeta
-
- def __init__(self):
- super(BaseSegmentor, self).__init__()
- self.fp16_enabled = False
-
- @property
- def with_neck(self):
- """bool: whether the segmentor has neck"""
- return hasattr(self, 'neck') and self.neck is not None
-
- @property
- def with_auxiliary_head(self):
- """bool: whether the segmentor has auxiliary head"""
- return hasattr(self,
- 'auxiliary_head') and self.auxiliary_head is not None
-
- @property
- def with_decode_head(self):
- """bool: whether the segmentor has decode head"""
- return hasattr(self, 'decode_head') and self.decode_head is not None
-
- @abstractmethod
- def extract_feat(self, imgs):
- """Placeholder for extract features from images."""
- pass
-
- @abstractmethod
- def encode_decode(self, img, img_metas):
- """Placeholder for encode images with backbone and decode into a
- semantic segmentation map of the same size as input."""
- pass
-
- @abstractmethod
- def forward_train(self, imgs, img_metas, **kwargs):
- """Placeholder for Forward function for training."""
- pass
-
- @abstractmethod
- def simple_test(self, img, img_meta, **kwargs):
- """Placeholder for single image test."""
- pass
-
- @abstractmethod
- def aug_test(self, imgs, img_metas, **kwargs):
- """Placeholder for augmentation test."""
- pass
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in segmentor.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if pretrained is not None:
- logger = logging.getLogger()
- logger.info(f'load model from: {pretrained}')
-
- def forward_test(self, imgs, img_metas, **kwargs):
- """
- Args:
- imgs (List[Tensor]): the outer list indicates test-time
- augmentations and inner Tensor should have a shape NxCxHxW,
- which contains all images in the batch.
- img_metas (List[List[dict]]): the outer list indicates test-time
- augs (multiscale, flip, etc.) and the inner list indicates
- images in a batch.
- """
- for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]:
- if not isinstance(var, list):
- raise TypeError(f'{name} must be a list, but got '
- f'{type(var)}')
-
- num_augs = len(imgs)
- if num_augs != len(img_metas):
- raise ValueError(f'num of augmentations ({len(imgs)}) != '
- f'num of image meta ({len(img_metas)})')
- # all images in the same aug batch must have the same ori_shape
- # and pad shape
- for img_meta in img_metas:
- ori_shapes = [_['ori_shape'] for _ in img_meta]
- assert all(shape == ori_shapes[0] for shape in ori_shapes)
- img_shapes = [_['img_shape'] for _ in img_meta]
- assert all(shape == img_shapes[0] for shape in img_shapes)
- pad_shapes = [_['pad_shape'] for _ in img_meta]
- assert all(shape == pad_shapes[0] for shape in pad_shapes)
-
- if num_augs == 1:
- return self.simple_test(imgs[0], img_metas[0], **kwargs)
- else:
- return self.aug_test(imgs, img_metas, **kwargs)
-
- @auto_fp16(apply_to=('img', ))
- def forward(self, img, img_metas, return_loss=True, **kwargs):
- """Calls either :func:`forward_train` or :func:`forward_test` depending
- on whether ``return_loss`` is ``True``.
-
- Note this setting will change the expected inputs. When
- ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor
- and List[dict]), and when ``return_loss=False``, img and img_meta
- should be double nested (i.e. List[Tensor], List[List[dict]]), with
- the outer list indicating test time augmentations.
- """
- if return_loss:
- return self.forward_train(img, img_metas, **kwargs)
- else:
- return self.forward_test(img, img_metas, **kwargs)
-
- def train_step(self, data_batch, optimizer, **kwargs):
- """The iteration step during training.
-
- This method defines an iteration step during training, except for the
- back propagation and optimizer updating, which are done in an optimizer
- hook. Note that in some complicated cases or models, the whole process
- including back propagation and optimizer updating is also defined in
- this method, such as GAN.
-
- Args:
- data_batch (dict): The output of dataloader.
- optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of
- runner is passed to ``train_step()``. This argument is unused
- and reserved.
-
- Returns:
- dict: It should contain at least 3 keys: ``loss``, ``log_vars``,
- ``num_samples``.
- ``loss`` is a tensor for back propagation, which can be a
- weighted sum of multiple losses.
- ``log_vars`` contains all the variables to be sent to the
- logger.
- ``num_samples`` indicates the batch size (when the model is
- DDP, it means the batch size on each GPU), which is used for
- averaging the logs.
- """
- losses = self(**data_batch)
- loss, log_vars = self._parse_losses(losses)
-
- outputs = dict(
- loss=loss,
- log_vars=log_vars,
- num_samples=len(data_batch['img_metas']))
-
- return outputs
-
- def val_step(self, data_batch, **kwargs):
- """The iteration step during validation.
-
- This method shares the same signature as :func:`train_step`, but used
- during val epochs. Note that the evaluation after training epochs is
- not implemented with this method, but an evaluation hook.
- """
- output = self(**data_batch, **kwargs)
- return output
-
- @staticmethod
- def _parse_losses(losses):
- """Parse the raw outputs (losses) of the network.
-
- Args:
- losses (dict): Raw output of the network, which usually contain
- losses and other necessary information.
-
- Returns:
- tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor
- which may be a weighted sum of all losses, log_vars contains
- all the variables to be sent to the logger.
- """
- log_vars = OrderedDict()
- for loss_name, loss_value in losses.items():
- if isinstance(loss_value, torch.Tensor):
- log_vars[loss_name] = loss_value.mean()
- elif isinstance(loss_value, list):
- log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)
- else:
- raise TypeError(
- f'{loss_name} is not a tensor or list of tensors')
-
- loss = sum(_value for _key, _value in log_vars.items()
- if 'loss' in _key)
-
- log_vars['loss'] = loss
- for loss_name, loss_value in log_vars.items():
- # reduce loss when distributed training
- if dist.is_available() and dist.is_initialized():
- loss_value = loss_value.data.clone()
- dist.all_reduce(loss_value.div_(dist.get_world_size()))
- log_vars[loss_name] = loss_value.item()
-
- return loss, log_vars
-
- def show_result(self,
- img,
- result,
- palette=None,
- win_name='',
- show=False,
- wait_time=0,
- out_file=None,
- opacity=0.5):
- """Draw `result` over `img`.
-
- Args:
- img (str or Tensor): The image to be displayed.
- result (Tensor): The semantic segmentation results to draw over
- `img`.
- palette (list[list[int]] | np.ndarray | None): The palette of
- segmentation map. If None is given, random palette will be
- generated. Default: None
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- Default: 0.
- show (bool): Whether to show the image.
- Default: False.
- out_file (str or None): The filename to write the image.
- Default: None.
- opacity(float): Opacity of painted segmentation map.
- Default 0.5.
- Must be in (0, 1] range.
- Returns:
- img (Tensor): Only returned when `show` is False and `out_file` is None.
- """
- img = mmcv.imread(img)
- img = img.copy()
- seg = result[0]
- if palette is None:
- if self.PALETTE is None:
- palette = np.random.randint(
- 0, 255, size=(len(self.CLASSES), 3))
- else:
- palette = self.PALETTE
- palette = np.array(palette)
- assert palette.shape[0] == len(self.CLASSES)
- assert palette.shape[1] == 3
- assert len(palette.shape) == 2
- assert 0 < opacity <= 1.0
- color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8)
- for label, color in enumerate(palette):
- color_seg[seg == label, :] = color
- # convert to BGR
- color_seg = color_seg[..., ::-1]
-
- img = img * (1 - opacity) + color_seg * opacity
- img = img.astype(np.uint8)
- # if out_file specified, do not show image in window
- if out_file is not None:
- show = False
-
- if show:
- mmcv.imshow(img, win_name, wait_time)
- if out_file is not None:
- mmcv.imwrite(img, out_file)
-
- if not (show or out_file):
- warnings.warn('show==False and out_file is not specified, only '
- 'result image will be returned')
- return img
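The `_parse_losses` convention above — every log entry whose key contains `'loss'` is summed into the total, everything else is logged but not back-propagated — is easy to lose track of. A minimal plain-Python sketch of the same rule (scalar floats instead of tensors, no distributed reduction):

```python
from collections import OrderedDict

def parse_losses(losses):
    # Mirror of BaseSegmentor._parse_losses for plain floats:
    # only entries whose key contains 'loss' contribute to the total.
    log_vars = OrderedDict()
    for name, value in losses.items():
        if isinstance(value, (int, float)):
            log_vars[name] = float(value)
        elif isinstance(value, list):
            log_vars[name] = float(sum(value))
        else:
            raise TypeError(f"{name} is not a number or list of numbers")
    total = sum(v for k, v in log_vars.items() if "loss" in k)
    log_vars["loss"] = total
    return total, log_vars
```

A metric such as `decode.acc_seg` therefore shows up in `log_vars` but never in `loss`, which is exactly why naming conventions matter when adding new heads.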
diff --git a/spaces/PaddlePaddle/PaddleOCR/README.md b/spaces/PaddlePaddle/PaddleOCR/README.md
deleted file mode 100644
index fbe6ca00bd2d09a8be2e62dc586728516124d3f2..0000000000000000000000000000000000000000
--- a/spaces/PaddlePaddle/PaddleOCR/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: PaddleOCR
-emoji: ⚡
-colorFrom: pink
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/ports.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/ports.go
deleted file mode 100644
index b1703af297a8fd4fbba850cd2f46b6e6cd0b0246..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/ports.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/compile.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/compile.go
deleted file mode 100644
index 49e1f059ad965eeb2606f732a03aaf1617e28a8a..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/compile.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/image_gen.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/image_gen.py
deleted file mode 100644
index 0809fcdd3e38b52a2ce09ca1444f2574813d40f9..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/image_gen.py
+++ /dev/null
@@ -1,163 +0,0 @@
-""" Image Generation Module for AutoGPT."""
-import io
-import os.path
-import uuid
-from base64 import b64decode
-
-import openai
-import requests
-from PIL import Image
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-CFG = Config()
-
-
-def generate_image(prompt: str, size: int = 256) -> str:
- """Generate an image from a prompt.
-
- Args:
- prompt (str): The prompt to use
- size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace)
-
- Returns:
- str: The filename of the image
- """
- filename = f"{str(uuid.uuid4())}.jpg"
-
- # DALL-E
- if CFG.image_provider == "dalle":
- return generate_image_with_dalle(prompt, filename, size)
- # HuggingFace
- elif CFG.image_provider == "huggingface":
- return generate_image_with_hf(prompt, filename)
- # SD WebUI
- elif CFG.image_provider == "sdwebui":
- return generate_image_with_sd_webui(prompt, filename, size)
- return "No Image Provider Set"
-
-
-def generate_image_with_hf(prompt: str, filename: str) -> str:
- """Generate an image with HuggingFace's API.
-
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
-
- Returns:
- str: The filename of the image
- """
- API_URL = (
- f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}"
- )
- if CFG.huggingface_api_token is None:
- raise ValueError(
- "You need to set your Hugging Face API token in the config file."
- )
- headers = {
- "Authorization": f"Bearer {CFG.huggingface_api_token}",
- "X-Use-Cache": "false",
- }
-
- response = requests.post(
- API_URL,
- headers=headers,
- json={
- "inputs": prompt,
- },
- )
-
- image = Image.open(io.BytesIO(response.content))
- print(f"Image Generated for prompt:{prompt}")
-
- image.save(path_in_workspace(filename))
-
- return f"Saved to disk: {filename}"
-
-
-def generate_image_with_dalle(prompt: str, filename: str, size: int = 256) -> str:
- """Generate an image with DALL-E.
-
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
-
- Returns:
- str: The filename of the image
- """
- openai.api_key = CFG.openai_api_key
-
- # Check for supported image sizes
- if size not in [256, 512, 1024]:
- closest = min([256, 512, 1024], key=lambda x: abs(x - size))
- print(
- f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}."
- )
- size = closest
-
- response = openai.Image.create(
- prompt=prompt,
- n=1,
- size=f"{size}x{size}",
- response_format="b64_json",
- )
-
- print(f"Image Generated for prompt:{prompt}")
-
- image_data = b64decode(response["data"][0]["b64_json"])
-
- with open(path_in_workspace(filename), mode="wb") as png:
- png.write(image_data)
-
- return f"Saved to disk: {filename}"
-
-
-def generate_image_with_sd_webui(
- prompt: str,
- filename: str,
- size: int = 512,
- negative_prompt: str = "",
- extra: dict = {},
-) -> str:
- """Generate an image with Stable Diffusion webui.
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
- size (int, optional): The size of the image. Defaults to 256.
- negative_prompt (str, optional): The negative prompt to use. Defaults to "".
- extra (dict, optional): Extra parameters to pass to the API. Defaults to {}.
- Returns:
- str: The filename of the image
- """
- # Create a session and set the basic auth if needed
- s = requests.Session()
- if CFG.sd_webui_auth:
- username, password = CFG.sd_webui_auth.split(":")
- s.auth = (username, password or "")
-
- # Generate the images
- response = requests.post(
- f"{CFG.sd_webui_url}/sdapi/v1/txt2img",
- json={
- "prompt": prompt,
- "negative_prompt": negative_prompt,
- "sampler_index": "DDIM",
- "steps": 20,
- "cfg_scale": 7.0,
- "width": size,
- "height": size,
- "n_iter": 1,
- **extra,
- },
- )
-
- print(f"Image Generated for prompt:{prompt}")
-
- # Save the image to disk
- response = response.json()
- b64 = b64decode(response["images"][0].split(",", 1)[0])
- image = Image.open(io.BytesIO(b64))
- image.save(path_in_workspace(filename))
-
- return f"Saved to disk: {filename}"
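The DALL-E branch above snaps unsupported sizes to the closest allowed resolution. That logic can be factored into a tiny standalone helper (hypothetical name; same `min(..., key=...)` idiom as in the function body):

```python
def snap_size(size: int, supported=(256, 512, 1024)) -> int:
    # Pick the supported resolution closest to the requested one,
    # mirroring the size check in generate_image_with_dalle.
    return min(supported, key=lambda s: abs(s - size))
```

Ties go to whichever supported value `min` encounters first, so requests exactly halfway between two resolutions resolve to the smaller one.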
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/parse.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/parse.py
deleted file mode 100644
index f60f0d611b8d75692221d0edd7dc993b0a6445c9..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/parse.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-from io import StringIO
-
-from .file_client import FileClient
-
-
-def list_from_file(filename,
- prefix='',
- offset=0,
- max_num=0,
- encoding='utf-8',
- file_client_args=None):
- """Load a text file and parse the content as a list of strings.
-
- Note:
- In v1.3.16 and later, ``list_from_file`` supports loading a text file
- stored in different backends and parsing its content as a list of
- strings.
-
- Args:
- filename (str): Filename.
- prefix (str): The prefix to be inserted to the beginning of each item.
- offset (int): The offset of lines.
- max_num (int): The maximum number of lines to be read,
- zeros and negatives mean no limitation.
- encoding (str): Encoding used to open the file. Default utf-8.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> list_from_file('/path/of/your/file') # disk
- ['hello', 'world']
- >>> list_from_file('s3://path/of/your/file') # ceph or petrel
- ['hello', 'world']
-
- Returns:
- list[str]: A list of strings.
- """
- cnt = 0
- item_list = []
- file_client = FileClient.infer_client(file_client_args, filename)
- with StringIO(file_client.get_text(filename, encoding)) as f:
- for _ in range(offset):
- f.readline()
- for line in f:
- if 0 < max_num <= cnt:
- break
- item_list.append(prefix + line.rstrip('\n\r'))
- cnt += 1
- return item_list
-
-
-def dict_from_file(filename,
- key_type=str,
- encoding='utf-8',
- file_client_args=None):
- """Load a text file and parse the content as a dict.
-
- Each line of the text file will be two or more columns split by
- whitespace or tabs. The first column will be parsed as dict keys, and
- the following columns will be parsed as dict values.
-
- Note:
- In v1.3.16 and later, ``dict_from_file`` supports loading a text file
- stored in different backends and parsing its content as a dict.
-
- Args:
- filename(str): Filename.
- key_type(type): Type of the dict keys. str is used by default and
- type conversion will be performed if specified.
- encoding (str): Encoding used to open the file. Default utf-8.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> dict_from_file('/path/of/your/file') # disk
- {'key1': 'value1', 'key2': 'value2'}
- >>> dict_from_file('s3://path/of/your/file') # ceph or petrel
- {'key1': 'value1', 'key2': 'value2'}
-
- Returns:
- dict: The parsed contents.
- """
- mapping = {}
- file_client = FileClient.infer_client(file_client_args, filename)
- with StringIO(file_client.get_text(filename, encoding)) as f:
- for line in f:
- items = line.rstrip('\n').split()
- assert len(items) >= 2
- key = key_type(items[0])
- val = items[1:] if len(items) > 2 else items[1]
- mapping[key] = val
- return mapping
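The column-splitting rule of `dict_from_file` — a single remaining column stays a string, two or more become a list — can be exercised without a `FileClient`. A small sketch of just the parsing logic:

```python
def parse_mapping(text: str, key_type=str) -> dict:
    # Same rule as dict_from_file: first column is the key; one remaining
    # column stays a string, several remaining columns become a list.
    mapping = {}
    for line in text.splitlines():
        items = line.split()
        if not items:
            continue
        assert len(items) >= 2
        mapping[key_type(items[0])] = items[1:] if len(items) > 2 else items[1]
    return mapping
```

Note the asymmetry in value types: downstream code has to handle both `str` and `list[str]` values, which is worth remembering when consuming these annotation files.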
diff --git a/spaces/QINGCHE/TSA/classification.py b/spaces/QINGCHE/TSA/classification.py
deleted file mode 100644
index 544362a3ef6664a9a290abd3cb712942b17b22cc..0000000000000000000000000000000000000000
--- a/spaces/QINGCHE/TSA/classification.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import gensim
-import numpy as np
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-from transformers import AutoTokenizer, AutoModel
-import torch
-
-
-def classify_by_topic(articles, central_topics):
-
- # Compute the similarity to each central topic; returns a matrix
- def compute_similarity(articles, central_topics):
-
- model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
- tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
-
- def sentence_to_vector(sentence, context):
-
- # repeat the sentence to weight it against its surrounding context
- sentence = context[0]+context[1]+sentence*4+context[2]+context[3]
- tokens = tokenizer.encode_plus(
- sentence, add_special_tokens=True, return_tensors="pt",max_length = 512,truncation=True)
-
- outputs = model(**tokens)
- hidden_states = outputs.last_hidden_state
-
- vector = np.squeeze(torch.mean(
- hidden_states, dim=1).detach().numpy())
- return vector
-
- # Get the surrounding context of a sentence (two on each side)
- def get_context(sentences, index):
- if index == 0:
- prev_sentence = ""
- pprev_sentence = ""
- elif index == 1:
- prev_sentence = sentences[index-1]
- pprev_sentence = ""
- else:
- prev_sentence = sentences[index-1]
- pprev_sentence = sentences[index-2]
- if index == len(sentences) - 1:
- next_sentence = ""
- nnext_sentence = ""
- elif index == len(sentences) - 2:
- next_sentence = sentences[index+1]
- nnext_sentence = ""
- else:
- next_sentence = sentences[index+1]
- nnext_sentence = sentences[index+2]
- return (pprev_sentence, prev_sentence, next_sentence, nnext_sentence)
-
- doc_vectors = [sentence_to_vector(sentence, get_context(
- articles, i)) for i, sentence in enumerate(articles)]
- topic_vectors = [sentence_to_vector(sentence, get_context(
- central_topics, i)) for i, sentence in enumerate(central_topics)]
- # Compute the cosine similarity matrix
- cos_sim_matrix = cosine_similarity(doc_vectors, topic_vectors)
-
- return cos_sim_matrix
-
- # Assign each article to its closest topic
- def group_by_topic(articles, central_topics, similarity_matrix):
- group = []
- original_articles = articles.copy()
- for article, similarity in zip(original_articles, similarity_matrix):
- max_similarity = max(similarity)
- max_index = similarity.tolist().index(max_similarity)
-
- group.append((article, central_topics[max_index]))
-
- return group
-
- # Run the classification
- similarity_matrix = compute_similarity(articles, central_topics)
- groups = group_by_topic(articles, central_topics, similarity_matrix)
-
- return groups
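Stripped of the transformer encoding, the assignment step above is just a row-wise argmax over the similarity matrix. A minimal sketch of that final step:

```python
import numpy as np

def group_by_topic(articles, topics, similarity):
    # similarity: (n_articles, n_topics); each article is paired with
    # the topic column holding its highest similarity score.
    return [(a, topics[int(np.argmax(row))])
            for a, row in zip(articles, similarity)]
```

Because only the argmax matters, any monotone rescaling of the similarity scores (e.g. softmax temperature) leaves the grouping unchanged.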
diff --git a/spaces/RMXK/RVC_HFF/lib/globals/globals.py b/spaces/RMXK/RVC_HFF/lib/globals/globals.py
deleted file mode 100644
index d0da59d56e8c2e482bcda5eeae7cf797b830560e..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/lib/globals/globals.py
+++ /dev/null
@@ -1,5 +0,0 @@
-DoFormant: bool = False
-Quefrency: float = 8.0
-Timbre: float = 1.2
-
-NotesOrHertz: bool = False
\ No newline at end of file
diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/sampler.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/sampler.py
deleted file mode 100644
index 3f2e5a276a80b997561549ed3e8466da3876e382..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/sampler.py
+++ /dev/null
@@ -1,417 +0,0 @@
-# Copyright 2019-present NAVER Corp.
-# CC BY-NC-SA 3.0
-# Available only for non-commercial use
-
-import pdb
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-""" Different samplers, each specifying how to sample pixels for the AP loss.
-"""
-
-
-class FullSampler(nn.Module):
- """all pixels are selected
- - feats: keypoint descriptors
- - confs: reliability values
- """
-
- def __init__(self):
- nn.Module.__init__(self)
- self.mode = "bilinear"
- self.padding = "zeros"
-
- @staticmethod
- def _aflow_to_grid(aflow):
- H, W = aflow.shape[2:]
- grid = aflow.permute(0, 2, 3, 1).clone()
- grid[:, :, :, 0] *= 2 / (W - 1)
- grid[:, :, :, 1] *= 2 / (H - 1)
- grid -= 1
- grid[torch.isnan(grid)] = 9e9 # invalids
- return grid
-
- def _warp(self, feats, confs, aflow):
- if isinstance(aflow, tuple):
- return aflow # result was precomputed
- feat1, feat2 = feats
- conf1, conf2 = confs if confs else (None, None)
-
- B, two, H, W = aflow.shape
- D = feat1.shape[1]
- assert feat1.shape == feat2.shape == (B, D, H, W) # D = 128, B = batch
- assert conf1.shape == conf2.shape == (B, 1, H, W) if confs else True
-
- # warp img2 to img1
- grid = self._aflow_to_grid(aflow)
- ones2 = feat2.new_ones(feat2[:, 0:1].shape)
- feat2to1 = F.grid_sample(feat2, grid, mode=self.mode, padding_mode=self.padding)
- mask2to1 = F.grid_sample(ones2, grid, mode="nearest", padding_mode="zeros")
- conf2to1 = (
- F.grid_sample(conf2, grid, mode=self.mode, padding_mode=self.padding)
- if confs
- else None
- )
- return feat2to1, mask2to1.byte(), conf2to1
-
- def _warp_positions(self, aflow):
- B, two, H, W = aflow.shape
- assert two == 2
-
- Y = torch.arange(H, device=aflow.device)
- X = torch.arange(W, device=aflow.device)
- XY = torch.stack(torch.meshgrid(Y, X)[::-1], dim=0)
- XY = XY[None].expand(B, 2, H, W).float()
-
- grid = self._aflow_to_grid(aflow)
- XY2 = F.grid_sample(XY, grid, mode="bilinear", padding_mode="zeros")
- return XY, XY2
-
-
-class SubSampler(FullSampler):
- """pixels are selected in an uniformly spaced grid"""
-
- def __init__(self, border, subq, subd, perimage=False):
- FullSampler.__init__(self)
- assert subq % subd == 0, "subq must be multiple of subd"
- self.sub_q = subq
- self.sub_d = subd
- self.border = border
- self.perimage = perimage
-
- def __repr__(self):
- return "SubSampler(border=%d, subq=%d, subd=%d, perimage=%d)" % (
- self.border,
- self.sub_q,
- self.sub_d,
- self.perimage,
- )
-
- def __call__(self, feats, confs, aflow):
- feat1, conf1 = feats[0], (confs[0] if confs else None)
- # warp with optical flow in img1 coords
- feat2, mask2, conf2 = self._warp(feats, confs, aflow)
-
- # subsample img1
- slq = slice(self.border, -self.border or None, self.sub_q)
- feat1 = feat1[:, :, slq, slq]
- conf1 = conf1[:, :, slq, slq] if confs else None
- # subsample img2
- sld = slice(self.border, -self.border or None, self.sub_d)
- feat2 = feat2[:, :, sld, sld]
- mask2 = mask2[:, :, sld, sld]
- conf2 = conf2[:, :, sld, sld] if confs else None
-
- B, D, Hq, Wq = feat1.shape
- B, D, Hd, Wd = feat2.shape
-
- # compute gt
- if self.perimage or self.sub_q != self.sub_d:
- # compute ground-truth by comparing pixel indices
- f = feats[0][0:1, 0] if self.perimage else feats[0][:, 0]
- idxs = torch.arange(f.numel(), dtype=torch.int64, device=feat1.device).view(
- f.shape
- )
- idxs1 = idxs[:, slq, slq].reshape(-1, Hq * Wq)
- idxs2 = idxs[:, sld, sld].reshape(-1, Hd * Wd)
- if self.perimage:
- gt = idxs1[0].view(-1, 1) == idxs2[0].view(1, -1)
- gt = gt[None, :, :].expand(B, Hq * Wq, Hd * Wd)
- else:
- gt = idxs1.view(-1, 1) == idxs2.view(1, -1)
- else:
- gt = torch.eye(
- feat1[:, 0].numel(), dtype=torch.uint8, device=feat1.device
- ) # always binary for AP loss
-
- # compute all images together
- queries = feat1.reshape(B, D, -1) # B x D x (Hq x Wq)
- database = feat2.reshape(B, D, -1) # B x D x (Hd x Wd)
- if self.perimage:
- queries = queries.transpose(1, 2) # B x (Hd x Wd) x D
- scores = torch.bmm(queries, database) # B x (Hq x Wq) x (Hd x Wd)
- else:
- queries = queries.transpose(1, 2).reshape(-1, D) # (B x Hq x Wq) x D
- database = database.transpose(1, 0).reshape(D, -1) # D x (B x Hd x Wd)
- scores = torch.matmul(queries, database) # (B x Hq x Wq) x (B x Hd x Wd)
-
- # compute reliability
- qconf = (conf1 + conf2) / 2 if confs else None
-
- assert gt.shape == scores.shape
- return scores, gt, mask2, qconf
-
-
-class NghSampler(FullSampler):
- """all pixels in a small neighborhood"""
-
- def __init__(self, ngh, subq=1, subd=1, ignore=1, border=None):
- FullSampler.__init__(self)
- assert 0 <= ignore < ngh
- self.ngh = ngh
- self.ignore = ignore
- assert subd <= ngh
- self.sub_q = subq
- self.sub_d = subd
- if border is None:
- border = ngh
- assert border >= ngh, "border has to be larger than ngh"
- self.border = border
-
- def __repr__(self):
- return "NghSampler(ngh=%d, subq=%d, subd=%d, ignore=%d, border=%d)" % (
- self.ngh,
- self.sub_q,
- self.sub_d,
- self.ignore,
- self.border,
- )
-
- def trans(self, arr, i, j):
- s = lambda i: slice(self.border + i, i - self.border or None, self.sub_q)
- return arr[:, :, s(j), s(i)]
-
- def __call__(self, feats, confs, aflow):
- feat1, conf1 = feats[0], (confs[0] if confs else None)
- # warp with optical flow in img1 coords
- feat2, mask2, conf2 = self._warp(feats, confs, aflow)
-
- qfeat = self.trans(feat1, 0, 0)
- qconf = (
- (self.trans(conf1, 0, 0) + self.trans(conf2, 0, 0)) / 2 if confs else None
- )
- mask2 = self.trans(mask2, 0, 0)
- scores_at = lambda i, j: (qfeat * self.trans(feat2, i, j)).sum(dim=1)
-
- # compute scores for all neighbors
- B, D = feat1.shape[:2]
- min_d = self.ignore**2
- max_d = self.ngh**2
- rad = (self.ngh // self.sub_d) * self.ngh # make an integer multiple
- negs = []
- offsets = []
- for j in range(-rad, rad + 1, self.sub_d):
- for i in range(-rad, rad + 1, self.sub_d):
- if not (min_d < i * i + j * j <= max_d):
- continue # out of scope
- offsets.append((i, j)) # Note: this list is just for debug
- negs.append(scores_at(i, j))
-
- scores = torch.stack([scores_at(0, 0)] + negs, dim=-1)
- gt = scores.new_zeros(scores.shape, dtype=torch.uint8)
- gt[..., 0] = 1 # only the center point is positive
-
- return scores, gt, mask2, qconf
-
-
-class FarNearSampler(FullSampler):
- """Sample pixels from *both* a small neighborhood *and* far-away pixels.
-
- How does it work?
- 1) Queries are sampled from img1,
- - at least `border` pixels from borders and
- - on a grid with step = `subq`
-
- 2) Close database pixels
- - from the corresponding image (img2),
- - within a `ngh` distance radius
- - on a grid with step = `subd_ngh`
- - ignored if distance to query is >0 and <=`ignore`
-
- 3) Far-away database pixels,
- - from all batch images in `img2`
- - at least `border` pixels from borders
- - on a grid with step = `subd_far`
- """
-
- def __init__(
- self, subq, ngh, subd_ngh, subd_far, border=None, ignore=1, maxpool_ngh=False
- ):
- FullSampler.__init__(self)
- border = border or ngh
- assert ignore < ngh < subd_far, "neighborhood needs to be smaller than far step"
- self.close_sampler = NghSampler(
- ngh=ngh, subq=subq, subd=subd_ngh, ignore=not (maxpool_ngh), border=border
- )
- self.faraway_sampler = SubSampler(border=border, subq=subq, subd=subd_far)
- self.maxpool_ngh = maxpool_ngh
-
- def __repr__(self):
- c, f = self.close_sampler, self.faraway_sampler
- res = "FarNearSampler(subq=%d, ngh=%d" % (c.sub_q, c.ngh)
- res += ", subd_ngh=%d, subd_far=%d" % (c.sub_d, f.sub_d)
- res += ", border=%d, ign=%d" % (f.border, c.ignore)
- res += ", maxpool_ngh=%d" % self.maxpool_ngh
- return res + ")"
-
- def __call__(self, feats, confs, aflow):
- # warp with optical flow in img1 coords
- aflow = self._warp(feats, confs, aflow)
-
- # sample ngh pixels
- scores1, gt1, msk1, conf1 = self.close_sampler(feats, confs, aflow)
- scores1, gt1 = scores1.view(-1, scores1.shape[-1]), gt1.view(-1, gt1.shape[-1])
- if self.maxpool_ngh:
- # we consider all scores from ngh as potential positives
- scores1, self._cached_maxpool_ngh = scores1.max(dim=1, keepdim=True)
- gt1 = gt1[:, 0:1]
-
- # sample far pixels
- scores2, gt2, msk2, conf2 = self.faraway_sampler(feats, confs, aflow)
- # assert (msk1 == msk2).all()
- # assert (conf1 == conf2).all()
-
- return (
- torch.cat((scores1, scores2), dim=1),
- torch.cat((gt1, gt2), dim=1),
- msk1,
- conf1 if confs else None,
- )
-
-
-class NghSampler2(nn.Module):
- """Similar to NghSampler, but doesn't warp the 2nd image.
- Distance to GT => 0 ... pos_d ... neg_d ... ngh
- Pixel label => + + + + + + 0 0 - - - - - - -
-
- Subsample on query side: if > 0, regular grid
- < 0, random points
- In both cases, the number of query points is = W*H/subq**2
- """
-
- def __init__(
- self,
- ngh,
- subq=1,
- subd=1,
- pos_d=0,
- neg_d=2,
- border=None,
- maxpool_pos=True,
- subd_neg=0,
- ):
- nn.Module.__init__(self)
- assert 0 <= pos_d < neg_d <= (ngh if ngh else 99)
- self.ngh = ngh
- self.pos_d = pos_d
- self.neg_d = neg_d
- assert subd <= ngh or ngh == 0
- assert subq != 0
- self.sub_q = subq
- self.sub_d = subd
- self.sub_d_neg = subd_neg
- if border is None:
- border = ngh
- assert border >= ngh, "border has to be larger than ngh"
- self.border = border
- self.maxpool_pos = maxpool_pos
- self.precompute_offsets()
-
- def precompute_offsets(self):
- pos_d2 = self.pos_d**2
- neg_d2 = self.neg_d**2
- rad2 = self.ngh**2
- rad = (self.ngh // self.sub_d) * self.ngh # make an integer multiple
- pos = []
- neg = []
- for j in range(-rad, rad + 1, self.sub_d):
- for i in range(-rad, rad + 1, self.sub_d):
- d2 = i * i + j * j
- if d2 <= pos_d2:
- pos.append((i, j))
- elif neg_d2 <= d2 <= rad2:
- neg.append((i, j))
-
- self.register_buffer("pos_offsets", torch.LongTensor(pos).view(-1, 2).t())
- self.register_buffer("neg_offsets", torch.LongTensor(neg).view(-1, 2).t())
-
- def gen_grid(self, step, aflow):
- B, two, H, W = aflow.shape
- dev = aflow.device
- b1 = torch.arange(B, device=dev)
- if step > 0:
- # regular grid
- x1 = torch.arange(self.border, W - self.border, step, device=dev)
- y1 = torch.arange(self.border, H - self.border, step, device=dev)
- H1, W1 = len(y1), len(x1)
- x1 = x1[None, None, :].expand(B, H1, W1).reshape(-1)
- y1 = y1[None, :, None].expand(B, H1, W1).reshape(-1)
- b1 = b1[:, None, None].expand(B, H1, W1).reshape(-1)
- shape = (B, H1, W1)
- else:
- # randomly spread
- n = (H - 2 * self.border) * (W - 2 * self.border) // step**2
- x1 = torch.randint(self.border, W - self.border, (n,), device=dev)
- y1 = torch.randint(self.border, H - self.border, (n,), device=dev)
- x1 = x1[None, :].expand(B, n).reshape(-1)
- y1 = y1[None, :].expand(B, n).reshape(-1)
- b1 = b1[:, None].expand(B, n).reshape(-1)
- shape = (B, n)
- return b1, y1, x1, shape
-
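The regular-grid branch of `gen_grid` above can be sketched in isolation. The values for `border`, `H`, `W`, and `step` below are illustrative, not taken from the original configuration:

```python
import numpy as np

# Standalone sketch of gen_grid's regular-grid branch (illustrative values):
# query points are placed every `step` pixels, staying `border` pixels away
# from the image edges.
border, H, W, step = 4, 32, 48, 8
xs = np.arange(border, W - border, step)  # x coordinates of the grid
ys = np.arange(border, H - border, step)  # y coordinates of the grid
grid_h, grid_w = len(ys), len(xs)
print(grid_h, grid_w)
```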
- def forward(self, feats, confs, aflow, **kw):
- B, two, H, W = aflow.shape
- assert two == 2
- feat1, conf1 = feats[0], (confs[0] if confs else None)
- feat2, conf2 = feats[1], (confs[1] if confs else None)
-
- # positions in the first image
- b1, y1, x1, shape = self.gen_grid(self.sub_q, aflow)
-
- # sample features from first image
- feat1 = feat1[b1, :, y1, x1]
- qconf = conf1[b1, :, y1, x1].view(shape) if confs else None
-
- # sample GT from second image
- b2 = b1
- xy2 = (aflow[b1, :, y1, x1] + 0.5).long().t()
- mask = (0 <= xy2[0]) * (0 <= xy2[1]) * (xy2[0] < W) * (xy2[1] < H)
- mask = mask.view(shape)
-
- def clamp(xy):
- torch.clamp(xy[0], 0, W - 1, out=xy[0])
- torch.clamp(xy[1], 0, H - 1, out=xy[1])
- return xy
-
- # compute positive scores
- xy2p = clamp(xy2[:, None, :] + self.pos_offsets[:, :, None])
- pscores = (feat1[None, :, :] * feat2[b2, :, xy2p[1], xy2p[0]]).sum(dim=-1).t()
- # xy1p = clamp(torch.stack((x1,y1))[:,None,:] + self.pos_offsets[:,:,None])
- # grid = FullSampler._aflow_to_grid(aflow)
- # feat2p = F.grid_sample(feat2, grid, mode='bilinear', padding_mode='border')
- # pscores = (feat1[None,:,:] * feat2p[b1,:,xy1p[1], xy1p[0]]).sum(dim=-1).t()
- if self.maxpool_pos:
- pscores, pos = pscores.max(dim=1, keepdim=True)
- if confs:
- sel = clamp(xy2 + self.pos_offsets[:, pos.view(-1)])
- qconf = (qconf + conf2[b2, :, sel[1], sel[0]].view(shape)) / 2
-
- # compute negative scores
- xy2n = clamp(xy2[:, None, :] + self.neg_offsets[:, :, None])
- nscores = (feat1[None, :, :] * feat2[b2, :, xy2n[1], xy2n[0]]).sum(dim=-1).t()
-
- if self.sub_d_neg:
- # add distractors from a grid
- b3, y3, x3, _ = self.gen_grid(self.sub_d_neg, aflow)
- distractors = feat2[b3, :, y3, x3]
- dscores = torch.matmul(feat1, distractors.t())
- del distractors
-
- # remove scores that correspond to positives or nulls
- dis2 = (x3 - xy2[0][:, None]) ** 2 + (y3 - xy2[1][:, None]) ** 2
- dis2 += (b3 != b2[:, None]).long() * self.neg_d**2
- dscores[dis2 < self.neg_d**2] = 0
-
- scores = torch.cat((pscores, nscores, dscores), dim=1)
- else:
- # concat everything
- scores = torch.cat((pscores, nscores), dim=1)
-
- gt = scores.new_zeros(scores.shape, dtype=torch.uint8)
- gt[:, : pscores.shape[1]] = 1
-
- return scores, gt, mask, qconf
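The score/ground-truth assembly at the end of `forward` can be sketched with NumPy. Shapes and values below are illustrative; the real code operates on feature dot products:

```python
import numpy as np

# Minimal sketch of how forward() assembles its outputs: pscores holds
# similarities at positive offsets, nscores at negative offsets; the
# ground-truth matrix marks the positive columns with 1.
Q, P, N = 4, 3, 5
rng = np.random.default_rng(0)
pscores = rng.random((Q, P))
nscores = rng.random((Q, N))

# mirror maxpool_pos=True: keep only the best-matching positive per query
pscores = pscores.max(axis=1, keepdims=True)  # (Q, 1)

scores = np.concatenate([pscores, nscores], axis=1)  # (Q, 1 + N)
gt = np.zeros(scores.shape, dtype=np.uint8)
gt[:, :pscores.shape[1]] = 1  # first columns are the positives
print(scores.shape, int(gt.sum()))
```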
diff --git a/spaces/Reha2704/VToonify/vtoonify/model/raft/core/utils/utils.py b/spaces/Reha2704/VToonify/vtoonify/model/raft/core/utils/utils.py
deleted file mode 100644
index 741ccfe4d0d778c3199c586d368edc2882d4fff8..0000000000000000000000000000000000000000
--- a/spaces/Reha2704/VToonify/vtoonify/model/raft/core/utils/utils.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import torch
-import torch.nn.functional as F
-import numpy as np
-from scipy import interpolate
-
-
-class InputPadder:
- """ Pads images such that dimensions are divisible by 8 """
- def __init__(self, dims, mode='sintel'):
- self.ht, self.wd = dims[-2:]
- pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8
- pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8
- if mode == 'sintel':
- self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2]
- else:
- self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht]
-
- def pad(self, *inputs):
- return [F.pad(x, self._pad, mode='replicate') for x in inputs]
-
- def unpad(self,x):
- ht, wd = x.shape[-2:]
- c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]]
- return x[..., c[0]:c[1], c[2]:c[3]]
-
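The padding amounts computed in `InputPadder.__init__` round each spatial dimension up to the next multiple of 8 (returning 0 when it already is one). A minimal sketch of that arithmetic:

```python
# Same expression as InputPadder above: extra pixels needed to round a
# dimension up to the next multiple of 8 (0 if already divisible).
def pad_to_multiple_of_8(n):
    return (((n // 8) + 1) * 8 - n) % 8

# 440 is already divisible by 8; 436 needs 4 extra pixels (436 + 4 = 440)
print(pad_to_multiple_of_8(440), pad_to_multiple_of_8(436))
```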
-def forward_interpolate(flow):
- flow = flow.detach().cpu().numpy()
- dx, dy = flow[0], flow[1]
-
- ht, wd = dx.shape
- x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht))
-
- x1 = x0 + dx
- y1 = y0 + dy
-
- x1 = x1.reshape(-1)
- y1 = y1.reshape(-1)
- dx = dx.reshape(-1)
- dy = dy.reshape(-1)
-
- valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht)
- x1 = x1[valid]
- y1 = y1[valid]
- dx = dx[valid]
- dy = dy[valid]
-
- flow_x = interpolate.griddata(
- (x1, y1), dx, (x0, y0), method='nearest', fill_value=0)
-
- flow_y = interpolate.griddata(
- (x1, y1), dy, (x0, y0), method='nearest', fill_value=0)
-
- flow = np.stack([flow_x, flow_y], axis=0)
- return torch.from_numpy(flow).float()
-
-
-def bilinear_sampler(img, coords, mode='bilinear', mask=False):
- """ Wrapper for grid_sample, uses pixel coordinates """
- H, W = img.shape[-2:]
- xgrid, ygrid = coords.split([1,1], dim=-1)
- xgrid = 2*xgrid/(W-1) - 1
- ygrid = 2*ygrid/(H-1) - 1
-
- grid = torch.cat([xgrid, ygrid], dim=-1)
- img = F.grid_sample(img, grid, align_corners=True)
-
- if mask:
- mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1)
- return img, mask.float()
-
- return img
-
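The coordinate normalization inside `bilinear_sampler` maps pixel indices onto the `[-1, 1]` range that `grid_sample` expects under the `align_corners=True` convention, where pixel 0 maps to -1 and pixel `size - 1` maps to +1. A minimal sketch:

```python
# Pixel index -> normalized coordinate, matching bilinear_sampler above
# (align_corners=True: pixel 0 -> -1, pixel size-1 -> +1).
def to_normalized(x, size):
    return 2 * x / (size - 1) - 1

W = 100
print(to_normalized(0, W), to_normalized(W - 1, W))
```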
-
-def coords_grid(batch, ht, wd, device):
- coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device))
- coords = torch.stack(coords[::-1], dim=0).float()
- return coords[None].repeat(batch, 1, 1, 1)
-
-
-def upflow8(flow, mode='bilinear'):
- new_size = (8 * flow.shape[2], 8 * flow.shape[3])
- return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True)
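`coords_grid` stacks the meshgrid outputs in reversed order (`coords[::-1]`), so channel 0 holds x (column) coordinates and channel 1 holds y (row) coordinates. The same layout convention can be sketched in NumPy:

```python
import numpy as np

# Same layout as coords_grid above, in NumPy: channel 0 = x, channel 1 = y.
ht, wd = 2, 3
ys, xs = np.meshgrid(np.arange(ht), np.arange(wd), indexing="ij")
coords = np.stack([xs, ys], axis=0).astype(np.float32)  # (2, ht, wd)
print(coords.shape)
```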
diff --git a/spaces/RoCobo/WiggleGAN/app.py b/spaces/RoCobo/WiggleGAN/app.py
deleted file mode 100644
index 493660d798d53fa6b88f512ddcadecb17b6fc5c5..0000000000000000000000000000000000000000
--- a/spaces/RoCobo/WiggleGAN/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-import gradio as gr
-import cv2
-import torch
-import urllib.request
-import numpy as np
-import matplotlib.pyplot as plt
-from PIL import Image
-import subprocess
-
-def calculate_depth(model_type, gan_type, dim, slider, img):
-
-
- if not os.path.exists('temp'):
- os.system('mkdir temp')
-
- filename = "Images/Input-Test/1.png"
-
- img.save(filename, "PNG")
-
- midas = torch.hub.load("intel-isl/MiDaS", model_type)
-
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- midas.to(device)
- midas.eval()
-
- midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
-
- if model_type == "DPT_Large" or model_type == "DPT_Hybrid":
- transform = midas_transforms.dpt_transform
- else:
- transform = midas_transforms.small_transform
-
- img = cv2.imread(filename)
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- input_batch = transform(img).to(device)
-
- with torch.no_grad():
- prediction = midas(input_batch)
-
- prediction = torch.nn.functional.interpolate(
- prediction.unsqueeze(1),
- size=img.shape[:2],
- mode="bicubic",
- align_corners=False,
- ).squeeze()
-
- output = prediction.cpu().numpy()
-
- formatted = (output * 255.0 / np.max(output)).astype('uint8')
- out_im = Image.fromarray(formatted)
- out_im.save("Images/Input-Test/1_d.png", "PNG")
-
-
- c_images = '1'
- name_output = 'out'
-
- dict_saved_gans = {'Cycle': '74962_110', 'Cycle(half)': '66942_110','noCycle': '31219_110', 'noCycle-noCr': '92332_110', 'noCycle-noCr-noL1': '82122_110', 'OnlyGen': '70944_110' }
-
- subprocess.run(["python", "main.py", "--gan_type", 'WiggleGAN', "--expandGen", "4", "--expandDis", "4", "--batch_size", c_images, "--cIm", c_images,
- "--visdom", "false", "--wiggleDepth", str(slider), "--seedLoad", dict_saved_gans[gan_type], "--gpu_mode", "false", "--imageDim", dim, "--name_wiggle", name_output
- ])
- subprocess.run(["python", "WiggleResults/split.py", "--dim", dim])
-
- path_video = os.path.join(os.path.dirname(__file__), 'WiggleResults' , name_output + '_0.mp4')
- print(path_video)
-
- return [out_im,f'WiggleResults/' + name_output + '_0.gif', path_video, f'WiggleResults/'+ name_output + '.jpg']
-
-
-with gr.Blocks() as demo:
- gr.Markdown("Start typing below and then click **Run** to see the output.")
-
- ## Depth Estimation
- midas_models = ["DPT_Large","DPT_Hybrid","MiDaS_small"]
- gan_models = ["Cycle","Cycle(half)","noCycle","noCycle-noCr","noCycle-noCr-noL1","OnlyGen"]
- dim = ['256','512','1024']
-
- with gr.Row():
- inp = [gr.inputs.Dropdown(midas_models, default="MiDaS_small", label="Depth estimation model type")]
- inp.append(gr.inputs.Dropdown(gan_models, default="Cycle", label="Different GAN trainings"))
- inp.append(gr.inputs.Dropdown(dim, default="256", label="Wiggle dimension result"))
- inp.append(gr.Slider(1, 15, default=2, label='StepCycles', step=1))
- with gr.Row():
- inp.append(gr.Image(type="pil", label="Input"))
- out = [gr.Image(type="pil", label="depth_estimation")]
- with gr.Row():
- out.append(gr.Image(type="file", label="Output_wiggle_gif"))
- out.append(gr.Video(label="Output_wiggle_video"))
- out.append(gr.Image(type="file", label="Output_images"))
- btn = gr.Button("Calculate depth + Wiggle")
- btn.click(fn=calculate_depth, inputs=inp, outputs=out)
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/task/generspeech.py b/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/task/generspeech.py
deleted file mode 100644
index 7536bf651ebcfeef3df97c752b4801c487671e95..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/task/generspeech.py
+++ /dev/null
@@ -1,271 +0,0 @@
-import matplotlib
-matplotlib.use('Agg')
-from data_gen.tts.data_gen_utils import get_pitch
-from modules.fastspeech.tts_modules import mel2ph_to_dur
-import matplotlib.pyplot as plt
-from utils import audio
-from utils.pitch_utils import norm_interp_f0, denorm_f0, f0_to_coarse
-from vocoders.base_vocoder import get_vocoder_cls
-import json
-from utils.plot import spec_to_figure
-from utils.hparams import hparams
-import torch
-import torch.optim
-import torch.nn.functional as F
-import torch.utils.data
-from modules.GenerSpeech.task.dataset import GenerSpeech_dataset
-from modules.GenerSpeech.model.generspeech import GenerSpeech
-import torch.distributions
-import numpy as np
-from utils.tts_utils import select_attn
-import utils
-import os
-from tasks.tts.fs2 import FastSpeech2Task
-
-class GenerSpeechTask(FastSpeech2Task):
- def __init__(self):
- super(GenerSpeechTask, self).__init__()
- self.dataset_cls = GenerSpeech_dataset
-
- def build_tts_model(self):
- self.model = GenerSpeech(self.phone_encoder)
-
- def build_model(self):
- self.build_tts_model()
- if hparams['load_ckpt'] != '':
- self.load_ckpt(hparams['load_ckpt'], strict=False)
- utils.num_params(self.model)
- return self.model
-
- def run_model(self, model, sample, return_output=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- mel2word = sample['mel2word']
- f0 = sample['f0'] # [B, T_s]
- uv = sample['uv'] # [B, T_s] 0/1
-
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- emo_embed = sample.get('emo_embed')
- output = model(txt_tokens, mel2ph=mel2ph, ref_mel2ph=mel2ph, ref_mel2word=mel2word, spk_embed=spk_embed, emo_embed=emo_embed,
- ref_mels=target, f0=f0, uv=uv, tgt_mels=target, global_steps=self.global_step, infer=False)
- losses = {}
- losses['postflow'] = output['postflow']
- if self.global_step > hparams['forcing']:
- losses['gloss'] = (output['gloss_utter'] + output['gloss_ph'] + output['gloss_word']) / 3
- if self.global_step > hparams['vq_start']:
- losses['vq_loss'] = (output['vq_loss_utter'] + output['vq_loss_ph'] + output['vq_loss_word']) / 3
- losses['ppl_utter'] = output['ppl_utter']
- losses['ppl_ph'] = output['ppl_ph']
- losses['ppl_word'] = output['ppl_word']
- self.add_mel_loss(output['mel_out'], target, losses)
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- output['select_attn'] = select_attn(output['attn_ph'])
-
- if not return_output:
- return losses
- else:
- return losses, output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- encdec_attn = model_out['select_attn']
- mel_out = self.model.out2mel(model_out['mel_out'])
- outputs = utils.tensors_to_scalars(outputs)
- if self.global_step % hparams['valid_infer_interval'] == 0 \
- and batch_idx < hparams['num_valid_plots']:
- vmin = hparams['mel_vmin']
- vmax = hparams['mel_vmax']
- self.plot_mel(batch_idx, sample['mels'], mel_out)
- self.plot_dur(batch_idx, sample, model_out)
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- if self.vocoder is None:
- self.vocoder = get_vocoder_cls(hparams)()
- if self.global_step > 0:
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- emo_embed = sample.get('emo_embed')
- ref_mels = sample['mels']
- mel2ph = sample['mel2ph'] # [B, T_s]
- mel2word = sample['mel2word']
- # with gt duration
- model_out = self.model(sample['txt_tokens'], mel2ph=mel2ph, ref_mel2ph=mel2ph, ref_mel2word=mel2word, spk_embed=spk_embed,
- emo_embed=emo_embed, ref_mels=ref_mels, global_steps=self.global_step, infer=True)
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu())
- self.logger.add_audio(f'wav_gtdur_{batch_idx}', wav_pred, self.global_step,
- hparams['audio_sample_rate'])
- self.logger.add_figure(f'ali_{batch_idx}', spec_to_figure(encdec_attn[0]), self.global_step)
- self.logger.add_figure(
- f'mel_gtdur_{batch_idx}',
- spec_to_figure(model_out['mel_out'][0], vmin, vmax), self.global_step)
- # with pred duration
- model_out = self.model(sample['txt_tokens'], ref_mel2ph=mel2ph, ref_mel2word=mel2word, spk_embed=spk_embed, emo_embed=emo_embed, ref_mels=ref_mels,
- global_steps=self.global_step, infer=True)
- self.logger.add_figure(
- f'mel_{batch_idx}',
- spec_to_figure(model_out['mel_out'][0], vmin, vmax), self.global_step)
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu())
- self.logger.add_audio(f'wav_{batch_idx}', wav_pred, self.global_step, hparams['audio_sample_rate'])
- # gt wav
- if self.global_step <= hparams['valid_infer_interval']:
- mel_gt = sample['mels'][0].cpu()
- wav_gt = self.vocoder.spec2wav(mel_gt)
- self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, 22050)
- return outputs
-
- ############
- # infer
- ############
- def test_step(self, sample, batch_idx):
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- emo_embed = sample.get('emo_embed')
- txt_tokens = sample['txt_tokens']
- mel2ph, uv, f0 = None, None, None
- ref_mel2word = sample['mel2word']
- ref_mel2ph = sample['mel2ph']
- ref_mels = sample['mels']
- if hparams['use_gt_dur']:
- mel2ph = sample['mel2ph']
- if hparams['use_gt_f0']:
- f0 = sample['f0']
- uv = sample['uv']
- global_steps = 200000
- run_model = lambda: self.model(
- txt_tokens, spk_embed=spk_embed, emo_embed=emo_embed, mel2ph=mel2ph, ref_mel2ph=ref_mel2ph, ref_mel2word=ref_mel2word,
- f0=f0, uv=uv, ref_mels=ref_mels, global_steps=global_steps, infer=True)
- outputs = run_model()
- sample['outputs'] = self.model.out2mel(outputs['mel_out'])
- sample['mel2ph_pred'] = outputs['mel2ph']
- if hparams['use_pitch_embed']:
- sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams)
- if hparams['pitch_type'] == 'ph':
- sample['f0'] = torch.gather(F.pad(sample['f0'], [1, 0]), 1, sample['mel2ph'])
- sample['f0_pred'] = outputs.get('f0_denorm')
-
- return self.after_infer(sample)
-
-
-
- def after_infer(self, predictions, sil_start_frame=0):
- predictions = utils.unpack_dict_to_list(predictions)
- assert len(predictions) == 1, 'Only support batch_size=1 in inference.'
- prediction = predictions[0]
- prediction = utils.tensors_to_np(prediction)
- item_name = prediction.get('item_name')
- text = prediction.get('text')
- ph_tokens = prediction.get('txt_tokens')
- mel_gt = prediction["mels"]
- mel2ph_gt = prediction.get("mel2ph")
- mel2ph_gt = mel2ph_gt if mel2ph_gt is not None else None
- mel_pred = prediction["outputs"]
- mel2ph_pred = prediction.get("mel2ph_pred")
- f0_gt = prediction.get("f0")
- f0_pred = prediction.get("f0_pred")
-
- str_phs = None
- if self.phone_encoder is not None and 'txt_tokens' in prediction:
- str_phs = self.phone_encoder.decode(prediction['txt_tokens'], strip_padding=True)
-
- if 'encdec_attn' in prediction:
- encdec_attn = prediction['encdec_attn'] # (1, Tph, Tmel)
- encdec_attn = encdec_attn[encdec_attn.max(-1).sum(-1).argmax(-1)]
- txt_lengths = prediction.get('txt_lengths')
- encdec_attn = encdec_attn.T[:, :txt_lengths]
- else:
- encdec_attn = None
-
- wav_pred = self.vocoder.spec2wav(mel_pred, f0=f0_pred)
- wav_pred[:sil_start_frame * hparams['hop_size']] = 0
- gen_dir = self.gen_dir
- base_fn = f'[{self.results_id:06d}][{item_name}][%s]'
- # if text is not None:
- # base_fn += text.replace(":", "%3A")[:80]
- base_fn = base_fn.replace(' ', '_')
- if not hparams['profile_infer']:
- os.makedirs(gen_dir, exist_ok=True)
- os.makedirs(f'{gen_dir}/wavs', exist_ok=True)
- os.makedirs(f'{gen_dir}/plot', exist_ok=True)
- if hparams.get('save_mel_npy', False):
- os.makedirs(f'{gen_dir}/npy', exist_ok=True)
- if 'encdec_attn' in prediction:
- os.makedirs(f'{gen_dir}/attn_plot', exist_ok=True)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_pred, mel_pred, base_fn % 'TTS', gen_dir, str_phs, mel2ph_pred, encdec_attn]))
-
- if mel_gt is not None and hparams['save_gt']:
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_gt, mel_gt, base_fn % 'Ref', gen_dir, str_phs, mel2ph_gt]))
- if hparams['save_f0']:
- import matplotlib.pyplot as plt
- f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- fig = plt.figure()
- plt.plot(f0_pred_, label=r'$\hat{f_0}$')
- plt.plot(f0_gt_, label=r'$f_0$')
- plt.legend()
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/[F0][{item_name}]{text}.png', format='png')
- plt.close(fig)
-
- print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
- self.results_id += 1
- return {
- 'item_name': item_name,
- 'text': text,
- 'ph_tokens': self.phone_encoder.decode(ph_tokens.tolist()),
- 'wav_fn_pred': base_fn % 'TTS',
- 'wav_fn_gt': base_fn % 'Ref',
- }
-
-
-
- @staticmethod
- def save_result(wav_out, mel, base_fn, gen_dir, str_phs=None, mel2ph=None, alignment=None):
- audio.save_wav(wav_out, f'{gen_dir}/wavs/{base_fn}.wav', hparams['audio_sample_rate'],
- norm=hparams['out_wav_norm'])
- fig = plt.figure(figsize=(14, 10))
- spec_vmin = hparams['mel_vmin']
- spec_vmax = hparams['mel_vmax']
- heatmap = plt.pcolor(mel.T, vmin=spec_vmin, vmax=spec_vmax)
- fig.colorbar(heatmap)
- f0, _ = get_pitch(wav_out, mel, hparams)
- f0 = f0 / 10 * (f0 > 0)
- plt.plot(f0, c='white', linewidth=1, alpha=0.6)
- if mel2ph is not None and str_phs is not None:
- decoded_txt = str_phs.split(" ")
- dur = mel2ph_to_dur(torch.LongTensor(mel2ph)[None, :], len(decoded_txt))[0].numpy()
- dur = [0] + list(np.cumsum(dur))
- for i in range(len(dur) - 1):
- shift = (i % 20) + 1
- plt.text(dur[i], shift, decoded_txt[i])
- plt.hlines(shift, dur[i], dur[i + 1], colors='b' if decoded_txt[i] != '|' else 'black')
- plt.vlines(dur[i], 0, 5, colors='b' if decoded_txt[i] != '|' else 'black',
- alpha=1, linewidth=1)
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/{base_fn}.png', format='png')
- plt.close(fig)
- if hparams.get('save_mel_npy', False):
- np.save(f'{gen_dir}/npy/{base_fn}', mel)
- if alignment is not None:
- fig, ax = plt.subplots(figsize=(12, 16))
- im = ax.imshow(alignment, aspect='auto', origin='lower',
- interpolation='none')
- ax.set_xticks(np.arange(0, alignment.shape[1], 5))
- ax.set_yticks(np.arange(0, alignment.shape[0], 10))
- ax.set_ylabel("$S_p$ index")
- ax.set_xlabel("$H_c$ index")
- fig.colorbar(im, ax=ax)
- fig.savefig(f'{gen_dir}/attn_plot/{base_fn}_attn.png', format='png')
- plt.close(fig)
-
-
-
diff --git a/spaces/Rongjiehuang/ProDiff/tasks/vocoder/dataset_utils.py b/spaces/Rongjiehuang/ProDiff/tasks/vocoder/dataset_utils.py
deleted file mode 100644
index 05dcdaa524efde31575dd30b57b627d22744b53c..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/tasks/vocoder/dataset_utils.py
+++ /dev/null
@@ -1,204 +0,0 @@
-import glob
-import importlib
-import os
-from resemblyzer import VoiceEncoder
-import numpy as np
-import torch
-import torch.distributed as dist
-from torch.utils.data import DistributedSampler
-import utils
-from tasks.base_task import BaseDataset
-from utils.hparams import hparams
-from utils.indexed_datasets import IndexedDataset
-from tqdm import tqdm
-
-class EndlessDistributedSampler(DistributedSampler):
- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
- if num_replicas is None:
- if not dist.is_available():
- raise RuntimeError("Requires distributed package to be available")
- num_replicas = dist.get_world_size()
- if rank is None:
- if not dist.is_available():
- raise RuntimeError("Requires distributed package to be available")
- rank = dist.get_rank()
- self.dataset = dataset
- self.num_replicas = num_replicas
- self.rank = rank
- self.epoch = 0
- self.shuffle = shuffle
-
- g = torch.Generator()
- g.manual_seed(self.epoch)
- if self.shuffle:
- indices = [i for _ in range(1000) for i in torch.randperm(
- len(self.dataset), generator=g).tolist()]
- else:
- indices = [i for _ in range(1000) for i in list(range(len(self.dataset)))]
- indices = indices[:len(indices) // self.num_replicas * self.num_replicas]
- indices = indices[self.rank::self.num_replicas]
- self.indices = indices
-
- def __iter__(self):
- return iter(self.indices)
-
- def __len__(self):
- return len(self.indices)
-
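The index partitioning in `EndlessDistributedSampler.__init__` can be sketched without torch. `partition` below is a hypothetical helper that mirrors the truncate-then-stride logic: repeat the dataset indices, truncate to a multiple of the world size, then take every `num_replicas`-th index starting at `rank`, so each replica gets an equal-length, position-disjoint shard:

```python
# Hypothetical helper mirroring EndlessDistributedSampler's partitioning
# (the real class repeats 1000x and can shuffle with a seeded generator).
def partition(dataset_len, num_replicas, rank, repeats=4):
    indices = [i for _ in range(repeats) for i in range(dataset_len)]
    indices = indices[: len(indices) // num_replicas * num_replicas]
    return indices[rank::num_replicas]

shard0 = partition(5, num_replicas=2, rank=0)
shard1 = partition(5, num_replicas=2, rank=1)
print(len(shard0), len(shard1))
```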
-
-class VocoderDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False):
- super().__init__(shuffle)
- self.hparams = hparams
- self.prefix = prefix
- self.data_dir = hparams['binary_data_dir']
- self.is_infer = prefix == 'test'
- self.batch_max_frames = 0 if self.is_infer else hparams['max_samples'] // hparams['hop_size']
- self.aux_context_window = hparams['aux_context_window']
- self.hop_size = hparams['hop_size']
- if self.is_infer and hparams['test_input_dir'] != '':
- self.indexed_ds, self.sizes = self.load_test_inputs(hparams['test_input_dir'])
- self.avail_idxs = [i for i, _ in enumerate(self.sizes)]
- elif self.is_infer and hparams['test_mel_dir'] != '':
- self.indexed_ds, self.sizes = self.load_mel_inputs(hparams['test_mel_dir'])
- self.avail_idxs = [i for i, _ in enumerate(self.sizes)]
- else:
- self.indexed_ds = None
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
- self.avail_idxs = [idx for idx, s in enumerate(self.sizes) if
- s - 2 * self.aux_context_window > self.batch_max_frames]
- print(f"| {len(self.sizes) - len(self.avail_idxs)} short items are skipped in {prefix} set.")
- self.sizes = [s for idx, s in enumerate(self.sizes) if
- s - 2 * self.aux_context_window > self.batch_max_frames]
-
- def _get_item(self, index):
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- item = self.indexed_ds[index]
- return item
-
- def __getitem__(self, index):
- index = self.avail_idxs[index]
- item = self._get_item(index)
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "mel": torch.FloatTensor(item['mel']),
- "wav": torch.FloatTensor(item['wav'].astype(np.float32)),
- }
- if 'pitch' in item:
- sample['pitch'] = torch.LongTensor(item['pitch'])
- sample['f0'] = torch.FloatTensor(item['f0'])
-
- if hparams.get('use_spk_embed', False):
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams.get('use_emo_embed', False):
- sample["emo_embed"] = torch.Tensor(item['emo_embed'])
-
- return sample
-
- def collater(self, batch):
- if len(batch) == 0:
- return {}
-
- y_batch, c_batch, p_batch, f0_batch = [], [], [], []
- item_name = []
- have_pitch = 'pitch' in batch[0]
- for idx in range(len(batch)):
- item_name.append(batch[idx]['item_name'])
- x, c = (batch[idx]['wav'] if self.hparams['use_wav'] else None), batch[idx]['mel'].squeeze(0)
- if have_pitch:
- p = batch[idx]['pitch']
- f0 = batch[idx]['f0']
- if self.hparams['use_wav']: self._assert_ready_for_upsampling(x, c, self.hop_size, 0)
- if len(c) - 2 * self.aux_context_window > self.batch_max_frames:
- # randomly pick a segment of batch_max_steps length
- batch_max_frames = self.batch_max_frames if self.batch_max_frames != 0 else len(
- c) - 2 * self.aux_context_window - 1
- batch_max_steps = batch_max_frames * self.hop_size
- interval_start = self.aux_context_window
- interval_end = len(c) - batch_max_frames - self.aux_context_window
- start_frame = np.random.randint(interval_start, interval_end)
- start_step = start_frame * self.hop_size
- if self.hparams['use_wav']: y = x[start_step: start_step + batch_max_steps]
- c = c[start_frame - self.aux_context_window:
- start_frame + self.aux_context_window + batch_max_frames]
- if have_pitch:
- p = p[start_frame - self.aux_context_window:
- start_frame + self.aux_context_window + batch_max_frames]
- f0 = f0[start_frame - self.aux_context_window:
- start_frame + self.aux_context_window + batch_max_frames]
- if self.hparams['use_wav']: self._assert_ready_for_upsampling(y, c, self.hop_size, self.aux_context_window)
- else:
- print(f"Removed short sample from batch (mel length={len(c)}).")
- continue
- if self.hparams['use_wav']: y_batch += [y.reshape(-1, 1)] # [(T, 1), (T, 1), ...]
- c_batch += [c] # [(T' C), (T' C), ...]
- if have_pitch:
- p_batch += [p] # [(T' C), (T' C), ...]
- f0_batch += [f0] # [(T' C), (T' C), ...]
-
- # convert each batch to a tensor, assuming every item in the batch has the same length
- if self.hparams['use_wav']: y_batch = utils.collate_2d(y_batch, 0).transpose(2, 1) # (B, 1, T)
- c_batch = utils.collate_2d(c_batch, 0).transpose(2, 1) # (B, C, T')
- if have_pitch:
- p_batch = utils.collate_1d(p_batch, 0) # (B, T')
- f0_batch = utils.collate_1d(f0_batch, 0) # (B, T')
- else:
- p_batch, f0_batch = None, None
-
- # make input noise signal batch tensor
- if self.hparams['use_wav']: z_batch = torch.randn(y_batch.size()) # (B, 1, T)
- else: z_batch = []
- return {
- 'z': z_batch,
- 'mels': c_batch,
- 'wavs': y_batch,
- 'pitches': p_batch,
- 'f0': f0_batch,
- 'item_name': item_name
- }
-
- @staticmethod
- def _assert_ready_for_upsampling(x, c, hop_size, context_window):
- """Assert the audio and feature lengths are correctly adjusted for upsamping."""
- assert len(x) == (len(c) - 2 * context_window) * hop_size
-
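The length relation that `_assert_ready_for_upsampling` checks can be worked through with illustrative numbers: after trimming the context windows on both sides, each remaining mel frame must correspond to exactly `hop_size` audio samples:

```python
# Illustrative values (not the project's actual hyperparameters):
hop_size, context_window = 256, 2
n_frames = 20  # mel frames, including the context windows
n_samples = (n_frames - 2 * context_window) * hop_size
print(n_samples)
```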
- def load_test_inputs(self, test_input_dir, spk_id=0):
- inp_wav_paths = sorted(glob.glob(f'{test_input_dir}/*.wav') + glob.glob(f'{test_input_dir}/**/*.mp3'))
- sizes = []
- items = []
-
- binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- binarization_args = hparams['binarization_args']
-
- for wav_fn in inp_wav_paths:
- item_name = wav_fn[len(test_input_dir) + 1:].replace("/", "_")
- item = binarizer_cls.process_item(
- item_name, wav_fn, binarization_args)
- items.append(item)
- sizes.append(item['len'])
- return items, sizes
-
- def load_mel_inputs(self, test_input_dir, spk_id=0):
- inp_mel_paths = sorted(glob.glob(f'{test_input_dir}/*.npy'))
- sizes = []
- items = []
-
- binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- binarization_args = hparams['binarization_args']
-
- for mel in inp_mel_paths:
- mel_input = np.load(mel)
- mel_input = torch.FloatTensor(mel_input)
- item_name = mel[len(test_input_dir) + 1:].replace("/", "_")
- item = binarizer_cls.process_mel_item(item_name, mel_input, None, binarization_args)
- items.append(item)
- sizes.append(item['len'])
- return items, sizes
diff --git a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hed2image.py b/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hed2image.py
deleted file mode 100644
index d937d665417cd85bbc56b2f16e15993c7395b22d..0000000000000000000000000000000000000000
--- a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hed2image.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hed2image.py
-# The original license file is LICENSE.ControlNet in this repo.
-import gradio as gr
-
-
-def create_demo(process, max_images=12):
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown('## Control Stable Diffusion with HED Maps')
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type='numpy')
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Images',
- minimum=1,
- maximum=max_images,
- value=1,
- step=1)
- image_resolution = gr.Slider(label='Image Resolution',
- minimum=256,
- maximum=768,
- value=512,
- step=256)
- detect_resolution = gr.Slider(label='HED Resolution',
- minimum=128,
- maximum=1024,
- value=512,
- step=1)
- ddim_steps = gr.Slider(label='Steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- scale = gr.Slider(label='Guidance Scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=-1,
- maximum=2147483647,
- step=1,
- randomize=True)
- eta = gr.Number(label='eta (DDIM)', value=0.0)
- a_prompt = gr.Textbox(
- label='Added Prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative Prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result_gallery = gr.Gallery(label='Output',
- show_label=False,
- elem_id='gallery').style(
- grid=2, height='auto')
- ips = [
- input_image, prompt, a_prompt, n_prompt, num_samples,
- image_resolution, detect_resolution, ddim_steps, scale, seed, eta
- ]
- run_button.click(fn=process,
- inputs=ips,
- outputs=[result_gallery],
- api_name='hed')
- return demo
diff --git a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hough2image.py b/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hough2image.py
deleted file mode 100644
index 5fb85cec86b8d0883334981d8810b3206237458b..0000000000000000000000000000000000000000
--- a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_hough2image.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hough2image.py
-# The original license file is LICENSE.ControlNet in this repo.
-import gradio as gr
-
-
-def create_demo(process, max_images=12):
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown('## Control Stable Diffusion with Hough Line Maps')
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type='numpy')
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Images',
- minimum=1,
- maximum=max_images,
- value=1,
- step=1)
- image_resolution = gr.Slider(label='Image Resolution',
- minimum=256,
- maximum=768,
- value=512,
- step=256)
- detect_resolution = gr.Slider(label='Hough Resolution',
- minimum=128,
- maximum=1024,
- value=512,
- step=1)
- value_threshold = gr.Slider(
- label='Hough value threshold (MLSD)',
- minimum=0.01,
- maximum=2.0,
- value=0.1,
- step=0.01)
- distance_threshold = gr.Slider(
- label='Hough distance threshold (MLSD)',
- minimum=0.01,
- maximum=20.0,
- value=0.1,
- step=0.01)
- ddim_steps = gr.Slider(label='Steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- scale = gr.Slider(label='Guidance Scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=-1,
- maximum=2147483647,
- step=1,
- randomize=True)
- eta = gr.Number(label='eta (DDIM)', value=0.0)
- a_prompt = gr.Textbox(
- label='Added Prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative Prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result_gallery = gr.Gallery(label='Output',
- show_label=False,
- elem_id='gallery').style(
- grid=2, height='auto')
- ips = [
- input_image, prompt, a_prompt, n_prompt, num_samples,
- image_resolution, detect_resolution, ddim_steps, scale, seed, eta,
- value_threshold, distance_threshold
- ]
- run_button.click(fn=process,
- inputs=ips,
- outputs=[result_gallery],
- api_name='hough')
- return demo
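For reference, the `ips` list in these demos fixes the positional order in which Gradio hands component values to the `process` callback. A minimal gradio-free stub (hypothetical, for illustration only) of the Hough demo's contract:

```python
# Hypothetical stand-in for the real ControlNet `process` callback.
# Gradio calls fn(*values) with the current value of each component in
# `ips`, in the same order the list was built.
def process(input_image, prompt, a_prompt, n_prompt, num_samples,
            image_resolution, detect_resolution, ddim_steps, scale, seed,
            eta, value_threshold, distance_threshold):
    # A real implementation runs MLSD line detection plus diffusion
    # sampling; here we only echo settings to show how arguments arrive.
    return [f'{num_samples} sample(s) at {image_resolution}px, '
            f'MLSD thresholds=({value_threshold}, {distance_threshold})']

ips = [None, 'a room', 'best quality', 'lowres', 1, 512, 512,
       20, 9.0, -1, 0.0, 0.1, 0.1]
result = process(*ips)
```

Keeping the parameter order of `process` in lockstep with `ips` is the whole contract; a mismatch silently feeds the wrong value to the wrong parameter.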
diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py
deleted file mode 100644
index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000
--- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/position_encoding.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-"""
-Various positional encodings for the transformer.
-"""
-import math
-
-import torch
-from torch import nn
-
-from groundingdino.util.misc import NestedTensor
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- eps = 1e-6
- # if os.environ.get("SHILONG_AMP", None) == '1':
- # eps = 1e-4
- # else:
- # eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
-
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
-
-class PositionEmbeddingSineHW(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(
- self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None
- ):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperatureH = temperatureH
- self.temperatureW = temperatureW
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
-
- # import ipdb; ipdb.set_trace()
-
- if self.normalize:
- eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_x = x_embed[:, :, :, None] / dim_tx
-
- dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_y = y_embed[:, :, :, None] / dim_ty
-
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
-
- # import ipdb; ipdb.set_trace()
-
- return pos
-
-
-class PositionEmbeddingLearned(nn.Module):
- """
- Absolute pos embedding, learned.
- """
-
- def __init__(self, num_pos_feats=256):
- super().__init__()
- self.row_embed = nn.Embedding(50, num_pos_feats)
- self.col_embed = nn.Embedding(50, num_pos_feats)
- self.reset_parameters()
-
- def reset_parameters(self):
- nn.init.uniform_(self.row_embed.weight)
- nn.init.uniform_(self.col_embed.weight)
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- h, w = x.shape[-2:]
- i = torch.arange(w, device=x.device)
- j = torch.arange(h, device=x.device)
- x_emb = self.col_embed(i)
- y_emb = self.row_embed(j)
- pos = (
- torch.cat(
- [
- x_emb.unsqueeze(0).repeat(h, 1, 1),
- y_emb.unsqueeze(1).repeat(1, w, 1),
- ],
- dim=-1,
- )
- .permute(2, 0, 1)
- .unsqueeze(0)
- .repeat(x.shape[0], 1, 1, 1)
- )
- return pos
-
-
-def build_position_encoding(args):
- N_steps = args.hidden_dim // 2
- if args.position_embedding in ("v2", "sine"):
- # TODO find a better way of exposing other arguments
- position_embedding = PositionEmbeddingSineHW(
- N_steps,
- temperatureH=args.pe_temperatureH,
- temperatureW=args.pe_temperatureW,
- normalize=True,
- )
- elif args.position_embedding in ("v3", "learned"):
- position_embedding = PositionEmbeddingLearned(N_steps)
- else:
- raise ValueError(f"not supported {args.position_embedding}")
-
- return position_embedding
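The sine variants above build, per axis, a geometric frequency ladder `temperature ** (2*(i//2)/num_pos_feats)` and interleave sin on even channels with cos on odd channels. A small numpy sketch of that per-axis math (an independent illustration, not this module's API; the function name and shapes are mine):

```python
import numpy as np

def sine_embed_1d(positions, num_pos_feats=64, temperature=10000):
    # Geometric frequency ladder; channels 2k and 2k+1 share one frequency.
    dim_t = np.arange(num_pos_feats, dtype=np.float64)
    dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)
    pos = positions[:, None] / dim_t           # (N, num_pos_feats)
    out = np.empty_like(pos)
    out[:, 0::2] = np.sin(pos[:, 0::2])        # even channels: sin
    out[:, 1::2] = np.cos(pos[:, 1::2])        # odd channels: cos
    return out

emb = sine_embed_1d(np.arange(1.0, 5.0), num_pos_feats=8)
# The first channel pair uses frequency 1, exactly as in
# PositionEmbeddingSine before the sin/cos stack is flattened.
```

`PositionEmbeddingSineHW` differs only in letting the H and W axes use different temperatures before the two per-axis encodings are concatenated.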
diff --git a/spaces/Silentlin/DiffSinger/vocoders/vocoder_utils.py b/spaces/Silentlin/DiffSinger/vocoders/vocoder_utils.py
deleted file mode 100644
index db5d5ca1765928e4b047db04435a8a39b52592ca..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/vocoders/vocoder_utils.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import librosa
-
-from utils.hparams import hparams
-import numpy as np
-
-
-def denoise(wav, v=0.1):
- spec = librosa.stft(y=wav, n_fft=hparams['fft_size'], hop_length=hparams['hop_size'],
- win_length=hparams['win_size'], pad_mode='constant')
- spec_m = np.abs(spec)
- spec_m = np.clip(spec_m - v, a_min=0, a_max=None)
- spec_a = np.angle(spec)
-
- return librosa.istft(spec_m * np.exp(1j * spec_a), hop_length=hparams['hop_size'],
- win_length=hparams['win_size'])
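`denoise` above is plain spectral subtraction: lower the STFT magnitude by a fixed floor `v`, keep the phase, and resynthesize. The core magnitude step in isolation, with numpy (the librosa STFT/ISTFT round trip is assumed, not reproduced):

```python
import numpy as np

def subtract_floor(spec, v=0.1):
    # Clamp magnitudes below the noise floor v to zero while
    # preserving each bin's original phase.
    mag = np.clip(np.abs(spec) - v, a_min=0, a_max=None)
    phase = np.angle(spec)
    return mag * np.exp(1j * phase)

spec = np.array([0.05 + 0j, 1.0 + 0j, 0.0 + 0.5j])
out = subtract_floor(spec, v=0.1)
# Bins quieter than v vanish; louder bins lose exactly v of magnitude.
```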
diff --git a/spaces/SouthCity/ShuruiXu/README.md b/spaces/SouthCity/ShuruiXu/README.md
deleted file mode 100644
index 01bac90e809880f1ae2f10527edaede5a0535b51..0000000000000000000000000000000000000000
--- a/spaces/SouthCity/ShuruiXu/README.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-title: ChatImprovement
-emoji: 😻
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-duplicated_from: qingxu98/gpt-academic
----
-
-
-# ChatGPT Academic Optimization
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or function plugins, feel free to open an issue or pull request (to the `dev` branch).**
-
-```
-The code borrows design ideas from several other excellent projects, mainly:
-
-# Reference project 1: ChuanhuChatGPT, from which we borrowed the method of reading the OpenAI JSON, the way query history is recorded, and the use of the gradio queue
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Reference project 2: mdtex2html, from which we borrowed the formula-handling method
-https://github.com/polarwinkel/mdtex2html
-
-This project uses OpenAI's gpt-3.5-turbo model; hoping gpt-4 relaxes its access requirements soon 😂
-```
-
-> **Note**
->
-> 1. Note that only function plugins (buttons) marked in "red" support reading files. Plugin support for pdf/word files is still being improved and needs help from more developers.
->
-> 2. The role of every file in this project is documented in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to have GPT regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
->
-> 3. If you are not used to the partly Chinese function names, comments, or interface, you can click the relevant function plugin at any time and have ChatGPT generate a pure-English version of the project source code.
-
-
-
-Feature | Description
--- | ---
-One-click polishing | Supports one-click polishing and one-click grammar checking of papers
-One-click Chinese-English translation | One-click translation between Chinese and English
-One-click code explanation | Displays and explains code correctly
-Custom shortcut keys | Supports user-defined shortcut keys
-Proxy server configuration | Supports configuring a proxy server
-Modular design | Supports custom high-level experimental features and [function plugins]; plugins support [hot reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-Self-analysis | [Function plugin] One click to understand the source code of this project
-Project analysis | [Function plugin] One click to analyze other Python/C/C++/Java project trees
-Paper reading | [Function plugin] One click to interpret a full LaTeX paper and generate an abstract
-Batch comment generation | [Function plugin] One click to batch-generate function comments
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
-arxiv assistant | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF in one click
-Formula display | Shows both the TeX form and the rendered form of formulas
-Image display | Can display images in markdown
-Multithreaded function plugin support | Supports multithreaded calls to ChatGPT for one-click processing of large volumes of text or code
-Markdown tables in GPT output | Renders markdown tables produced by GPT
-…… | ……
-
-
-
-
-- New interface
-
-
-
-
-
-- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, freeing up your clipboard
-
-
-
-
-- Polishing/proofreading
-
-
-
-
-
-- Supports markdown tables in GPT output
-
-
-
-
-- If the output contains formulas, they are shown in both TeX form and rendered form for easy copying and reading
-
-
-
-
-
-- Too lazy to read the project code? Just feed the whole repository to ChatGPT
-
-
-
-
-## Run Directly (Windows, Linux or MacOS)
-
-### 1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-### 2. Configure the API key and proxy settings
-
-In `config.py`, configure the overseas proxy and your OpenAI API key, as explained below:
-```
-1. If you are in mainland China, you need to set up an overseas proxy to use the OpenAI API smoothly; read config.py carefully for setup details (1. change USE_PROXY to True; 2. modify proxies as documented).
-2. Configure your OpenAI API key. You need to register on the OpenAI website and obtain an API key; once you have it, set it in the config.py file.
-3. Issues related to proxy networks (network timeouts, proxy not working) are collected at https://github.com/binary-husky/chatgpt_academic/issues/1
-```
-(P.S. When the program runs, it first checks for a private configuration file named `config_private.py` and uses it to override the same-named options in `config.py`. If you understand this loading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information safer.)
-
-
-### 3. Install dependencies
-```sh
-# (Option 1) Recommended
-python -m pip install -r requirements.txt
-
-# (Option 2) If you use Anaconda, the steps are similar:
-# (Option 2.1) conda create -n gptac_venv python=3.11
-# (Option 2.2) conda activate gptac_venv
-# (Option 2.3) python -m pip install -r requirements.txt
-
-# Note: use the official pip source or the Aliyun pip source; other pip sources (such as Tsinghua's) may cause problems. To switch sources temporarily:
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-### 4. Run
-```sh
-python main.py
-```
-
-### 5. Test the experimental features
-```
-- Test C++ project header analysis
-    In the input area, enter `./crazy_functions/test_project/cpp/libJPG`, then click "[Experimental] Parse entire C++ project (enter the project root path in the input box)"
-- Test abstract writing for a LaTeX project
-    In the input area, enter `./crazy_functions/test_project/latex/attention`, then click "[Experimental] Read a TeX paper and write an abstract (enter the project root path in the input box)"
-- Test Python project analysis
-    In the input area, enter `./crazy_functions/test_project/python/dqn`, then click "[Experimental] Parse entire Python project (enter the project root path in the input box)"
-- Test self code interpretation
-    Click "[Experimental] Parse and deconstruct this project itself"
-- Test the experimental function template (asks GPT what happened in history on this day); you can use this function as a template to implement more complex features
-    Click "[Experimental] Experimental function template"
-```
-
-## Using Docker (Linux)
-
-``` sh
-# Download the project
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# Configure the overseas proxy and the OpenAI API key
-Edit config.py with any text editor
-# Build
-docker build -t gpt-academic .
-# Run
-docker run --rm -it --net=host gpt-academic
-
-# Test the experimental features
-## Test self code interpretation
-Click "[Experimental] Parse and deconstruct this project itself"
-## Test the experimental function template (asks GPT what happened in history on this day); you can use this function as a template to implement more complex features
-Click "[Experimental] Experimental function template"
-## (Note that when running in Docker, pay extra attention to the program's file access permissions)
-## Test C++ project header analysis
-In the input area, enter ./crazy_functions/test_project/cpp/libJPG , then click "[Experimental] Parse entire C++ project (enter the project root path in the input box)"
-## Test abstract writing for a LaTeX project
-In the input area, enter ./crazy_functions/test_project/latex/attention , then click "[Experimental] Read a TeX paper and write an abstract (enter the project root path in the input box)"
-## Test Python project analysis
-In the input area, enter ./crazy_functions/test_project/python/dqn , then click "[Experimental] Parse entire Python project (enter the project root path in the input box)"
-
-```
-
-## Other Deployment Options
-- Using WSL2 (Windows Subsystem for Linux)
-See [deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-- Remote deployment behind nginx
-See [deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC)
-
-
-## Adding Custom Convenience Buttons (custom academic shortcuts)
-Open functional.py, add an entry as follows, then restart the program. (If the button has already been added successfully and is visible, both the prefix and the suffix support hot modification and take effect without restarting.)
-For example:
-```
-"超级英译中": {
-
-    # Prefix: added before your input, e.g. used to describe your request, such as translation, code explanation, polishing, etc.
- "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
-
-    # Suffix: added after your input, e.g. combined with the prefix to wrap your input in quotation marks.
- "Suffix": "",
-
-},
-```
-
-
-
-
-
-If you invent more useful academic shortcut keys, feel free to open an issue or pull request!
-
-## Configuring a Proxy
-### Method 1: the usual way
-Edit ```config.py``` so that the port matches your proxy software
-
-
-
-
-
-
-Once configured, you can test whether the proxy works with the following command; if everything is normal, the code below will print the location of your proxy server:
-```
-python check_proxy.py
-```
-### Method 2: beginner tutorial
-[Beginner tutorial](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
-## Compatibility Tests
-
-### Image display:
-
-
-
-
-
-
-### If a program can read and analyze itself:
-
-
-
-
-
-
-
-
-
-### Analysis of arbitrary Python/C++ projects:
-
-
-
-
-
-
-
-
-### One-click LaTeX paper comprehension and abstract generation
-
-
-
-
-### Automatic report generation
-
-
-### Modular feature design
-
-
-
-
-
-## Todo:
-
-- (Top priority) Call the web API of another open-source project, text-generation-webui, to use other LLM models
-- When summarizing the source code of large projects, handle over-long text and token overflow (the current approach simply bisects and discards the overflow, which is too crude and loses a lot of useful information)
-
-
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/jac.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/jac.py
deleted file mode 100644
index 2ca4920194f7d7a3fbc3063224b99af5c1dcb84e..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/jac.py
+++ /dev/null
@@ -1,370 +0,0 @@
-import json
-import logging
-import os
-from pathlib import Path
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Iterator,
- List,
- Optional,
- Type,
- TypeVar,
- Union,
-)
-
-from docarray.store.abstract_doc_store import AbstractDocStore
-from docarray.store.helpers import (
- _BufferedCachingRequestReader,
- get_version_info,
- raise_req_error,
-)
-from docarray.utils._internal.cache import _get_cache_path
-from docarray.utils._internal.misc import import_library
-
-if TYPE_CHECKING: # pragma: no cover
- import io
-
- from docarray import BaseDoc, DocList
-
-if TYPE_CHECKING:
- import hubble
- from hubble import Client as HubbleClient
- from hubble.client.endpoints import EndpointsV2
-else:
- hubble = import_library('hubble', raise_error=True)
- HubbleClient = hubble.Client
- EndpointsV2 = hubble.client.endpoints.EndpointsV2
-
-
-def _get_length_from_summary(summary: List[Dict]) -> Optional[int]:
- """Get the length from summary."""
- for item in summary:
- if 'Length' == item['name']:
- return item['value']
- raise ValueError('Length not found in summary')
-
-
-def _get_raw_summary(self: 'DocList') -> List[Dict[str, Any]]:
- items: List[Dict[str, Any]] = [
- dict(
- name='Type',
- value=self.__class__.__name__,
- description='The type of the DocList',
- ),
- dict(
- name='Length',
- value=len(self),
- description='The length of the DocList',
- ),
- dict(
- name='Homogenous Documents',
- value=True,
- description='Whether all documents are of the same structure, attributes',
- ),
- dict(
- name='Fields',
- value=tuple(self[0].__class__.__fields__.keys()),
- description='The fields of the Document',
- ),
- dict(
- name='Multimodal dataclass',
- value=True,
- description='Whether all documents are multimodal',
- ),
- ]
-
- return items
-
-
-SelfJACDocStore = TypeVar('SelfJACDocStore', bound='JACDocStore')
-
-
-class JACDocStore(AbstractDocStore):
- """Class to push and pull [`DocList`][docarray.DocList] to and from Jina AI Cloud."""
-
- @staticmethod
- @hubble.login_required
- def list(namespace: str = '', show_table: bool = False) -> List[str]:
- """List all available arrays in the cloud.
-
- :param namespace: Not supported for Jina AI Cloud.
- :param show_table: if true, show the table of the arrays.
- :returns: List of available DocList's names.
- """
- if len(namespace) > 0:
- logging.warning('Namespace is not supported for Jina AI Cloud.')
- from rich import print
-
- result = []
- from rich import box
- from rich.table import Table
-
- resp = HubbleClient(jsonify=True).list_artifacts(
- filter={'type': 'documentArray'},
- sort={'createdAt': 1},
- pageSize=10000,
- )
-
- table = Table(
- title=f'You have {resp["meta"]["total"]} DocList on the cloud',
- box=box.SIMPLE,
- highlight=True,
- )
- table.add_column('Name')
- table.add_column('Length')
- table.add_column('Access')
- table.add_column('Created at', justify='center')
- table.add_column('Updated at', justify='center')
-
- for docs in resp['data']:
- result.append(docs['name'])
-
- table.add_row(
- docs['name'],
- str(_get_length_from_summary(docs['metaData'].get('summary', []))),
- docs['visibility'],
- docs['createdAt'],
- docs['updatedAt'],
- )
-
- if show_table:
- print(table)
- return result
-
- @staticmethod
- @hubble.login_required
- def delete(name: str, missing_ok: bool = True) -> bool:
- """
- Delete a [`DocList`][docarray.DocList] from the cloud.
- :param name: the name of the DocList to delete.
- :param missing_ok: if true, do not raise an error if the DocList does not exist.
- :return: True if the DocList was deleted, False if it did not exist.
- """
- try:
- HubbleClient(jsonify=True).delete_artifact(name=name)
- except hubble.excepts.RequestedEntityNotFoundError:
- if missing_ok:
- return False
- else:
- raise
- return True
-
- @staticmethod
- @hubble.login_required
- def push(
- docs: 'DocList',
- name: str,
- public: bool = True,
- show_progress: bool = False,
- branding: Optional[Dict] = None,
- ) -> Dict:
- """Push this [`DocList`][docarray.DocList] object to Jina AI Cloud
-
- !!! note
- - Push with the same ``name`` will override the existing content.
- - Kinda like a public clipboard where everyone can override anyone's content.
-          So to make your content survive longer, you may want to use a longer & more complicated name.
-        - The lifetime of the content is not promised atm, could be a day, could be a week. Do not use it for
-          persistence. Only use this for temporary transmission/storage/clipboard.
-
- :param docs: The `DocList` to push.
- :param name: A name that can later be used to retrieve this `DocList`.
- :param public: By default, anyone can pull a `DocList` if they know its name.
- Setting this to false will restrict access to only the creator.
- :param show_progress: If true, a progress bar will be displayed.
- :param branding: A dictionary of branding information to be sent to Jina Cloud. e.g. {"icon": "emoji", "background": "#fff"}
- """
- import requests
- import urllib3
-
- delimiter = os.urandom(32)
-
- data, ctype = urllib3.filepost.encode_multipart_formdata(
- {
- 'file': (
- 'DocumentArray',
- delimiter,
- ),
- 'name': name,
- 'type': 'documentArray',
- 'public': public,
- 'metaData': json.dumps(
- {
- 'summary': _get_raw_summary(docs),
- 'branding': branding,
- 'version': get_version_info(),
- },
- sort_keys=True,
- ),
- }
- )
-
- headers = {
- 'Content-Type': ctype,
- }
-
- auth_token = hubble.get_token()
- if auth_token:
- headers['Authorization'] = f'token {auth_token}'
-
- _head, _tail = data.split(delimiter)
-
- def gen():
- yield _head
- binary_stream = docs._to_binary_stream(
- protocol='protobuf', compress='gzip', show_progress=show_progress
- )
- while True:
- try:
- yield next(binary_stream)
- except StopIteration:
- break
- yield _tail
-
- response = requests.post(
- HubbleClient()._base_url + EndpointsV2.upload_artifact,
- data=gen(),
- headers=headers,
- )
-
- if response.ok:
- return response.json()['data']
- else:
- if response.status_code >= 400 and 'readableMessage' in response.json():
- response.reason = response.json()['readableMessage']
- raise_req_error(response)
-
- @classmethod
- @hubble.login_required
- def push_stream(
- cls: Type[SelfJACDocStore],
- docs: Iterator['BaseDoc'],
- name: str,
- public: bool = True,
- show_progress: bool = False,
- branding: Optional[Dict] = None,
- ) -> Dict:
- """Push a stream of documents to Jina AI Cloud
-
- !!! note
- - Push with the same ``name`` will override the existing content.
- - Kinda like a public clipboard where everyone can override anyone's content.
-          So to make your content survive longer, you may want to use a longer & more complicated name.
-        - The lifetime of the content is not promised atm, could be a day, could be a week. Do not use it for
-          persistence. Only use this for temporary transmission/storage/clipboard.
-
- :param docs: a stream of documents
- :param name: A name that can later be used to retrieve this `DocList`.
- :param public: By default, anyone can pull a `DocList` if they know its name.
- Setting this to false will restrict access to only the creator.
- :param show_progress: If true, a progress bar will be displayed.
- :param branding: A dictionary of branding information to be sent to Jina Cloud. e.g. {"icon": "emoji", "background": "#fff"}
- """
- from docarray import DocList
-
- # This is a temporary solution to push a stream of documents
- # The memory footprint is not ideal
- # But it must be done this way for now because Hubble expects to know the length of the DocList
- # before it starts receiving the documents
- first_doc = next(docs)
- _docs = DocList[first_doc.__class__]([first_doc]) # type: ignore
- for doc in docs:
- _docs.append(doc)
- return cls.push(_docs, name, public, show_progress, branding)
-
- @staticmethod
- @hubble.login_required
- def pull(
- cls: Type['DocList'],
- name: str,
- show_progress: bool = False,
- local_cache: bool = True,
- ) -> 'DocList':
- """Pull a [`DocList`][docarray.DocList] from Jina AI Cloud to local.
-
- :param name: the upload name set during `.push`
- :param show_progress: if true, display a progress bar.
- :param local_cache: store the downloaded DocList to local folder
- :return: a [`DocList`][docarray.DocList] object
- """
- from docarray import DocList
-
- return DocList[cls.doc_type]( # type: ignore
- JACDocStore.pull_stream(cls, name, show_progress, local_cache)
- )
-
- @staticmethod
- @hubble.login_required
- def pull_stream(
- cls: Type['DocList'],
- name: str,
- show_progress: bool = False,
- local_cache: bool = False,
- ) -> Iterator['BaseDoc']:
- """Pull a [`DocList`][docarray.DocList] from Jina AI Cloud to local.
-
- :param name: the upload name set during `.push`
- :param show_progress: if true, display a progress bar.
- :param local_cache: store the downloaded DocList to local folder
- :return: An iterator of Documents
- """
- import requests
-
- headers = {}
-
- auth_token = hubble.get_token()
-
- if auth_token:
- headers['Authorization'] = f'token {auth_token}'
-
- url = HubbleClient()._base_url + EndpointsV2.download_artifact + f'?name={name}'
- response = requests.get(url, headers=headers)
-
- if response.ok:
- url = response.json()['data']['download']
- else:
- response.raise_for_status()
-
- with requests.get(
- url,
- stream=True,
- ) as r:
- from contextlib import nullcontext
-
- r.raise_for_status()
- save_name = name.replace('/', '_')
-
- tmp_cache_file = Path(f'/tmp/{save_name}.docs')
- _source: Union[
- _BufferedCachingRequestReader, io.BufferedReader
- ] = _BufferedCachingRequestReader(r, tmp_cache_file)
-
- cache_file = _get_cache_path() / f'{save_name}.docs'
- if local_cache and cache_file.exists():
- _cache_len = cache_file.stat().st_size
- if _cache_len == int(r.headers['Content-length']):
- if show_progress:
- print(f'Loading from local cache {cache_file}')
- _source = open(cache_file, 'rb')
- r.close()
-
- docs = cls._load_binary_stream(
- nullcontext(_source), # type: ignore
- protocol='protobuf',
- compress='gzip',
- show_progress=show_progress,
- )
- try:
- while True:
- yield next(docs)
- except StopIteration:
- pass
-
- if local_cache:
- if isinstance(_source, _BufferedCachingRequestReader):
- Path(_get_cache_path()).mkdir(parents=True, exist_ok=True)
- tmp_cache_file.rename(cache_file)
- else:
- _source.close()
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/misc.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/misc.py
deleted file mode 100644
index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/misc.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import collections.abc
-import functools
-import itertools
-import subprocess
-import warnings
-from collections import abc
-from importlib import import_module
-from inspect import getfullargspec
-from itertools import repeat
-
-
-# From PyTorch internals
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
-
-
-def is_str(x):
- """Whether the input is an string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def import_modules_from_strings(imports, allow_failed_imports=False):
- """Import modules from the given list of strings.
-
- Args:
- imports (list | str | None): The given module names to be imported.
- allow_failed_imports (bool): If True, the failed imports will return
-            None. Otherwise, an ImportError is raised. Default: False.
-
- Returns:
- list[module] | module | None: The imported modules.
-
- Examples:
- >>> osp, sys = import_modules_from_strings(
- ... ['os.path', 'sys'])
- >>> import os.path as osp_
- >>> import sys as sys_
- >>> assert osp == osp_
- >>> assert sys == sys_
- """
- if not imports:
- return
- single_import = False
- if isinstance(imports, str):
- single_import = True
- imports = [imports]
- if not isinstance(imports, list):
- raise TypeError(
- f'custom_imports must be a list but got type {type(imports)}')
- imported = []
- for imp in imports:
- if not isinstance(imp, str):
- raise TypeError(
- f'{imp} is of type {type(imp)} and cannot be imported.')
- try:
- imported_tmp = import_module(imp)
- except ImportError:
- if allow_failed_imports:
- warnings.warn(f'{imp} failed to import and is ignored.',
- UserWarning)
- imported_tmp = None
- else:
- raise ImportError
- imported.append(imported_tmp)
- if single_import:
- imported = imported[0]
- return imported
-
-
-def iter_cast(inputs, dst_type, return_type=None):
- """Cast elements of an iterable object into some type.
-
- Args:
- inputs (Iterable): The input object.
- dst_type (type): Destination type.
- return_type (type, optional): If specified, the output object will be
- converted to this type, otherwise an iterator.
-
- Returns:
- iterator or specified type: The converted object.
- """
- if not isinstance(inputs, abc.Iterable):
- raise TypeError('inputs must be an iterable object')
- if not isinstance(dst_type, type):
- raise TypeError('"dst_type" must be a valid type')
-
- out_iterable = map(dst_type, inputs)
-
- if return_type is None:
- return out_iterable
- else:
- return return_type(out_iterable)
-
-
-def list_cast(inputs, dst_type):
- """Cast elements of an iterable object into a list of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=list)
-
-
-def tuple_cast(inputs, dst_type):
- """Cast elements of an iterable object into a tuple of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=tuple)
-
-
-def is_seq_of(seq, expected_type, seq_type=None):
- """Check whether it is a sequence of some type.
-
- Args:
- seq (Sequence): The sequence to be checked.
- expected_type (type): Expected type of sequence items.
- seq_type (type, optional): Expected sequence type.
-
- Returns:
- bool: Whether the sequence is valid.
- """
- if seq_type is None:
- exp_seq_type = abc.Sequence
- else:
- assert isinstance(seq_type, type)
- exp_seq_type = seq_type
- if not isinstance(seq, exp_seq_type):
- return False
- for item in seq:
- if not isinstance(item, expected_type):
- return False
- return True
-
-
-def is_list_of(seq, expected_type):
- """Check whether it is a list of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=list)
-
-
-def is_tuple_of(seq, expected_type):
- """Check whether it is a tuple of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=tuple)
-
-
-def slice_list(in_list, lens):
- """Slice a list into several sub lists by a list of given length.
-
- Args:
- in_list (list): The list to be sliced.
- lens(int or list): The expected length of each out list.
-
- Returns:
- list: A list of sliced list.
- """
- if isinstance(lens, int):
- assert len(in_list) % lens == 0
- lens = [lens] * int(len(in_list) / lens)
- if not isinstance(lens, list):
- raise TypeError('"indices" must be an integer or a list of integers')
- elif sum(lens) != len(in_list):
- raise ValueError('sum of lens and list length does not '
- f'match: {sum(lens)} != {len(in_list)}')
- out_list = []
- idx = 0
- for i in range(len(lens)):
- out_list.append(in_list[idx:idx + lens[i]])
- idx += lens[i]
- return out_list
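A quick self-contained check of `slice_list`'s two modes (the function is restated here, matching the docstring above, so the snippet runs on its own):

```python
def slice_list(in_list, lens):
    # int -> equal-sized chunks; list -> chunks of the given lengths.
    if isinstance(lens, int):
        assert len(in_list) % lens == 0
        lens = [lens] * (len(in_list) // lens)
    out, idx = [], 0
    for n in lens:
        out.append(in_list[idx:idx + n])
        idx += n
    return out

assert slice_list([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]
assert slice_list([1, 2, 3, 4], [1, 3]) == [[1], [2, 3, 4]]
```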
-
-
-def concat_list(in_list):
- """Concatenate a list of list into a single list.
-
- Args:
- in_list (list): The list of list to be merged.
-
- Returns:
- list: The concatenated flat list.
- """
- return list(itertools.chain(*in_list))
-
-
-def check_prerequisites(
- prerequisites,
- checker,
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
- 'found, please install them first.'): # yapf: disable
- """A decorator factory to check if prerequisites are satisfied.
-
- Args:
-        prerequisites (str or list[str]): Prerequisites to be checked.
- checker (callable): The checker method that returns True if a
-            prerequisite is met, False otherwise.
- msg_tmpl (str): The message template with two variables.
-
- Returns:
- decorator: A specific decorator.
- """
-
- def wrap(func):
-
- @functools.wraps(func)
- def wrapped_func(*args, **kwargs):
- requirements = [prerequisites] if isinstance(
- prerequisites, str) else prerequisites
- missing = []
- for item in requirements:
- if not checker(item):
- missing.append(item)
- if missing:
- print(msg_tmpl.format(', '.join(missing), func.__name__))
-                raise RuntimeError('Prerequisites not met.')
- else:
- return func(*args, **kwargs)
-
- return wrapped_func
-
- return wrap
-
-
-def _check_py_package(package):
- try:
- import_module(package)
- except ImportError:
- return False
- else:
- return True
-
-
-def _check_executable(cmd):
- if subprocess.call(f'which {cmd}', shell=True) != 0:
- return False
- else:
- return True
-
-
-def requires_package(prerequisites):
- """A decorator to check if some python packages are installed.
-
- Example:
- >>> @requires_package('numpy')
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- array([0.])
- >>> @requires_package(['numpy', 'non_package'])
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- ImportError
- """
- return check_prerequisites(prerequisites, checker=_check_py_package)
-
-
-def requires_executable(prerequisites):
- """A decorator to check if some executable files are installed.
-
- Example:
- >>> @requires_executable('ffmpeg')
- >>> func(arg1, args):
- >>> print(1)
- 1
- """
- return check_prerequisites(prerequisites, checker=_check_executable)
-
-
-def deprecated_api_warning(name_dict, cls_name=None):
- """A decorator to check if some arguments are deprecated and try to replace
- the deprecated src_arg_name with dst_arg_name.
-
- Args:
- name_dict(dict):
- key (str): Deprecated argument names.
- val (str): Expected argument names.
-
- Returns:
- func: New function.
- """
-
- def api_warning_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get name of the function
- func_name = old_func.__name__
- if cls_name is not None:
- func_name = f'{cls_name}.{func_name}'
- if args:
- arg_names = args_info.args[:len(args)]
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in arg_names:
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
- if kwargs:
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in kwargs:
-
- assert dst_arg_name not in kwargs, (
- f'The expected behavior is to replace '
- f'the deprecated key `{src_arg_name}` to '
- f'new key `{dst_arg_name}`, but got them '
- f'in the arguments at the same time, which '
- f'is confusing. `{src_arg_name}` will be '
- f'deprecated in the future, please '
- f'use `{dst_arg_name}` instead.')
-
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
-
- # apply converted arguments to the decorated method
- output = old_func(*args, **kwargs)
- return output
-
- return new_func
-
- return api_warning_wrapper
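A trimmed, runnable sketch of the keyword-argument path of `deprecated_api_warning` (the positional-argument branch is omitted for brevity):

```python
import functools
import warnings

def deprecated_api_warning(name_dict, cls_name=None):
    """Warn on deprecated kwarg names and forward them under the new name."""
    def api_warning_wrapper(old_func):
        @functools.wraps(old_func)
        def new_func(*args, **kwargs):
            func_name = (old_func.__name__ if cls_name is None
                         else f'{cls_name}.{old_func.__name__}')
            for src, dst in name_dict.items():
                if src in kwargs:
                    assert dst not in kwargs, (
                        f'got both `{src}` and `{dst}`, which is confusing')
                    warnings.warn(f'"{src}" is deprecated in `{func_name}`, '
                                  f'please use "{dst}" instead')
                    kwargs[dst] = kwargs.pop(src)
            return old_func(*args, **kwargs)
        return new_func
    return api_warning_wrapper

@deprecated_api_warning({'size': 'shape'})
def make(shape=None):
    return shape

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    print(make(size=(2, 3)))  # (2, 3) -- forwarded as shape=(2, 3)
    print(len(caught))        # 1
```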
-
-
-def is_method_overridden(method, base_class, derived_class):
- """Check if a method of base class is overridden in derived class.
-
- Args:
- method (str): the method name to check.
- base_class (type): the class of the base class.
- derived_class (type | Any): the class or instance of the derived class.
- """
- assert isinstance(base_class, type), \
- "base_class must be a class, not an instance; please pass the class itself."
-
- if not isinstance(derived_class, type):
- derived_class = derived_class.__class__
-
- base_method = getattr(base_class, method)
- derived_method = getattr(derived_class, method)
- return derived_method != base_method
-
-
-def has_method(obj: object, method: str) -> bool:
- """Check whether the object has a method.
-
- Args:
- method (str): The method name to check.
- obj (object): The object to check.
-
- Returns:
- bool: True if the object has the method else False.
- """
- return hasattr(obj, method) and callable(getattr(obj, method))
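The two reflection helpers that close this module are easy to verify standalone:

```python
def is_method_overridden(method, base_class, derived_class):
    """Check if a method of base_class is overridden in derived_class."""
    assert isinstance(base_class, type)
    if not isinstance(derived_class, type):
        derived_class = derived_class.__class__
    return getattr(derived_class, method) != getattr(base_class, method)

def has_method(obj, method):
    """True if obj has a callable attribute with the given name."""
    return hasattr(obj, method) and callable(getattr(obj, method))

class Base:
    def run(self): ...
    def stop(self): ...

class Child(Base):
    def run(self): ...  # overrides Base.run; Base.stop is inherited as-is

print(is_method_overridden('run', Base, Child))   # True
print(is_method_overridden('stop', Base, Child))  # False
print(has_method(Child(), 'run'))                 # True
```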
diff --git a/spaces/Swamyajulu/MyGenAIChatBot/README.md b/spaces/Swamyajulu/MyGenAIChatBot/README.md
deleted file mode 100644
index df2689db9596bb579489010866c2c6895158d4d3..0000000000000000000000000000000000000000
--- a/spaces/Swamyajulu/MyGenAIChatBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MyGenAIChatBot
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TH5314/newbing/src/components/chat-notification.tsx b/spaces/TH5314/newbing/src/components/chat-notification.tsx
deleted file mode 100644
index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/components/chat-notification.tsx
+++ /dev/null
@@ -1,77 +0,0 @@
-import { useEffect } from 'react'
-import Image from 'next/image'
-
-import IconWarning from '@/assets/images/warning.svg'
-import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types'
-import { ExternalLink } from './external-link'
-import { useBing } from '@/lib/hooks/use-bing'
-
-export interface ChatNotificationProps extends Pick<ReturnType<typeof useBing>, 'bot'> {
- message?: ChatMessageModel
-}
-
-function getAction(error: ChatError, reset: () => void) {
- if (error.code === ErrorCode.THROTTLE_LIMIT) {
- reset()
- return (
-
- 你已达到每日最大发送消息次数,请
更换账号 或隔一天后重试
-
- )
- }
- if (error.code === ErrorCode.BING_FORBIDDEN) {
- return (
-
- 你的账号已在黑名单,请尝试更换账号及申请解封
-
- )
- }
- if (error.code === ErrorCode.CONVERSATION_LIMIT) {
- return (
-
- 当前话题已中止,请点
-
重新开始
- 开启新的对话
-
- )
- }
- if (error.code === ErrorCode.BING_CAPTCHA) {
- return (
-
- 点击通过人机验证
-
- )
- }
- if (error.code === ErrorCode.BING_UNAUTHORIZED) {
- reset()
- return (
- 没有获取到身份信息或身份信息失效,点此重新设置
- )
- }
- return error.message
-}
-
-export function ChatNotification({ message, bot }: ChatNotificationProps) {
- useEffect(() => {
- window.scrollBy(0, 2000)
- }, [message])
-
- if (!message?.error) return
-
- return (
-
-
-
-
-
-
- {getAction(message.error, () => bot.resetConversation())}
-
-
-
-
-
- )
-}
diff --git a/spaces/Tabaxi3K/FrankenFlic/app.py b/spaces/Tabaxi3K/FrankenFlic/app.py
deleted file mode 100644
index d2356e1a8df0e7492ef0e408259a9fee947dae8e..0000000000000000000000000000000000000000
--- a/spaces/Tabaxi3K/FrankenFlic/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import streamlit as st
-from nltk.tokenize import sent_tokenize
-
-def local_css(file_name):
- with open(file_name) as f:
- st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
-
-def remote_css(url):
- st.markdown(f'<link href="{url}" rel="stylesheet">', unsafe_allow_html=True)
-
-local_css("style.css")
-
-
-desc = "Uses a neural network trained on over *5000* horror movies to generate sometimes good, *mostly non-sensical and humorous* horror movie plots after being given a movie title. This program attempts its best guess at generating a movie based on whatever title you give it. "
-st.title('FrankenFlic')
-st.markdown("Note: this app is still in development, so you may receive cut-off responses or other errors. If you get a blank response, please try another movie title. Please be kind! ", unsafe_allow_html=True)
-st.write("Created by: [Caleb Choe Discord: Tabaxi3K#3514 Instagram: @creativeusername2327](https://www.instagram.com/creativeusername2327/)")
-st.write(desc)
-
-st.subheader("Enter the name of your film and hit enter:")
-prompt = st.text_input("") + " is a movie about"
-
-import requests
-import json
-import time
-import re
-payload = json.dumps(prompt)
-
-from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
-punkt_param = PunktParameters()
-punkt_param.abbrev_types = set(['dr', 'vs', 'mr', 'mrs', 'prof', 'inc'])
-sentence_splitter = PunktSentenceTokenizer(punkt_param)
-
-
-API_URL = "https://api-inference.huggingface.co/models/Tabaxi3K/FrankenFlic"
-# headers = {"Content-Type": "application/json", "Authorization": "Bearer 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"}
-headers = {"Authorization": "Bearer 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"}
-
-def query(payload):
- response = requests.post(API_URL, headers=headers, json=payload)
- return response.json()
-
-output = query(prompt)
-
-if st.button('Scare Me'):
- try:
- time.sleep(1)
- movie = output[0]["generated_text"]
- splitted = movie.split(".",1)
- st.subheader(prompt[:-17])
- st.markdown('.'.join(word for word in splitted[:1]))
- except Exception:
- st.write("Our servers are dusting off some cobwebs, can you please try your response again or use a different movie name?")
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/misc.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/misc.py
deleted file mode 100644
index bd191c4e14f389d6d0f799dfef9c5c0221a8c568..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/misc.py
+++ /dev/null
@@ -1,735 +0,0 @@
-import contextlib
-import errno
-import getpass
-import hashlib
-import io
-import logging
-import os
-import posixpath
-import shutil
-import stat
-import sys
-import sysconfig
-import urllib.parse
-from io import StringIO
-from itertools import filterfalse, tee, zip_longest
-from types import TracebackType
-from typing import (
- Any,
- BinaryIO,
- Callable,
- ContextManager,
- Dict,
- Generator,
- Iterable,
- Iterator,
- List,
- Optional,
- TextIO,
- Tuple,
- Type,
- TypeVar,
- Union,
- cast,
-)
-
-from pip._vendor.pyproject_hooks import BuildBackendHookCaller
-from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed
-
-from pip import __version__
-from pip._internal.exceptions import CommandError, ExternallyManagedEnvironment
-from pip._internal.locations import get_major_minor_version
-from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.virtualenv import running_under_virtualenv
-
-__all__ = [
- "rmtree",
- "display_path",
- "backup_dir",
- "ask",
- "splitext",
- "format_size",
- "is_installable_dir",
- "normalize_path",
- "renames",
- "get_prog",
- "captured_stdout",
- "ensure_dir",
- "remove_auth_from_url",
- "check_externally_managed",
- "ConfiguredBuildBackendHookCaller",
-]
-
-logger = logging.getLogger(__name__)
-
-T = TypeVar("T")
-ExcInfo = Tuple[Type[BaseException], BaseException, TracebackType]
-VersionInfo = Tuple[int, int, int]
-NetlocTuple = Tuple[str, Tuple[Optional[str], Optional[str]]]
-
-
-def get_pip_version() -> str:
- pip_pkg_dir = os.path.join(os.path.dirname(__file__), "..", "..")
- pip_pkg_dir = os.path.abspath(pip_pkg_dir)
-
- return "pip {} from {} (python {})".format(
- __version__,
- pip_pkg_dir,
- get_major_minor_version(),
- )
-
-
-def normalize_version_info(py_version_info: Tuple[int, ...]) -> Tuple[int, int, int]:
- """
- Convert a tuple of ints representing a Python version to one of length
- three.
-
- :param py_version_info: a tuple of ints representing a Python version,
- or None to specify no version. The tuple can have any length.
-
- :return: a tuple of length three if `py_version_info` is non-None.
- Otherwise, return `py_version_info` unchanged (i.e. None).
- """
- if len(py_version_info) < 3:
- py_version_info += (3 - len(py_version_info)) * (0,)
- elif len(py_version_info) > 3:
- py_version_info = py_version_info[:3]
-
- return cast("VersionInfo", py_version_info)
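The padding/truncation behavior of `normalize_version_info`, sketched without the vendored `cast`:

```python
def normalize_version_info(py_version_info):
    """Pad or truncate a version tuple to exactly three ints."""
    if len(py_version_info) < 3:
        py_version_info += (3 - len(py_version_info)) * (0,)
    elif len(py_version_info) > 3:
        py_version_info = py_version_info[:3]
    return py_version_info

print(normalize_version_info((3,)))           # (3, 0, 0)
print(normalize_version_info((3, 11)))        # (3, 11, 0)
print(normalize_version_info((3, 11, 4, 9)))  # (3, 11, 4)
```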
-
-
-def ensure_dir(path: str) -> None:
- """os.makedirs without EEXIST."""
- try:
- os.makedirs(path)
- except OSError as e:
- # Windows can raise spurious ENOTEMPTY errors. See #6426.
- if e.errno != errno.EEXIST and e.errno != errno.ENOTEMPTY:
- raise
-
-
-def get_prog() -> str:
- try:
- prog = os.path.basename(sys.argv[0])
- if prog in ("__main__.py", "-c"):
- return f"{sys.executable} -m pip"
- else:
- return prog
- except (AttributeError, TypeError, IndexError):
- pass
- return "pip"
-
-
-# Retry every half second for up to 3 seconds
-# Tenacity raises RetryError by default, explicitly raise the original exception
-@retry(reraise=True, stop=stop_after_delay(3), wait=wait_fixed(0.5))
-def rmtree(dir: str, ignore_errors: bool = False) -> None:
- if sys.version_info >= (3, 12):
- shutil.rmtree(dir, ignore_errors=ignore_errors, onexc=rmtree_errorhandler)
- else:
- shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler)
-
-
-def rmtree_errorhandler(
- func: Callable[..., Any], path: str, exc_info: Union[ExcInfo, BaseException]
-) -> None:
- """On Windows, the files in .svn are read-only, so when rmtree() tries to
- remove them, an exception is thrown. We catch that here, remove the
- read-only attribute, and hopefully continue without problems."""
- try:
- has_attr_readonly = not (os.stat(path).st_mode & stat.S_IWRITE)
- except OSError:
- # it's equivalent to os.path.exists
- return
-
- if has_attr_readonly:
- # convert to read/write
- os.chmod(path, stat.S_IWRITE)
- # use the original function to repeat the operation
- func(path)
- return
- else:
- raise
-
-
-def display_path(path: str) -> str:
- """Gives the display value for a given path, making it relative to cwd
- if possible."""
- path = os.path.normcase(os.path.abspath(path))
- if path.startswith(os.getcwd() + os.path.sep):
- path = "." + path[len(os.getcwd()) :]
- return path
-
-
-def backup_dir(dir: str, ext: str = ".bak") -> str:
- """Figure out the name of a directory to back up the given dir to
- (adding .bak, .bak2, etc)"""
- n = 1
- extension = ext
- while os.path.exists(dir + extension):
- n += 1
- extension = ext + str(n)
- return dir + extension
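`backup_dir` probes the filesystem until it finds a free suffix; for a path that does not exist (as assumed for the example path below), the first candidate wins:

```python
import os

def backup_dir(dir, ext=".bak"):
    """Figure out a non-existing backup name: dir.bak, dir.bak2, ..."""
    n = 1
    extension = ext
    while os.path.exists(dir + extension):
        n += 1
        extension = ext + str(n)
    return dir + extension

# With no colliding path on disk, the first suffix is returned unchanged.
print(backup_dir("/tmp/nonexistent_proj_zzz"))  # /tmp/nonexistent_proj_zzz.bak
```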
-
-
-def ask_path_exists(message: str, options: Iterable[str]) -> str:
- for action in os.environ.get("PIP_EXISTS_ACTION", "").split():
- if action in options:
- return action
- return ask(message, options)
-
-
-def _check_no_input(message: str) -> None:
- """Raise an error if no input is allowed."""
- if os.environ.get("PIP_NO_INPUT"):
- raise Exception(
- f"No input was expected ($PIP_NO_INPUT set); question: {message}"
- )
-
-
-def ask(message: str, options: Iterable[str]) -> str:
- """Ask the message interactively, with the given possible responses"""
- while 1:
- _check_no_input(message)
- response = input(message)
- response = response.strip().lower()
- if response not in options:
- print(
- "Your response ({!r}) was not one of the expected responses: "
- "{}".format(response, ", ".join(options))
- )
- else:
- return response
-
-
-def ask_input(message: str) -> str:
- """Ask for input interactively."""
- _check_no_input(message)
- return input(message)
-
-
-def ask_password(message: str) -> str:
- """Ask for a password interactively."""
- _check_no_input(message)
- return getpass.getpass(message)
-
-
-def strtobool(val: str) -> int:
- """Convert a string representation of truth to true (1) or false (0).
-
- True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
- are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
- 'val' is anything else.
- """
- val = val.lower()
- if val in ("y", "yes", "t", "true", "on", "1"):
- return 1
- elif val in ("n", "no", "f", "false", "off", "0"):
- return 0
- else:
- raise ValueError(f"invalid truth value {val!r}")
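`strtobool` accepts a fixed vocabulary and is case-insensitive; anything else raises:

```python
def strtobool(val):
    """Convert a string representation of truth to 1 or 0."""
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    elif val in ("n", "no", "f", "false", "off", "0"):
        return 0
    else:
        raise ValueError(f"invalid truth value {val!r}")

print(strtobool("Yes"), strtobool("off"))  # 1 0
```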
-
-
-def format_size(bytes: float) -> str:
- if bytes > 1000 * 1000:
- return "{:.1f} MB".format(bytes / 1000.0 / 1000)
- elif bytes > 10 * 1000:
- return "{} kB".format(int(bytes / 1000))
- elif bytes > 1000:
- return "{:.1f} kB".format(bytes / 1000.0)
- else:
- return "{} bytes".format(int(bytes))
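Note the thresholds in `format_size` are decimal (1000-based, kB/MB), not binary (1024-based):

```python
def format_size(bytes):
    if bytes > 1000 * 1000:
        return "{:.1f} MB".format(bytes / 1000.0 / 1000)
    elif bytes > 10 * 1000:
        return "{} kB".format(int(bytes / 1000))
    elif bytes > 1000:
        return "{:.1f} kB".format(bytes / 1000.0)
    else:
        return "{} bytes".format(int(bytes))

print(format_size(512))      # 512 bytes
print(format_size(2048))     # 2.0 kB
print(format_size(20000))    # 20 kB
print(format_size(1500000))  # 1.5 MB
```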
-
-
-def tabulate(rows: Iterable[Iterable[Any]]) -> Tuple[List[str], List[int]]:
- """Return a list of formatted rows and a list of column sizes.
-
- For example::
-
- >>> tabulate([['foobar', 2000], [0xdeadbeef]])
- (['foobar 2000', '3735928559'], [10, 4])
- """
- rows = [tuple(map(str, row)) for row in rows]
- sizes = [max(map(len, col)) for col in zip_longest(*rows, fillvalue="")]
- table = [" ".join(map(str.ljust, row, sizes)).rstrip() for row in rows]
- return table, sizes
-
-
-def is_installable_dir(path: str) -> bool:
- """Is path a directory containing pyproject.toml or setup.py?
-
- If pyproject.toml exists, this is a PEP 517 project. Otherwise we look for
- a legacy setuptools layout by identifying setup.py. We don't check for the
- setup.cfg because using it without setup.py is only available for PEP 517
- projects, which are already covered by the pyproject.toml check.
- """
- if not os.path.isdir(path):
- return False
- if os.path.isfile(os.path.join(path, "pyproject.toml")):
- return True
- if os.path.isfile(os.path.join(path, "setup.py")):
- return True
- return False
-
-
-def read_chunks(
- file: BinaryIO, size: int = io.DEFAULT_BUFFER_SIZE
-) -> Generator[bytes, None, None]:
- """Yield pieces of data from a file-like object until EOF."""
- while True:
- chunk = file.read(size)
- if not chunk:
- break
- yield chunk
-
-
-def normalize_path(path: str, resolve_symlinks: bool = True) -> str:
- """
- Convert a path to its canonical, case-normalized, absolute version.
-
- """
- path = os.path.expanduser(path)
- if resolve_symlinks:
- path = os.path.realpath(path)
- else:
- path = os.path.abspath(path)
- return os.path.normcase(path)
-
-
-def splitext(path: str) -> Tuple[str, str]:
- """Like os.path.splitext, but take off .tar too"""
- base, ext = posixpath.splitext(path)
- if base.lower().endswith(".tar"):
- ext = base[-4:] + ext
- base = base[:-4]
- return base, ext
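The `.tar` special case is what distinguishes this `splitext` from `os.path.splitext`:

```python
import posixpath

def splitext(path):
    """Like os.path.splitext, but take off .tar too."""
    base, ext = posixpath.splitext(path)
    if base.lower().endswith(".tar"):
        ext = base[-4:] + ext
        base = base[:-4]
    return base, ext

print(splitext("pkg-1.0.tar.gz"))  # ('pkg-1.0', '.tar.gz')
print(splitext("pkg-1.0.zip"))     # ('pkg-1.0', '.zip')
```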
-
-
-def renames(old: str, new: str) -> None:
- """Like os.renames(), but handles renaming across devices."""
- # Implementation borrowed from os.renames().
- head, tail = os.path.split(new)
- if head and tail and not os.path.exists(head):
- os.makedirs(head)
-
- shutil.move(old, new)
-
- head, tail = os.path.split(old)
- if head and tail:
- try:
- os.removedirs(head)
- except OSError:
- pass
-
-
-def is_local(path: str) -> bool:
- """
- Return True if path is within sys.prefix, if we're running in a virtualenv.
-
- If we're not in a virtualenv, all paths are considered "local."
-
- Caution: this function assumes the head of path has been normalized
- with normalize_path.
- """
- if not running_under_virtualenv():
- return True
- return path.startswith(normalize_path(sys.prefix))
-
-
-def write_output(msg: Any, *args: Any) -> None:
- logger.info(msg, *args)
-
-
-class StreamWrapper(StringIO):
- orig_stream: TextIO
-
- @classmethod
- def from_stream(cls, orig_stream: TextIO) -> "StreamWrapper":
- ret = cls()
- ret.orig_stream = orig_stream
- return ret
-
- # compileall.compile_dir() needs stdout.encoding to print to stdout
- # type ignore is because TextIOBase.encoding is writeable
- @property
- def encoding(self) -> str: # type: ignore
- return self.orig_stream.encoding
-
-
-@contextlib.contextmanager
-def captured_output(stream_name: str) -> Generator[StreamWrapper, None, None]:
- """Return a context manager used by captured_stdout/stdin/stderr
- that temporarily replaces the sys stream *stream_name* with a StringIO.
-
- Taken from Lib/support/__init__.py in the CPython repo.
- """
- orig_stdout = getattr(sys, stream_name)
- setattr(sys, stream_name, StreamWrapper.from_stream(orig_stdout))
- try:
- yield getattr(sys, stream_name)
- finally:
- setattr(sys, stream_name, orig_stdout)
-
-
-def captured_stdout() -> ContextManager[StreamWrapper]:
- """Capture the output of sys.stdout:
-
- with captured_stdout() as stdout:
- print('hello')
- self.assertEqual(stdout.getvalue(), 'hello\n')
-
- Taken from Lib/support/__init__.py in the CPython repo.
- """
- return captured_output("stdout")
-
-
-def captured_stderr() -> ContextManager[StreamWrapper]:
- """
- See captured_stdout().
- """
- return captured_output("stderr")
-
-
-# Simulates an enum
-def enum(*sequential: Any, **named: Any) -> Type[Any]:
- enums = dict(zip(sequential, range(len(sequential))), **named)
- reverse = {value: key for key, value in enums.items()}
- enums["reverse_mapping"] = reverse
- return type("Enum", (), enums)
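The `enum` simulator above numbers positional names sequentially, accepts explicit values as keywords, and attaches a reverse lookup table:

```python
def enum(*sequential, **named):
    """Build an ad-hoc enum class with a reverse_mapping attribute."""
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = {value: key for key, value in enums.items()}
    enums["reverse_mapping"] = reverse
    return type("Enum", (), enums)

Color = enum("RED", "GREEN", BLUE=10)
print(Color.RED, Color.GREEN, Color.BLUE)  # 0 1 10
print(Color.reverse_mapping[10])           # BLUE
```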
-
-
-def build_netloc(host: str, port: Optional[int]) -> str:
- """
- Build a netloc from a host-port pair
- """
- if port is None:
- return host
- if ":" in host:
- # Only wrap host with square brackets when it is IPv6
- host = f"[{host}]"
- return f"{host}:{port}"
-
-
-def build_url_from_netloc(netloc: str, scheme: str = "https") -> str:
- """
- Build a full URL from a netloc.
- """
- if netloc.count(":") >= 2 and "@" not in netloc and "[" not in netloc:
- # It must be a bare IPv6 address, so wrap it with brackets.
- netloc = f"[{netloc}]"
- return f"{scheme}://{netloc}"
-
-
-def parse_netloc(netloc: str) -> Tuple[Optional[str], Optional[int]]:
- """
- Return the host-port pair from a netloc.
- """
- url = build_url_from_netloc(netloc)
- parsed = urllib.parse.urlparse(url)
- return parsed.hostname, parsed.port
-
-
-def split_auth_from_netloc(netloc: str) -> NetlocTuple:
- """
- Parse out and remove the auth information from a netloc.
-
- Returns: (netloc, (username, password)).
- """
- if "@" not in netloc:
- return netloc, (None, None)
-
- # Split from the right because that's how urllib.parse.urlsplit()
- # behaves if more than one @ is present (which can be checked using
- # the password attribute of urlsplit()'s return value).
- auth, netloc = netloc.rsplit("@", 1)
- pw: Optional[str] = None
- if ":" in auth:
- # Split from the left because that's how urllib.parse.urlsplit()
- # behaves if more than one : is present (which again can be checked
- # using the password attribute of the return value)
- user, pw = auth.split(":", 1)
- else:
- user, pw = auth, None
-
- user = urllib.parse.unquote(user)
- if pw is not None:
- pw = urllib.parse.unquote(pw)
-
- return netloc, (user, pw)
-
-
-def redact_netloc(netloc: str) -> str:
- """
- Replace the sensitive data in a netloc with "****", if it exists.
-
- For example:
- - "user:pass@example.com" returns "user:****@example.com"
- - "accesstoken@example.com" returns "****@example.com"
- """
- netloc, (user, password) = split_auth_from_netloc(netloc)
- if user is None:
- return netloc
- if password is None:
- user = "****"
- password = ""
- else:
- user = urllib.parse.quote(user)
- password = ":****"
- return "{user}{password}@{netloc}".format(
- user=user, password=password, netloc=netloc
- )
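The auth-splitting and redaction pair above behaves as follows when run standalone:

```python
import urllib.parse

def split_auth_from_netloc(netloc):
    """Return (netloc, (username, password)), mirroring urlsplit's @/: rules."""
    if "@" not in netloc:
        return netloc, (None, None)
    # rsplit on @ because urlsplit keeps everything after the last @ as host
    auth, netloc = netloc.rsplit("@", 1)
    if ":" in auth:
        user, pw = auth.split(":", 1)
    else:
        user, pw = auth, None
    user = urllib.parse.unquote(user)
    if pw is not None:
        pw = urllib.parse.unquote(pw)
    return netloc, (user, pw)

def redact_netloc(netloc):
    """Mask credentials: keep the username only when a password is present."""
    netloc, (user, password) = split_auth_from_netloc(netloc)
    if user is None:
        return netloc
    if password is None:
        user = "****"
        password = ""
    else:
        user = urllib.parse.quote(user)
        password = ":****"
    return f"{user}{password}@{netloc}"

print(redact_netloc("user:secret@example.com"))  # user:****@example.com
print(redact_netloc("token@example.com"))        # ****@example.com
print(redact_netloc("example.com"))              # example.com
```

An auth string with no `:` is treated as a bare access token, which is why it is masked entirely.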
-
-
-def _transform_url(
- url: str, transform_netloc: Callable[[str], Tuple[Any, ...]]
-) -> Tuple[str, NetlocTuple]:
- """Transform and replace netloc in a url.
-
- transform_netloc is a function taking the netloc and returning a
- tuple. The first element of this tuple is the new netloc. The
- entire tuple is returned.
-
- Returns a tuple containing the transformed url as item 0 and the
- original tuple returned by transform_netloc as item 1.
- """
- purl = urllib.parse.urlsplit(url)
- netloc_tuple = transform_netloc(purl.netloc)
- # stripped url
- url_pieces = (purl.scheme, netloc_tuple[0], purl.path, purl.query, purl.fragment)
- surl = urllib.parse.urlunsplit(url_pieces)
- return surl, cast("NetlocTuple", netloc_tuple)
-
-
-def _get_netloc(netloc: str) -> NetlocTuple:
- return split_auth_from_netloc(netloc)
-
-
-def _redact_netloc(netloc: str) -> Tuple[str]:
- return (redact_netloc(netloc),)
-
-
-def split_auth_netloc_from_url(
- url: str,
-) -> Tuple[str, str, Tuple[Optional[str], Optional[str]]]:
- """
- Parse a url into separate netloc, auth, and url with no auth.
-
- Returns: (url_without_auth, netloc, (username, password))
- """
- url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc)
- return url_without_auth, netloc, auth
-
-
-def remove_auth_from_url(url: str) -> str:
- """Return a copy of url with 'username:password@' removed."""
- # username/pass params are passed to subversion through flags
- # and are not recognized in the url.
- return _transform_url(url, _get_netloc)[0]
-
-
-def redact_auth_from_url(url: str) -> str:
- """Replace the password in a given url with ****."""
- return _transform_url(url, _redact_netloc)[0]
-
-
-class HiddenText:
- def __init__(self, secret: str, redacted: str) -> None:
- self.secret = secret
- self.redacted = redacted
-
- def __repr__(self) -> str:
- return "".format(str(self))
-
- def __str__(self) -> str:
- return self.redacted
-
- # This is useful for testing.
- def __eq__(self, other: Any) -> bool:
- if type(self) != type(other):
- return False
-
- # The string being used for redaction doesn't also have to match,
- # just the raw, original string.
- return self.secret == other.secret
-
-
-def hide_value(value: str) -> HiddenText:
- return HiddenText(value, redacted="****")
-
-
-def hide_url(url: str) -> HiddenText:
- redacted = redact_auth_from_url(url)
- return HiddenText(url, redacted=redacted)
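`HiddenText` redacts on display but compares on the raw secret; a standalone sketch of the class and `hide_value`:

```python
class HiddenText:
    """Wrap a secret so that printing/logging shows only the redaction."""
    def __init__(self, secret, redacted):
        self.secret = secret
        self.redacted = redacted

    def __repr__(self):
        return "<HiddenText {!r}>".format(str(self))

    def __str__(self):
        return self.redacted

    def __eq__(self, other):
        # Only the raw secret matters for equality, not the redacted form.
        return type(self) == type(other) and self.secret == other.secret

def hide_value(value):
    return HiddenText(value, redacted="****")

token = hide_value("s3cret-token")
print(token)                                # ****
print(repr(token))                          # <HiddenText '****'>
print(token == hide_value("s3cret-token"))  # True
```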
-
-
-def protect_pip_from_modification_on_windows(modifying_pip: bool) -> None:
- """Protection of pip.exe from modification on Windows
-
- On Windows, any operation modifying pip should be run as:
- python -m pip ...
- """
- pip_names = [
- "pip",
- f"pip{sys.version_info.major}",
- f"pip{sys.version_info.major}.{sys.version_info.minor}",
- ]
-
- # See https://github.com/pypa/pip/issues/1299 for more discussion
- should_show_use_python_msg = (
- modifying_pip and WINDOWS and os.path.basename(sys.argv[0]) in pip_names
- )
-
- if should_show_use_python_msg:
- new_command = [sys.executable, "-m", "pip"] + sys.argv[1:]
- raise CommandError(
- "To modify pip, please run the following command:\n{}".format(
- " ".join(new_command)
- )
- )
-
-
-def check_externally_managed() -> None:
- """Check whether the current environment is externally managed.
-
- If the ``EXTERNALLY-MANAGED`` config file is found, the current environment
- is considered externally managed, and an ExternallyManagedEnvironment is
- raised.
- """
- if running_under_virtualenv():
- return
- marker = os.path.join(sysconfig.get_path("stdlib"), "EXTERNALLY-MANAGED")
- if not os.path.isfile(marker):
- return
- raise ExternallyManagedEnvironment.from_config(marker)
-
-
-def is_console_interactive() -> bool:
- """Is this console interactive?"""
- return sys.stdin is not None and sys.stdin.isatty()
-
-
-def hash_file(path: str, blocksize: int = 1 << 20) -> Tuple[Any, int]:
- """Return (hash, length) for path using hashlib.sha256()"""
-
- h = hashlib.sha256()
- length = 0
- with open(path, "rb") as f:
- for block in read_chunks(f, size=blocksize):
- length += len(block)
- h.update(block)
- return h, length
-
-
-def pairwise(iterable: Iterable[Any]) -> Iterator[Tuple[Any, Any]]:
- """
- Return paired elements.
-
- For example:
- s -> (s0, s1), (s2, s3), (s4, s5), ...
- """
- iterable = iter(iterable)
- return zip_longest(iterable, iterable)
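Because this `pairwise` feeds the same iterator to `zip_longest` twice, it yields non-overlapping pairs and pads an odd-length input with `None` (unlike the overlapping `itertools.pairwise`):

```python
from itertools import zip_longest

def pairwise(iterable):
    """s -> (s0, s1), (s2, s3), (s4, s5), ..."""
    iterable = iter(iterable)
    return zip_longest(iterable, iterable)

print(list(pairwise([1, 2, 3, 4])))  # [(1, 2), (3, 4)]
print(list(pairwise("abcde")))       # [('a', 'b'), ('c', 'd'), ('e', None)]
```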
-
-
-def partition(
- pred: Callable[[T], bool],
- iterable: Iterable[T],
-) -> Tuple[Iterable[T], Iterable[T]]:
- """
- Use a predicate to partition entries into false entries and true entries,
- like
-
- partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
- """
- t1, t2 = tee(iterable)
- return filterfalse(pred, t1), filter(pred, t2)
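The `tee`-based `partition` splits one iterable into two lazy views, false entries first:

```python
from itertools import filterfalse, tee

def partition(pred, iterable):
    """partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9"""
    t1, t2 = tee(iterable)
    return filterfalse(pred, t1), filter(pred, t2)

evens, odds = partition(lambda n: n % 2, range(10))
print(list(evens))  # [0, 2, 4, 6, 8]
print(list(odds))   # [1, 3, 5, 7, 9]
```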
-
-
-class ConfiguredBuildBackendHookCaller(BuildBackendHookCaller):
- def __init__(
- self,
- config_holder: Any,
- source_dir: str,
- build_backend: str,
- backend_path: Optional[str] = None,
- runner: Optional[Callable[..., None]] = None,
- python_executable: Optional[str] = None,
- ):
- super().__init__(
- source_dir, build_backend, backend_path, runner, python_executable
- )
- self.config_holder = config_holder
-
- def build_wheel(
- self,
- wheel_directory: str,
- config_settings: Optional[Dict[str, Union[str, List[str]]]] = None,
- metadata_directory: Optional[str] = None,
- ) -> str:
- cs = self.config_holder.config_settings
- return super().build_wheel(
- wheel_directory, config_settings=cs, metadata_directory=metadata_directory
- )
-
- def build_sdist(
- self,
- sdist_directory: str,
- config_settings: Optional[Dict[str, Union[str, List[str]]]] = None,
- ) -> str:
- cs = self.config_holder.config_settings
- return super().build_sdist(sdist_directory, config_settings=cs)
-
- def build_editable(
- self,
- wheel_directory: str,
- config_settings: Optional[Dict[str, Union[str, List[str]]]] = None,
- metadata_directory: Optional[str] = None,
- ) -> str:
- cs = self.config_holder.config_settings
- return super().build_editable(
- wheel_directory, config_settings=cs, metadata_directory=metadata_directory
- )
-
- def get_requires_for_build_wheel(
- self, config_settings: Optional[Dict[str, Union[str, List[str]]]] = None
- ) -> List[str]:
- cs = self.config_holder.config_settings
- return super().get_requires_for_build_wheel(config_settings=cs)
-
- def get_requires_for_build_sdist(
- self, config_settings: Optional[Dict[str, Union[str, List[str]]]] = None
- ) -> List[str]:
- cs = self.config_holder.config_settings
- return super().get_requires_for_build_sdist(config_settings=cs)
-
- def get_requires_for_build_editable(
- self, config_settings: Optional[Dict[str, Union[str, List[str]]]] = None
- ) -> List[str]:
- cs = self.config_holder.config_settings
- return super().get_requires_for_build_editable(config_settings=cs)
-
- def prepare_metadata_for_build_wheel(
- self,
- metadata_directory: str,
- config_settings: Optional[Dict[str, Union[str, List[str]]]] = None,
- _allow_fallback: bool = True,
- ) -> str:
- cs = self.config_holder.config_settings
- return super().prepare_metadata_for_build_wheel(
- metadata_directory=metadata_directory,
- config_settings=cs,
- _allow_fallback=_allow_fallback,
- )
-
- def prepare_metadata_for_build_editable(
- self,
- metadata_directory: str,
- config_settings: Optional[Dict[str, Union[str, List[str]]]] = None,
- _allow_fallback: bool = True,
- ) -> str:
- cs = self.config_holder.config_settings
- return super().prepare_metadata_for_build_editable(
- metadata_directory=metadata_directory,
- config_settings=cs,
- _allow_fallback=_allow_fallback,
- )
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/cmdline.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/cmdline.py
deleted file mode 100644
index eec1775ba5fcba678f014f8a977259675e9c1854..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/cmdline.py
+++ /dev/null
@@ -1,668 +0,0 @@
-"""
- pygments.cmdline
- ~~~~~~~~~~~~~~~~
-
- Command line interface.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-import shutil
-import argparse
-from textwrap import dedent
-
-from pip._vendor.pygments import __version__, highlight
-from pip._vendor.pygments.util import ClassNotFound, OptionError, docstring_headline, \
- guess_decode, guess_decode_from_terminal, terminal_encoding, \
- UnclosingTextIOWrapper
-from pip._vendor.pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
- load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
-from pip._vendor.pygments.lexers.special import TextLexer
-from pip._vendor.pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
-from pip._vendor.pygments.formatters import get_all_formatters, get_formatter_by_name, \
- load_formatter_from_file, get_formatter_for_filename, find_formatter_class
-from pip._vendor.pygments.formatters.terminal import TerminalFormatter
-from pip._vendor.pygments.formatters.terminal256 import Terminal256Formatter, TerminalTrueColorFormatter
-from pip._vendor.pygments.filters import get_all_filters, find_filter_class
-from pip._vendor.pygments.styles import get_all_styles, get_style_by_name
-
-
-def _parse_options(o_strs):
- opts = {}
- if not o_strs:
- return opts
- for o_str in o_strs:
- if not o_str.strip():
- continue
- o_args = o_str.split(',')
- for o_arg in o_args:
- o_arg = o_arg.strip()
- try:
- o_key, o_val = o_arg.split('=', 1)
- o_key = o_key.strip()
- o_val = o_val.strip()
- except ValueError:
- opts[o_arg] = True
- else:
- opts[o_key] = o_val
- return opts
-
-
-def _parse_filters(f_strs):
- filters = []
- if not f_strs:
- return filters
- for f_str in f_strs:
- if ':' in f_str:
- fname, fopts = f_str.split(':', 1)
- filters.append((fname, _parse_options([fopts])))
- else:
- filters.append((f_str, {}))
- return filters
-
-
-def _print_help(what, name):
- try:
- if what == 'lexer':
- cls = get_lexer_by_name(name)
- print("Help on the %s lexer:" % cls.name)
- print(dedent(cls.__doc__))
- elif what == 'formatter':
- cls = find_formatter_class(name)
- print("Help on the %s formatter:" % cls.name)
- print(dedent(cls.__doc__))
- elif what == 'filter':
- cls = find_filter_class(name)
- print("Help on the %s filter:" % name)
- print(dedent(cls.__doc__))
- return 0
- except (AttributeError, ValueError):
- print("%s not found!" % what, file=sys.stderr)
- return 1
-
-
-def _print_list(what):
- if what == 'lexer':
- print()
- print("Lexers:")
- print("~~~~~~~")
-
- info = []
- for fullname, names, exts, _ in get_all_lexers():
- tup = (', '.join(names)+':', fullname,
- exts and '(filenames ' + ', '.join(exts) + ')' or '')
- info.append(tup)
- info.sort()
- for i in info:
- print(('* %s\n %s %s') % i)
-
- elif what == 'formatter':
- print()
- print("Formatters:")
- print("~~~~~~~~~~~")
-
- info = []
- for cls in get_all_formatters():
- doc = docstring_headline(cls)
- tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
- '(filenames ' + ', '.join(cls.filenames) + ')' or '')
- info.append(tup)
- info.sort()
- for i in info:
- print(('* %s\n %s %s') % i)
-
- elif what == 'filter':
- print()
- print("Filters:")
- print("~~~~~~~~")
-
- for name in get_all_filters():
- cls = find_filter_class(name)
- print("* " + name + ':')
- print(" %s" % docstring_headline(cls))
-
- elif what == 'style':
- print()
- print("Styles:")
- print("~~~~~~~")
-
- for name in get_all_styles():
- cls = get_style_by_name(name)
- print("* " + name + ':')
- print(" %s" % docstring_headline(cls))
-
-
-def _print_list_as_json(requested_items):
- import json
- result = {}
- if 'lexer' in requested_items:
- info = {}
- for fullname, names, filenames, mimetypes in get_all_lexers():
- info[fullname] = {
- 'aliases': names,
- 'filenames': filenames,
- 'mimetypes': mimetypes
- }
- result['lexers'] = info
-
- if 'formatter' in requested_items:
- info = {}
- for cls in get_all_formatters():
- doc = docstring_headline(cls)
- info[cls.name] = {
- 'aliases': cls.aliases,
- 'filenames': cls.filenames,
- 'doc': doc
- }
- result['formatters'] = info
-
- if 'filter' in requested_items:
- info = {}
- for name in get_all_filters():
- cls = find_filter_class(name)
- info[name] = {
- 'doc': docstring_headline(cls)
- }
- result['filters'] = info
-
- if 'style' in requested_items:
- info = {}
- for name in get_all_styles():
- cls = get_style_by_name(name)
- info[name] = {
- 'doc': docstring_headline(cls)
- }
- result['styles'] = info
-
- json.dump(result, sys.stdout)
-
-def main_inner(parser, argns):
- if argns.help:
- parser.print_help()
- return 0
-
- if argns.V:
- print('Pygments version %s, (c) 2006-2023 by Georg Brandl, Matthäus '
- 'Chajdas and contributors.' % __version__)
- return 0
-
- def is_only_option(opt):
- return not any(v for (k, v) in vars(argns).items() if k != opt)
-
- # handle ``pygmentize -L``
- if argns.L is not None:
- arg_set = set()
- for k, v in vars(argns).items():
- if v:
- arg_set.add(k)
-
- arg_set.discard('L')
- arg_set.discard('json')
-
- if arg_set:
- parser.print_help(sys.stderr)
- return 2
-
- # print version
- if not argns.json:
- main(['', '-V'])
- allowed_types = {'lexer', 'formatter', 'filter', 'style'}
- largs = [arg.rstrip('s') for arg in argns.L]
- if any(arg not in allowed_types for arg in largs):
- parser.print_help(sys.stderr)
- return 0
- if not largs:
- largs = allowed_types
- if not argns.json:
- for arg in largs:
- _print_list(arg)
- else:
- _print_list_as_json(largs)
- return 0
-
- # handle ``pygmentize -H``
- if argns.H:
- if not is_only_option('H'):
- parser.print_help(sys.stderr)
- return 2
- what, name = argns.H
- if what not in ('lexer', 'formatter', 'filter'):
- parser.print_help(sys.stderr)
- return 2
- return _print_help(what, name)
-
- # parse -O options
- parsed_opts = _parse_options(argns.O or [])
-
- # parse -P options
- for p_opt in argns.P or []:
- try:
- name, value = p_opt.split('=', 1)
- except ValueError:
- parsed_opts[p_opt] = True
- else:
- parsed_opts[name] = value
-
- # encodings
- inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding'))
- outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding'))
-
- # handle ``pygmentize -N``
- if argns.N:
- lexer = find_lexer_class_for_filename(argns.N)
- if lexer is None:
- lexer = TextLexer
-
- print(lexer.aliases[0])
- return 0
-
- # handle ``pygmentize -C``
- if argns.C:
- inp = sys.stdin.buffer.read()
- try:
- lexer = guess_lexer(inp, inencoding=inencoding)
- except ClassNotFound:
- lexer = TextLexer
-
- print(lexer.aliases[0])
- return 0
-
- # handle ``pygmentize -S``
- S_opt = argns.S
- a_opt = argns.a
- if S_opt is not None:
- f_opt = argns.f
- if not f_opt:
- parser.print_help(sys.stderr)
- return 2
- if argns.l or argns.INPUTFILE:
- parser.print_help(sys.stderr)
- return 2
-
- try:
- parsed_opts['style'] = S_opt
- fmter = get_formatter_by_name(f_opt, **parsed_opts)
- except ClassNotFound as err:
- print(err, file=sys.stderr)
- return 1
-
- print(fmter.get_style_defs(a_opt or ''))
- return 0
-
- # if no -S is given, -a is not allowed
- if argns.a is not None:
- parser.print_help(sys.stderr)
- return 2
-
- # parse -F options
- F_opts = _parse_filters(argns.F or [])
-
- # -x: allow custom (eXternal) lexers and formatters
- allow_custom_lexer_formatter = bool(argns.x)
-
- # select lexer
- lexer = None
-
- # given by name?
- lexername = argns.l
- if lexername:
- # custom lexer, located relative to user's cwd
- if allow_custom_lexer_formatter and '.py' in lexername:
- try:
- filename = None
- name = None
- if ':' in lexername:
- filename, name = lexername.rsplit(':', 1)
-
- if '.py' in name:
- # This can happen on Windows: If the lexername is
- # C:\lexer.py -- return to normal load path in that case
- name = None
-
- if filename and name:
- lexer = load_lexer_from_file(filename, name,
- **parsed_opts)
- else:
- lexer = load_lexer_from_file(lexername, **parsed_opts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
- else:
- try:
- lexer = get_lexer_by_name(lexername, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- # read input code
- code = None
-
- if argns.INPUTFILE:
- if argns.s:
- print('Error: -s option not usable when input file specified',
- file=sys.stderr)
- return 2
-
- infn = argns.INPUTFILE
- try:
- with open(infn, 'rb') as infp:
- code = infp.read()
- except Exception as err:
- print('Error: cannot read infile:', err, file=sys.stderr)
- return 1
- if not inencoding:
- code, inencoding = guess_decode(code)
-
- # do we have to guess the lexer?
- if not lexer:
- try:
- lexer = get_lexer_for_filename(infn, code, **parsed_opts)
- except ClassNotFound as err:
- if argns.g:
- try:
- lexer = guess_lexer(code, **parsed_opts)
- except ClassNotFound:
- lexer = TextLexer(**parsed_opts)
- else:
- print('Error:', err, file=sys.stderr)
- return 1
- except OptionError as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- elif not argns.s: # treat stdin as full file (-s support is later)
- # read code from terminal, always in binary mode since we want to
- # decode ourselves and be tolerant with it
- code = sys.stdin.buffer.read() # use .buffer to get a binary stream
- if not inencoding:
- code, inencoding = guess_decode_from_terminal(code, sys.stdin)
- # else the lexer will do the decoding
- if not lexer:
- try:
- lexer = guess_lexer(code, **parsed_opts)
- except ClassNotFound:
- lexer = TextLexer(**parsed_opts)
-
- else: # -s option needs a lexer with -l
- if not lexer:
- print('Error: when using -s a lexer has to be selected with -l',
- file=sys.stderr)
- return 2
-
- # process filters
- for fname, fopts in F_opts:
- try:
- lexer.add_filter(fname, **fopts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- # select formatter
- outfn = argns.o
- fmter = argns.f
- if fmter:
- # custom formatter, located relative to user's cwd
- if allow_custom_lexer_formatter and '.py' in fmter:
- try:
- filename = None
- name = None
- if ':' in fmter:
- # Same logic as above for custom lexer
- filename, name = fmter.rsplit(':', 1)
-
- if '.py' in name:
- name = None
-
- if filename and name:
- fmter = load_formatter_from_file(filename, name,
- **parsed_opts)
- else:
- fmter = load_formatter_from_file(fmter, **parsed_opts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
- else:
- try:
- fmter = get_formatter_by_name(fmter, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- if outfn:
- if not fmter:
- try:
- fmter = get_formatter_for_filename(outfn, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
- try:
- outfile = open(outfn, 'wb')
- except Exception as err:
- print('Error: cannot open outfile:', err, file=sys.stderr)
- return 1
- else:
- if not fmter:
- if os.environ.get('COLORTERM','') in ('truecolor', '24bit'):
- fmter = TerminalTrueColorFormatter(**parsed_opts)
- elif '256' in os.environ.get('TERM', ''):
- fmter = Terminal256Formatter(**parsed_opts)
- else:
- fmter = TerminalFormatter(**parsed_opts)
- outfile = sys.stdout.buffer
-
- # determine output encoding if not explicitly selected
- if not outencoding:
- if outfn:
- # output file? use lexer encoding for now (can still be None)
- fmter.encoding = inencoding
- else:
- # else use terminal encoding
- fmter.encoding = terminal_encoding(sys.stdout)
-
- # provide coloring under Windows, if possible
- if not outfn and sys.platform in ('win32', 'cygwin') and \
- fmter.name in ('Terminal', 'Terminal256'): # pragma: no cover
- # unfortunately colorama doesn't support binary streams on Py3
- outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding)
- fmter.encoding = None
- try:
- import pip._vendor.colorama.initialise as colorama_initialise
- except ImportError:
- pass
- else:
- outfile = colorama_initialise.wrap_stream(
- outfile, convert=None, strip=None, autoreset=False, wrap=True)
-
- # When using the LaTeX formatter and the option `escapeinside` is
- # specified, we need a special lexer which collects escaped text
- # before running the chosen language lexer.
- escapeinside = parsed_opts.get('escapeinside', '')
- if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter):
- left = escapeinside[0]
- right = escapeinside[1]
- lexer = LatexEmbeddedLexer(left, right, lexer)
-
- # ... and do it!
- if not argns.s:
- # process whole input as per normal...
- try:
- highlight(code, lexer, fmter, outfile)
- finally:
- if outfn:
- outfile.close()
- return 0
- else:
- # line by line processing of stdin (eg: for 'tail -f')...
- try:
- while 1:
- line = sys.stdin.buffer.readline()
- if not line:
- break
- if not inencoding:
- line = guess_decode_from_terminal(line, sys.stdin)[0]
- highlight(line, lexer, fmter, outfile)
- if hasattr(outfile, 'flush'):
- outfile.flush()
- return 0
- except KeyboardInterrupt: # pragma: no cover
- return 0
- finally:
- if outfn:
- outfile.close()
-
-
-class HelpFormatter(argparse.HelpFormatter):
- def __init__(self, prog, indent_increment=2, max_help_position=16, width=None):
- if width is None:
- try:
- width = shutil.get_terminal_size().columns - 2
- except Exception:
- pass
- argparse.HelpFormatter.__init__(self, prog, indent_increment,
- max_help_position, width)
-
-
-def main(args=sys.argv):
- """
- Main command line entry point.
- """
- desc = "Highlight an input file and write the result to an output file."
- parser = argparse.ArgumentParser(description=desc, add_help=False,
- formatter_class=HelpFormatter)
-
- operation = parser.add_argument_group('Main operation')
- lexersel = operation.add_mutually_exclusive_group()
- lexersel.add_argument(
- '-l', metavar='LEXER',
- help='Specify the lexer to use. (Query names with -L.) If not '
- 'given and -g is not present, the lexer is guessed from the filename.')
- lexersel.add_argument(
- '-g', action='store_true',
- help='Guess the lexer from the file contents, or pass through '
- 'as plain text if nothing can be guessed.')
- operation.add_argument(
- '-F', metavar='FILTER[:options]', action='append',
- help='Add a filter to the token stream. (Query names with -L.) '
- 'Filter options are given after a colon if necessary.')
- operation.add_argument(
- '-f', metavar='FORMATTER',
- help='Specify the formatter to use. (Query names with -L.) '
- 'If not given, the formatter is guessed from the output filename, '
- 'and defaults to the terminal formatter if the output is to the '
- 'terminal or an unknown file extension.')
- operation.add_argument(
- '-O', metavar='OPTION=value[,OPTION=value,...]', action='append',
- help='Give options to the lexer and formatter as a comma-separated '
- 'list of key-value pairs. '
- 'Example: `-O bg=light,python=cool`.')
- operation.add_argument(
- '-P', metavar='OPTION=value', action='append',
- help='Give a single option to the lexer and formatter - with this '
- 'you can pass options whose value contains commas and equal signs. '
- 'Example: `-P "heading=Pygments, the Python highlighter"`.')
- operation.add_argument(
- '-o', metavar='OUTPUTFILE',
- help='Where to write the output. Defaults to standard output.')
-
- operation.add_argument(
- 'INPUTFILE', nargs='?',
- help='Where to read the input. Defaults to standard input.')
-
- flags = parser.add_argument_group('Operation flags')
- flags.add_argument(
- '-v', action='store_true',
- help='Print a detailed traceback on unhandled exceptions, which '
- 'is useful for debugging and bug reports.')
- flags.add_argument(
- '-s', action='store_true',
- help='Process lines one at a time until EOF, rather than waiting to '
- 'process the entire file. This only works for stdin, only for lexers '
- 'with no line-spanning constructs, and is intended for streaming '
- 'input such as you get from `tail -f`. '
- 'Example usage: `tail -f sql.log | pygmentize -s -l sql`.')
- flags.add_argument(
- '-x', action='store_true',
- help='Allow custom lexers and formatters to be loaded from a .py file '
- 'relative to the current working directory. For example, '
- '`-l ./customlexer.py -x`. By default, this option expects a file '
- 'with a class named CustomLexer or CustomFormatter; you can also '
- 'specify your own class name with a colon (`-l ./lexer.py:MyLexer`). '
- 'Users should be very careful not to use this option with untrusted '
- 'files, because it will import and run them.')
- flags.add_argument('--json', help='Output as JSON. This can '
- 'only be used in conjunction with -L.',
- default=False,
- action='store_true')
-
- special_modes_group = parser.add_argument_group(
- 'Special modes - do not do any highlighting')
- special_modes = special_modes_group.add_mutually_exclusive_group()
- special_modes.add_argument(
- '-S', metavar='STYLE -f formatter',
- help='Print style definitions for STYLE for a formatter '
- 'given with -f. The argument given by -a is formatter '
- 'dependent.')
- special_modes.add_argument(
- '-L', nargs='*', metavar='WHAT',
- help='List lexers, formatters, styles or filters -- '
- 'give additional arguments for the thing(s) you want to list '
- '(e.g. "styles"), or omit them to list everything.')
- special_modes.add_argument(
- '-N', metavar='FILENAME',
- help='Guess and print out a lexer name based solely on the given '
- 'filename. Does not take input or highlight anything. If no specific '
- 'lexer can be determined, "text" is printed.')
- special_modes.add_argument(
- '-C', action='store_true',
- help='Like -N, but print out a lexer name based solely on '
- 'a given content from standard input.')
- special_modes.add_argument(
- '-H', action='store', nargs=2, metavar=('NAME', 'TYPE'),
- help='Print detailed help for the object <name> of type <type>, '
- 'where <type> is one of "lexer", "formatter" or "filter".')
- special_modes.add_argument(
- '-V', action='store_true',
- help='Print the package version.')
- special_modes.add_argument(
- '-h', '--help', action='store_true',
- help='Print this help.')
- special_modes_group.add_argument(
- '-a', metavar='ARG',
- help='Formatter-specific additional argument for the -S (print '
- 'style sheet) mode.')
-
- argns = parser.parse_args(args[1:])
-
- try:
- return main_inner(parser, argns)
- except BrokenPipeError:
- # someone closed our stdout, e.g. by quitting a pager.
- return 0
- except Exception:
- if argns.v:
- print(file=sys.stderr)
- print('*' * 65, file=sys.stderr)
- print('An unhandled exception occurred while highlighting.',
- file=sys.stderr)
- print('Please report the whole traceback to the issue tracker at',
- file=sys.stderr)
- print('<https://github.com/pygments/pygments/issues>.',
- file=sys.stderr)
- print('*' * 65, file=sys.stderr)
- print(file=sys.stderr)
- raise
- import traceback
- info = traceback.format_exception(*sys.exc_info())
- msg = info[-1].strip()
- if len(info) >= 3:
- # extract relevant file and position info
- msg += '\n (f%s)' % info[-2].split('\n')[0].strip()[1:]
- print(file=sys.stderr)
- print('*** Error while highlighting:', file=sys.stderr)
- print(msg, file=sys.stderr)
- print('*** If this is a bug you want to report, please rerun with -v.',
- file=sys.stderr)
- return 1
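Near the end of `main_inner`, when no formatter was selected and output goes to the terminal, the code sniffs `COLORTERM` and `TERM` to pick between truecolor, 256-color, and plain terminal formatters. The function below mirrors that decision logic in isolation, returning a label string in place of the Pygments formatter class:

```python
# Mirrors the env-var sniffing main_inner uses to choose a default
# terminal formatter; returns a name instead of instantiating the
# corresponding Pygments class.
def pick_terminal_formatter(env):
    if env.get('COLORTERM', '') in ('truecolor', '24bit'):
        return 'TerminalTrueColorFormatter'
    if '256' in env.get('TERM', ''):
        return 'Terminal256Formatter'
    return 'TerminalFormatter'

print(pick_terminal_formatter({'COLORTERM': 'truecolor'}))      # truecolor wins
print(pick_terminal_formatter({'TERM': 'xterm-256color'}))      # 256-color fallback
print(pick_terminal_formatter({}))                              # plain terminal
```

Note the ordering: `COLORTERM` takes precedence, so a truecolor terminal that also sets `TERM=xterm-256color` still gets the truecolor formatter.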
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py
deleted file mode 100644
index f284bcafa6ab2e1c9ae51be54107836e68cfb0d3..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import inspect
-from functools import partial
-from typing import (
- Any,
- Callable,
- Iterable,
- List,
- Optional,
- Tuple,
- Type,
- TypeVar,
- Union,
- overload,
-)
-
-T = TypeVar("T")
-
-
-Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]]
-RichReprResult = Result
-
-
-class ReprError(Exception):
- """An error occurred when attempting to build a repr."""
-
-
-@overload
-def auto(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def auto(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def auto(
- cls: Optional[Type[T]] = None, *, angular: Optional[bool] = None
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- """Class decorator to create __repr__ from __rich_repr__"""
-
- def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]:
- def auto_repr(self: T) -> str:
- """Create repr string from __rich_repr__"""
- repr_str: List[str] = []
- append = repr_str.append
-
- angular: bool = getattr(self.__rich_repr__, "angular", False) # type: ignore[attr-defined]
- for arg in self.__rich_repr__(): # type: ignore[attr-defined]
- if isinstance(arg, tuple):
- if len(arg) == 1:
- append(repr(arg[0]))
- else:
- key, value, *default = arg
- if key is None:
- append(repr(value))
- else:
- if default and default[0] == value:
- continue
- append(f"{key}={value!r}")
- else:
- append(repr(arg))
- if angular:
- return f"<{self.__class__.__name__} {' '.join(repr_str)}>"
- else:
- return f"{self.__class__.__name__}({', '.join(repr_str)})"
-
- def auto_rich_repr(self: Type[T]) -> Result:
- """Auto generate __rich_rep__ from signature of __init__"""
- try:
- signature = inspect.signature(self.__init__)
- for name, param in signature.parameters.items():
- if param.kind == param.POSITIONAL_ONLY:
- yield getattr(self, name)
- elif param.kind in (
- param.POSITIONAL_OR_KEYWORD,
- param.KEYWORD_ONLY,
- ):
- if param.default == param.empty:
- yield getattr(self, param.name)
- else:
- yield param.name, getattr(self, param.name), param.default
- except Exception as error:
- raise ReprError(
- f"Failed to auto generate __rich_repr__; {error}"
- ) from None
-
- if not hasattr(cls, "__rich_repr__"):
- auto_rich_repr.__doc__ = "Build a rich repr"
- cls.__rich_repr__ = auto_rich_repr # type: ignore[attr-defined]
-
- auto_repr.__doc__ = "Return repr(self)"
- cls.__repr__ = auto_repr # type: ignore[assignment]
- if angular is not None:
- cls.__rich_repr__.angular = angular # type: ignore[attr-defined]
- return cls
-
- if cls is None:
- return partial(do_replace, angular=angular)
- else:
- return do_replace(cls, angular=angular)
-
-
-@overload
-def rich_repr(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def rich_repr(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def rich_repr(
- cls: Optional[Type[T]] = None, *, angular: bool = False
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- if cls is None:
- return auto(angular=angular)
- else:
- return auto(cls)
-
-
-if __name__ == "__main__":
-
- @auto
- class Foo:
- def __rich_repr__(self) -> Result:
- yield "foo"
- yield "bar", {"shopping": ["eggs", "ham", "pineapple"]}
- yield "buy", "hand sanitizer"
-
- foo = Foo()
- from pip._vendor.rich.console import Console
-
- console = Console()
-
- console.rule("Standard repr")
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
-
- console.rule("Angular repr")
- Foo.__rich_repr__.angular = True # type: ignore[attr-defined]
-
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py
deleted file mode 100644
index afe8da1a4a30daf6e48ffba514656e7c86c9abaa..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py
+++ /dev/null
@@ -1,251 +0,0 @@
-from typing import Iterator, List, Optional, Tuple
-
-from ._loop import loop_first, loop_last
-from .console import Console, ConsoleOptions, RenderableType, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment
-from .style import Style, StyleStack, StyleType
-from .styled import Styled
-
-
-class Tree(JupyterMixin):
- """A renderable for a tree structure.
-
- Args:
- label (RenderableType): The renderable or str for the tree label.
- style (StyleType, optional): Style of this tree. Defaults to "tree".
- guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line".
- expanded (bool, optional): Also display children. Defaults to True.
- highlight (bool, optional): Highlight renderable (if str). Defaults to False.
- """
-
- def __init__(
- self,
- label: RenderableType,
- *,
- style: StyleType = "tree",
- guide_style: StyleType = "tree.line",
- expanded: bool = True,
- highlight: bool = False,
- hide_root: bool = False,
- ) -> None:
- self.label = label
- self.style = style
- self.guide_style = guide_style
- self.children: List[Tree] = []
- self.expanded = expanded
- self.highlight = highlight
- self.hide_root = hide_root
-
- def add(
- self,
- label: RenderableType,
- *,
- style: Optional[StyleType] = None,
- guide_style: Optional[StyleType] = None,
- expanded: bool = True,
- highlight: Optional[bool] = False,
- ) -> "Tree":
- """Add a child tree.
-
- Args:
- label (RenderableType): The renderable or str for the tree label.
- style (StyleType, optional): Style of this tree. Defaults to "tree".
- guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line".
- expanded (bool, optional): Also display children. Defaults to True.
- highlight (Optional[bool], optional): Highlight renderable (if str). Defaults to False.
-
- Returns:
- Tree: A new child Tree, which may be further modified.
- """
- node = Tree(
- label,
- style=self.style if style is None else style,
- guide_style=self.guide_style if guide_style is None else guide_style,
- expanded=expanded,
- highlight=self.highlight if highlight is None else highlight,
- )
- self.children.append(node)
- return node
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
-
- stack: List[Iterator[Tuple[bool, Tree]]] = []
- pop = stack.pop
- push = stack.append
- new_line = Segment.line()
-
- get_style = console.get_style
- null_style = Style.null()
- guide_style = get_style(self.guide_style, default="") or null_style
- SPACE, CONTINUE, FORK, END = range(4)
-
- ASCII_GUIDES = (" ", "| ", "+-- ", "`-- ")
- TREE_GUIDES = [
- (" ", "│ ", "├── ", "└── "),
- (" ", "┃ ", "┣━━ ", "┗━━ "),
- (" ", "║ ", "╠══ ", "╚══ "),
- ]
- _Segment = Segment
-
- def make_guide(index: int, style: Style) -> Segment:
- """Make a Segment for a level of the guide lines."""
- if options.ascii_only:
- line = ASCII_GUIDES[index]
- else:
- guide = 1 if style.bold else (2 if style.underline2 else 0)
- line = TREE_GUIDES[0 if options.legacy_windows else guide][index]
- return _Segment(line, style)
-
- levels: List[Segment] = [make_guide(CONTINUE, guide_style)]
- push(iter(loop_last([self])))
-
- guide_style_stack = StyleStack(get_style(self.guide_style))
- style_stack = StyleStack(get_style(self.style))
- remove_guide_styles = Style(bold=False, underline2=False)
-
- depth = 0
-
- while stack:
- stack_node = pop()
- try:
- last, node = next(stack_node)
- except StopIteration:
- levels.pop()
- if levels:
- guide_style = levels[-1].style or null_style
- levels[-1] = make_guide(FORK, guide_style)
- guide_style_stack.pop()
- style_stack.pop()
- continue
- push(stack_node)
- if last:
- levels[-1] = make_guide(END, levels[-1].style or null_style)
-
- guide_style = guide_style_stack.current + get_style(node.guide_style)
- style = style_stack.current + get_style(node.style)
- prefix = levels[(2 if self.hide_root else 1) :]
- renderable_lines = console.render_lines(
- Styled(node.label, style),
- options.update(
- width=options.max_width
- - sum(level.cell_length for level in prefix),
- highlight=self.highlight,
- height=None,
- ),
- pad=options.justify is not None,
- )
-
- if not (depth == 0 and self.hide_root):
- for first, line in loop_first(renderable_lines):
- if prefix:
- yield from _Segment.apply_style(
- prefix,
- style.background_style,
- post_style=remove_guide_styles,
- )
- yield from line
- yield new_line
- if first and prefix:
- prefix[-1] = make_guide(
- SPACE if last else CONTINUE, prefix[-1].style or null_style
- )
-
- if node.expanded and node.children:
- levels[-1] = make_guide(
- SPACE if last else CONTINUE, levels[-1].style or null_style
- )
- levels.append(
- make_guide(END if len(node.children) == 1 else FORK, guide_style)
- )
- style_stack.push(get_style(node.style))
- guide_style_stack.push(get_style(node.guide_style))
- push(iter(loop_last(node.children)))
- depth += 1
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- stack: List[Iterator[Tree]] = [iter([self])]
- pop = stack.pop
- push = stack.append
- minimum = 0
- maximum = 0
- measure = Measurement.get
- level = 0
- while stack:
- iter_tree = pop()
- try:
- tree = next(iter_tree)
- except StopIteration:
- level -= 1
- continue
- push(iter_tree)
- min_measure, max_measure = measure(console, options, tree.label)
- indent = level * 4
- minimum = max(min_measure + indent, minimum)
- maximum = max(max_measure + indent, maximum)
- if tree.expanded and tree.children:
- push(iter(tree.children))
- level += 1
- return Measurement(minimum, maximum)
-
-
-if __name__ == "__main__": # pragma: no cover
-
- from pip._vendor.rich.console import Group
- from pip._vendor.rich.markdown import Markdown
- from pip._vendor.rich.panel import Panel
- from pip._vendor.rich.syntax import Syntax
- from pip._vendor.rich.table import Table
-
- table = Table(row_styles=["", "dim"])
-
- table.add_column("Released", style="cyan", no_wrap=True)
- table.add_column("Title", style="magenta")
- table.add_column("Box Office", justify="right", style="green")
-
- table.add_row("Dec 20, 2019", "Star Wars: The Rise of Skywalker", "$952,110,690")
- table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347")
- table.add_row("Dec 15, 2017", "Star Wars Ep. V111: The Last Jedi", "$1,332,539,889")
- table.add_row("Dec 16, 2016", "Rogue One: A Star Wars Story", "$1,332,439,889")
-
- code = """\
-class Segment(NamedTuple):
- text: str = ""
- style: Optional[Style] = None
- is_control: bool = False
-"""
- syntax = Syntax(code, "python", theme="monokai", line_numbers=True)
-
- markdown = Markdown(
- """\
-### example.md
-> Hello, World!
->
-> Markdown _all_ the things
-"""
- )
-
- root = Tree("🌲 [b green]Rich Tree", highlight=True, hide_root=True)
-
- node = root.add(":file_folder: Renderables", guide_style="red")
- simple_node = node.add(":file_folder: [bold yellow]Atomic", guide_style="uu green")
- simple_node.add(Group("📄 Syntax", syntax))
- simple_node.add(Group("📄 Markdown", Panel(markdown, border_style="green")))
-
- containers_node = node.add(
- ":file_folder: [bold magenta]Containers", guide_style="bold magenta"
- )
- containers_node.expanded = True
- panel = Panel.fit("Just a panel", border_style="red")
- containers_node.add(Group("📄 Panels", panel))
-
- containers_node.add(Group("📄 [b magenta]Table", table))
-
- console = Console()
-
- console.print(root)
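`__rich_measure__` above walks the tree iteratively and computes the minimum and maximum render widths, where every nesting level contributes 4 cells of guide prefix (matching the 4-character guide strings like `"├── "`). The same computation, written recursively for brevity with plain dicts standing in for `Tree` nodes (a hypothetical structure, for illustration only):

```python
# Sketch of the width measurement in Tree.__rich_measure__: each level
# of depth adds a 4-cell guide indent, and the tree's width is the
# widest label-plus-indent anywhere in it.
def measure_tree(node, level=0):
    width = len(node['label']) + level * 4
    for child in node.get('children', []):
        width = max(width, measure_tree(child, level + 1))
    return width

tree = {'label': 'root', 'children': [
    {'label': 'child', 'children': [{'label': 'grandchild'}]},
]}
print(measure_tree(tree))  # 'grandchild' at depth 2: 10 + 8 = 18
```

The original uses an explicit stack rather than recursion, and measures labels with `Measurement.get` instead of `len` so styled and multi-cell text is counted correctly; the indent arithmetic is the same.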
diff --git a/spaces/TempoFunk/makeavid-sd-jax/app.py b/spaces/TempoFunk/makeavid-sd-jax/app.py
deleted file mode 100644
index 713f891b3e0e9a49cce59e4f768bd65e183ee6bf..0000000000000000000000000000000000000000
--- a/spaces/TempoFunk/makeavid-sd-jax/app.py
+++ /dev/null
@@ -1,368 +0,0 @@
-
-import os
-import json
-from io import BytesIO
-import base64
-from functools import partial
-
-from PIL import Image, ImageOps
-import gradio as gr
-
-from makeavid_sd.inference import (
- InferenceUNetPseudo3D,
- jnp,
- SCHEDULERS
-)
-
-print(os.environ.get('XLA_PYTHON_CLIENT_PREALLOCATE', 'NotSet'))
-print(os.environ.get('XLA_PYTHON_CLIENT_ALLOCATOR', 'NotSet'))
-
-_seen_compilations = set()
-
-_model = InferenceUNetPseudo3D(
- model_path = 'TempoFunk/makeavid-sd-jax',
- dtype = jnp.float16,
- hf_auth_token = os.environ.get('HUGGING_FACE_HUB_TOKEN', None)
-)
-
-import datetime
-print(datetime.datetime.now(datetime.timezone.utc).isoformat())
-
-if _model.failed:
- trace = f'```{_model.failed}```'
- with gr.Blocks(title = 'Make-A-Video Stable Diffusion JAX', analytics_enabled = False) as demo:
- exception = gr.Markdown(trace)
- demo.launch()
-
-_examples = []
-_expath = 'examples'
-for x in sorted(os.listdir(_expath)):
- with open(os.path.join(_expath, x, 'params.json'), 'r') as f:
- ex = json.load(f)
- ex['image_input'] = None
- if os.path.isfile(os.path.join(_expath, x, 'input.png')):
- ex['image_input'] = os.path.join(_expath, x, 'input.png')
- ex['image_output'] = os.path.join(_expath, x, 'output.gif')
- _examples.append(ex)
-
-
-_output_formats = (
- 'webp', 'gif'
-)
-
-# NOTE: gradio mishandles type-hinted signatures, so the hints are omitted here.
-def generate(
- prompt = 'An elderly man having a great time in the park.',
- neg_prompt = '',
- hint_image = None,
- inference_steps = 20,
- cfg = 15.0,
- cfg_image = 9.0,
- seed = 0,
- fps = 12,
- num_frames = 24,
- height = 512,
- width = 512,
- scheduler_type = 'dpm',
- output_format = 'gif'
-) -> str:
- num_frames = min(24, max(2, int(num_frames)))
- inference_steps = min(60, max(2, int(inference_steps)))
- height = min(576, max(256, int(height)))
- width = min(576, max(256, int(width)))
- height = (height // 64) * 64
- width = (width // 64) * 64
- cfg = max(cfg, 1.0)
- cfg_image = max(cfg_image, 1.0)
- fps = min(1000, max(1, int(fps)))
- seed = min(2**32-2, int(seed))
- if seed < 0:
- seed = -seed
- if hint_image is not None:
- if hint_image.mode != 'RGB':
- hint_image = hint_image.convert('RGB')
- if hint_image.size != (width, height):
- hint_image = ImageOps.fit(hint_image, (width, height), method = Image.Resampling.LANCZOS)
- scheduler_type = scheduler_type.lower()
- if scheduler_type not in SCHEDULERS:
- scheduler_type = 'dpm'
- output_format = output_format.lower()
- if output_format not in _output_formats:
- output_format = 'gif'
- mask_image = None
- images = _model.generate(
- prompt = [prompt] * _model.device_count,
- neg_prompt = neg_prompt,
- hint_image = hint_image,
- mask_image = mask_image,
- inference_steps = inference_steps,
- cfg = cfg,
- cfg_image = cfg_image,
- height = height,
- width = width,
- num_frames = num_frames,
- seed = seed,
- scheduler_type = scheduler_type
- )
- _seen_compilations.add((hint_image is None, inference_steps, height, width, num_frames))
- with BytesIO() as buffer:
- images[1].save(
- buffer,
- format = output_format,
- save_all = True,
- append_images = images[2:],
- loop = 0,
- duration = round(1000 / fps),
- allow_mixed = True,
- optimize = True
- )
- data = f'data:image/{output_format};base64,' + base64.b64encode(buffer.getvalue()).decode()
- with BytesIO() as buffer:
- images[-1].save(buffer, format = 'png', optimize = True)
- last_data = 'data:image/png;base64,' + base64.b64encode(buffer.getvalue()).decode()
- with BytesIO() as buffer:
- images[0].save(buffer, format = 'png', optimize = True)
- first_data = 'data:image/png;base64,' + base64.b64encode(buffer.getvalue()).decode()
- return data, last_data, first_data
-
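The clamping and 64-pixel snapping at the top of `generate` can be factored into a small helper. This is a sketch, not part of the original app; the helper name is invented:

```python
def clamp_snap(value, lo, hi, multiple=64):
    """Clamp an integer into [lo, hi], then snap down to a multiple."""
    value = min(hi, max(lo, int(value)))
    return (value // multiple) * multiple

# Mirrors `generate`: height/width limited to 256..576, multiples of 64.
print(clamp_snap(600, 256, 576))  # -> 576
print(clamp_snap(300, 256, 576))  # -> 256
```

The snap-down happens after the clamp, so an in-range value like 300 still lands on the nearest lower multiple of 64.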
-def check_if_compiled(hint_image, inference_steps, height, width, num_frames, scheduler_type, message):
- height = int(height)
- width = int(width)
- inference_steps = int(inference_steps)
- height = (height // 64) * 64
- width = (width // 64) * 64
- if (hint_image is None, inference_steps, height, width, num_frames, scheduler_type) in _seen_compilations:
- return ''
- else:
- return message
-
-with gr.Blocks(title = 'Make-A-Video Stable Diffusion JAX', analytics_enabled = False) as demo:
- variant = 'panel'
- with gr.Row():
- with gr.Column():
- intro1 = gr.Markdown("""
- # Make-A-Video Stable Diffusion JAX
-
- We have extended a pretrained latent-diffusion inpainting image generation model with **temporal convolutions and attention**.
- We guide the video generation with a hint image by taking advantage of the extra 5 input channels of the inpainting model.
- In this demo the hint image can be provided by the user; otherwise it is generated by a generative image model.
-
- The temporal layers are a port of [Make-A-Video PyTorch](https://github.com/lucidrains/make-a-video-pytorch) to [JAX](https://github.com/google/jax) utilizing [FLAX](https://github.com/google/flax).
- The convolution is pseudo-3D and separately convolves across the spatial dimensions in 2D and over the temporal dimension in 1D.
- Temporal attention is purely self attention and also separately attends to time.
-
- Only the new temporal layers have been fine-tuned on a dataset of videos themed around dance.
- The model has been trained for 80 epochs on a dataset of 18,000 videos with 120 frames each, randomly selecting a 24-frame range from each sample.
-
- Model: [TempoFunk/makeavid-sd-jax](https://huggingface.co/TempoFunk/makeavid-sd-jax)
- Datasets: [TempoFunk/tempofunk-sdance](https://huggingface.co/datasets/TempoFunk/tempofunk-sdance), [TempoFunk/small](https://huggingface.co/datasets/TempoFunk/small)
-
- Model implementation and training code can be found at (WIP)
- """)
- with gr.Column():
- intro3 = gr.Markdown("""
- **Please be patient. The model might have to compile with current parameters.**
-
- This can take up to 5 minutes on the first run, and 2-3 minutes on later runs.
- The compilation will be cached and later runs with the same parameters
- will be much faster.
-
- Changes to the following parameters require the model to compile
- - Number of frames
- - Width & Height
- - Inference steps
- - Input image vs. no input image
- - Noise scheduler type
-
- If you encounter any issues, please report them here: [Space discussions](https://huggingface.co/spaces/TempoFunk/makeavid-sd-jax/discussions) (or DM [@lopho](https://twitter.com/lopho))
-
- Leave a ❤️ like if you like. Consider it a dopamine donation at no cost.
- """)
-
- with gr.Row(variant = variant):
- with gr.Column():
- with gr.Row():
- #cancel_button = gr.Button(value = 'Cancel')
- submit_button = gr.Button(value = 'Make A Video', variant = 'primary')
- prompt_input = gr.Textbox(
- label = 'Prompt',
- value = 'They are dancing in the club but everybody is a 3d cg hairy monster wearing a hairy costume.',
- interactive = True
- )
- neg_prompt_input = gr.Textbox(
- label = 'Negative prompt (optional)',
- value = 'monochrome, saturated',
- interactive = True
- )
- cfg_input = gr.Slider(
- label = 'Guidance scale video',
- minimum = 1.0,
- maximum = 20.0,
- step = 0.1,
- value = 15.0,
- interactive = True
- )
- cfg_image_input = gr.Slider(
- label = 'Guidance scale hint (no effect with input image)',
- minimum = 1.0,
- maximum = 20.0,
- step = 0.1,
- value = 15.0,
- interactive = True
- )
- seed_input = gr.Number(
- label = 'Random seed',
- value = 0,
- interactive = True,
- precision = 0
- )
- image_input = gr.Image(
- label = 'Hint image (optional)',
- interactive = True,
- image_mode = 'RGB',
- type = 'pil',
- optional = True,
- source = 'upload'
- )
- inference_steps_input = gr.Slider(
- label = 'Steps',
- minimum = 2,
- maximum = 60,
- value = 20,
- step = 1,
- interactive = True
- )
- num_frames_input = gr.Slider(
- label = 'Number of frames to generate',
- minimum = 2,
- maximum = 24,
- step = 1,
- value = 24,
- interactive = True
- )
- width_input = gr.Slider(
- label = 'Width',
- minimum = 256,
- maximum = 576,
- step = 64,
- value = 512,
- interactive = True
- )
- height_input = gr.Slider(
- label = 'Height',
- minimum = 256,
- maximum = 576,
- step = 64,
- value = 512,
- interactive = True
- )
- scheduler_input = gr.Dropdown(
- label = 'Noise scheduler',
- choices = list(SCHEDULERS.keys()),
- value = 'dpm',
- interactive = True
- )
- with gr.Row():
- fps_input = gr.Slider(
- label = 'Output FPS',
- minimum = 1,
- maximum = 1000,
- step = 1,
- value = 12,
- interactive = True
- )
- output_format = gr.Dropdown(
- label = 'Output format',
- choices = _output_formats,
- value = 'gif',
- interactive = True
- )
- with gr.Column():
- #will_trigger = gr.Markdown('')
- patience = gr.Markdown('**Please be patient. The model might have to compile with current parameters.**')
- image_output = gr.Image(
- label = 'Output',
- value = 'example.gif',
- interactive = False
- )
- tips = gr.Markdown('🤫 *Secret tip*: try using the last frame as input for the next generation.')
- with gr.Row():
- last_frame_output = gr.Image(
- label = 'Last frame',
- interactive = False
- )
- first_frame_output = gr.Image(
- label = 'Initial frame',
- interactive = False
- )
- examples_lst = []
- for x in _examples:
- examples_lst.append([
- x['image_output'],
- x['prompt'],
- x['neg_prompt'],
- x['image_input'],
- x['cfg'],
- x['cfg_image'],
- x['seed'],
- x['fps'],
- x['steps'],
- x['scheduler'],
- x['num_frames'],
- x['height'],
- x['width'],
- x['format']
- ])
- examples = gr.Examples(
- examples = examples_lst,
- inputs = [
- image_output,
- prompt_input,
- neg_prompt_input,
- image_input,
- cfg_input,
- cfg_image_input,
- seed_input,
- fps_input,
- inference_steps_input,
- scheduler_input,
- num_frames_input,
- height_input,
- width_input,
- output_format
- ],
- postprocess = False
- )
- #trigger_inputs = [ image_input, inference_steps_input, height_input, width_input, num_frames_input, scheduler_input ]
- #trigger_check_fun = partial(check_if_compiled, message = 'Current parameters need compilation.')
- #height_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- #width_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- #num_frames_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- #image_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- #inference_steps_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- #scheduler_input.change(fn = trigger_check_fun, inputs = trigger_inputs, outputs = will_trigger)
- submit_button.click(
- fn = generate,
- inputs = [
- prompt_input,
- neg_prompt_input,
- image_input,
- inference_steps_input,
- cfg_input,
- cfg_image_input,
- seed_input,
- fps_input,
- num_frames_input,
- height_input,
- width_input,
- scheduler_input,
- output_format
- ],
- outputs = [ image_output, last_frame_output, first_frame_output ],
- postprocess = False
- )
- #cancel_button.click(fn = lambda: None, cancels = ev)
-
-demo.queue(concurrency_count = 1, max_size = 8, api_open = True)
-demo.launch(show_api = True)
-
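The pseudo-3D factorization described in the intro markdown (a 2D spatial convolution plus a 1D temporal convolution) amounts to a pair of reshapes before each sub-convolution. This pure-Python sketch only illustrates the batching; the function name is invented:

```python
def pseudo3d_batch_shapes(b, c, t, h, w):
    # 2D spatial conv: fold time into the batch, convolve over (H, W).
    spatial = (b * t, c, h, w)
    # 1D temporal conv: fold space into the batch, convolve over T.
    temporal = (b * h * w, c, t)
    return spatial, temporal

print(pseudo3d_batch_shapes(1, 4, 24, 64, 64))
# -> ((24, 4, 64, 64), (4096, 4, 24))
```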
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py
deleted file mode 100644
index c9eee594a27cdec29ce5f2b6f7730171eda3805e..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import contextlib
-from unittest import mock
-import torch
-
-from detectron2.modeling import poolers
-from detectron2.modeling.proposal_generator import rpn
-from detectron2.modeling.roi_heads import keypoint_head, mask_head
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-
-from .c10 import (
- Caffe2Compatible,
- Caffe2FastRCNNOutputsInference,
- Caffe2KeypointRCNNInference,
- Caffe2MaskRCNNInference,
- Caffe2ROIPooler,
- Caffe2RPN,
-)
-
-
-class GenericMixin(object):
- pass
-
-
-class Caffe2CompatibleConverter(object):
- """
- A generic updater implementing the `create_from` interface: it modifies the
- module object in place, reassigning its class to replaceCls.
- """
-
- def __init__(self, replaceCls):
- self.replaceCls = replaceCls
-
- def create_from(self, module):
- # update module's class to the new class
- assert isinstance(module, torch.nn.Module)
- if issubclass(self.replaceCls, GenericMixin):
- # replaceCls should act as mixin, create a new class on-the-fly
- new_class = type(
- "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__),
- (self.replaceCls, module.__class__),
- {}, # {"new_method": lambda self: ...},
- )
- module.__class__ = new_class
- else:
- # replaceCls is complete class, this allow arbitrary class swap
- module.__class__ = self.replaceCls
-
- # initialize Caffe2Compatible
- if isinstance(module, Caffe2Compatible):
- module.tensor_mode = False
-
- return module
-
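`create_from` swaps a module's class at runtime, optionally building a mixin class on the fly. The mechanism is plain Python and can be demonstrated without torch; the class names below are illustrative:

```python
class TracingMixin:
    def traced(self):
        return f"traced {self.name}"

class Layer:
    def __init__(self, name):
        self.name = name

layer = Layer("conv1")
# Same trick as Caffe2CompatibleConverter.create_from: build a new class
# on the fly and reassign __class__, keeping the instance's state.
layer.__class__ = type(
    "TracingMixinMixedWithLayer", (TracingMixin, Layer), {}
)
print(layer.traced())  # -> traced conv1
```

Because only `__class__` changes, all existing attributes (here `name`) survive the swap, which is why the converter can patch already-constructed models.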
-
-def patch(model, target, updater, *args, **kwargs):
- """
- Recursively (post-order) update all modules of the target type and its
- subclasses, performing an initialization/composition/inheritance/... via
- updater.create_from.
- """
- for name, module in model.named_children():
- model._modules[name] = patch(module, target, updater, *args, **kwargs)
- if isinstance(model, target):
- return updater.create_from(model, *args, **kwargs)
- return model
-
-
-def patch_generalized_rcnn(model):
- ccc = Caffe2CompatibleConverter
- model = patch(model, rpn.RPN, ccc(Caffe2RPN))
- model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler))
-
- return model
-
-
-@contextlib.contextmanager
-def mock_fastrcnn_outputs_inference(
- tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers
-):
- with mock.patch.object(
- box_predictor_type,
- "inference",
- autospec=True,
- side_effect=Caffe2FastRCNNOutputsInference(tensor_mode),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
- with mock.patch(
- "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference()
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True):
- with mock.patch(
- "{}.keypoint_rcnn_inference".format(patched_module),
- side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-class ROIHeadsPatcher:
- def __init__(self, heads, use_heatmap_max_keypoint):
- self.heads = heads
- self.use_heatmap_max_keypoint = use_heatmap_max_keypoint
-
- @contextlib.contextmanager
- def mock_roi_heads(self, tensor_mode=True):
- """
- Patching several inference functions inside ROIHeads and its subclasses
-
- Args:
- tensor_mode (bool): whether the inputs/outputs are caffe2's tensor
- format or not. Default to True.
- """
- # NOTE: this requires that `keypoint_rcnn_inference` and `mask_rcnn_inference`
- # are called inside the same file as BaseXxxHead due to using mock.patch.
- kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__
- mask_head_mod = mask_head.BaseMaskRCNNHead.__module__
-
- mock_ctx_managers = [
- mock_fastrcnn_outputs_inference(
- tensor_mode=tensor_mode,
- check=True,
- box_predictor_type=type(self.heads.box_predictor),
- )
- ]
- if getattr(self.heads, "keypoint_on", False):
- mock_ctx_managers += [
- mock_keypoint_rcnn_inference(
- tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint
- )
- ]
- if getattr(self.heads, "mask_on", False):
- mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)]
-
- with contextlib.ExitStack() as stack: # python 3.3+
- for mgr in mock_ctx_managers:
- stack.enter_context(mgr)
- yield
diff --git a/spaces/Tetel/chat/EdgeGPT/EdgeGPT.py b/spaces/Tetel/chat/EdgeGPT/EdgeGPT.py
deleted file mode 100644
index d6f64ff075bdfc4aa618b307ca7e0a21f6296292..0000000000000000000000000000000000000000
--- a/spaces/Tetel/chat/EdgeGPT/EdgeGPT.py
+++ /dev/null
@@ -1,236 +0,0 @@
-"""
-EdgeGPT.py
-"""
-from __future__ import annotations
-
-import json
-from pathlib import Path
-from typing import Generator
-
-from .chathub import *
-from .conversation import *
-from .conversation_style import *
-from .request import *
-from .utilities import *
-
-
-class Chatbot:
- """
- Combines everything to make it seamless
- """
-
- def __init__(
- self,
- proxy: str | None = None,
- cookies: list[dict] | None = None,
- ) -> None:
- self.proxy: str | None = proxy
- self.chat_hub: ChatHub = ChatHub(
- Conversation(self.proxy, cookies=cookies),
- proxy=self.proxy,
- cookies=cookies,
- )
-
- @staticmethod
- async def create(
- proxy: str | None = None,
- cookies: list[dict] | None = None,
- imageInput: str | None = None
- ) -> Chatbot:
- self = Chatbot.__new__(Chatbot)
- self.proxy = proxy
- self.chat_hub = ChatHub(
- await Conversation.create(self.proxy, cookies=cookies, imageInput=imageInput),
- proxy=self.proxy,
- cookies=cookies,
- )
- return self
-
- async def save_conversation(self, filename: str) -> None:
- """
- Save the conversation to a file
- """
- with open(filename, "w") as f:
- conversation_id = self.chat_hub.request.conversation_id
- conversation_signature = self.chat_hub.request.conversation_signature
- client_id = self.chat_hub.request.client_id
- invocation_id = self.chat_hub.request.invocation_id
- f.write(
- json.dumps(
- {
- "conversation_id": conversation_id,
- "conversation_signature": conversation_signature,
- "client_id": client_id,
- "invocation_id": invocation_id,
- },
- ),
- )
-
- async def load_conversation(self, filename: str) -> None:
- """
- Load the conversation from a file
- """
- with open(filename) as f:
- conversation = json.load(f)
- self.chat_hub.request = ChatHubRequest(
- conversation_signature=conversation["conversation_signature"],
- client_id=conversation["client_id"],
- conversation_id=conversation["conversation_id"],
- invocation_id=conversation["invocation_id"],
- )
-
- async def get_conversation(self) -> dict:
- """
- Gets the conversation history from conversation_id (requires load_conversation)
- """
- return await self.chat_hub.get_conversation()
-
- async def get_activity(self) -> dict:
- """
- Gets the recent activity (requires cookies)
- """
- return await self.chat_hub.get_activity()
-
- async def ask(
- self,
- prompt: str,
- wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- webpage_context: str | None = None,
- search_result: bool = False,
- locale: str = guess_locale(),
- simplify_response: bool = False,
- ) -> dict:
- """
- Ask a question to the bot
- Response:
- {
- item (dict):
- messages (list[dict]):
- adaptiveCards (list[dict]):
- body (list[dict]):
- text (str): Response
- }
- To get the response, you can do:
- response["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"]
- """
- async for final, response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- webpage_context=webpage_context,
- search_result=search_result,
- locale=locale,
- ):
- if final:
- if not simplify_response:
- return response
- messages_left = response["item"]["throttling"][
- "maxNumUserMessagesInConversation"
- ] - response["item"]["throttling"].get(
- "numUserMessagesInConversation", 0
- )
- if messages_left == 0:
- raise Exception("Max messages reached")
- message = None
- for msg in reversed(response["item"]["messages"]):
- if msg.get("adaptiveCards") and msg["adaptiveCards"][0]["body"][
- 0
- ].get("text"):
- message = msg
- break
- if not message:
- raise Exception("No message found")
- suggestions = [
- suggestion["text"]
- for suggestion in message.get("suggestedResponses", [])
- ]
- adaptive_cards = message.get("adaptiveCards", [])
- adaptive_text = (
- adaptive_cards[0]["body"][0].get("text") if adaptive_cards else None
- )
- sources = (
- adaptive_cards[0]["body"][0].get("text") if adaptive_cards else None
- )
- sources_text = (
- adaptive_cards[0]["body"][-1].get("text")
- if adaptive_cards
- else None
- )
- return {
- "text": message["text"],
- "author": message["author"],
- "sources": sources,
- "sources_text": sources_text,
- "suggestions": suggestions,
- "messages_left": messages_left,
- "max_messages": response["item"]["throttling"][
- "maxNumUserMessagesInConversation"
- ],
- "adaptive_text": adaptive_text,
- }
- return {}
-
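The nested lookup documented in the `ask` docstring can be exercised against a minimal mock of the response payload; the dict below is fabricated for illustration only:

```python
response = {
    "item": {
        "messages": [
            {"text": "user turn"},
            {"adaptiveCards": [{"body": [{"text": "bot reply"}]}]},
        ]
    }
}
# The exact path spelled out in the docstring:
text = response["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"]
print(text)  # -> bot reply
```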
- async def ask_stream(
- self,
- prompt: str,
- wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- raw: bool = False,
- webpage_context: str | None = None,
- search_result: bool = False,
- locale: str = guess_locale(),
- ) -> Generator[bool, dict | str, None]:
- """
- Ask a question to the bot
- """
- async for response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- raw=raw,
- webpage_context=webpage_context,
- search_result=search_result,
- locale=locale,
- ):
- yield response
-
- async def close(self) -> None:
- """
- Close the connection
- """
- await self.chat_hub.close()
-
- async def delete_conversation(
- self,
- conversation_id: str = None,
- conversation_signature: str = None,
- client_id: str = None,
- ) -> None:
- """
- Delete the chat in the server
- """
- await self.chat_hub.delete_conversation(
- conversation_id=conversation_id,
- conversation_signature=conversation_signature,
- client_id=client_id,
- )
-
- async def reset(self, delete=False) -> None:
- """
- Reset the conversation
- """
- if delete:
- await self.remove_and_close()
- else:
- await self.close()
- self.chat_hub = ChatHub(
- await Conversation.create(self.proxy, cookies=self.chat_hub.cookies),
- proxy=self.proxy,
- cookies=self.chat_hub.cookies,
- )
-
-
-if __name__ == "__main__":
- from .main import main
-
- main()
diff --git a/spaces/Th3BossC/TranscriptApi/TranscriptApi/static/styles.css b/spaces/Th3BossC/TranscriptApi/TranscriptApi/static/styles.css
deleted file mode 100644
index b8722cbdc4dc340d7c00911a37d8ee10ce542861..0000000000000000000000000000000000000000
--- a/spaces/Th3BossC/TranscriptApi/TranscriptApi/static/styles.css
+++ /dev/null
@@ -1,125 +0,0 @@
-.dark {
- /* --bg : #353941; */
- --heading-bg : #26282B;
- --button-bg : #5F85DB;
- --button-hover-bg : #90B8F8;
- --text-color : white;
- --rev-text-color : black;
-
- --bg : url('images/background-dark.svg');
-}
-
-
-.light {
- /* --bg : #448EF6; */
- --heading-bg : #75C2F6;
- --button-bg : #65DAF7;
- --button-hover-bg : #FFE981;
- --text-color : black;
- --rev-text-color : white;
-
- --bg : url('images/background-light.svg');
-}
-
-nav {
- transition: all 200ms ease-in-out;
- transition-delay : 0ms;
-}
-
-body {
- background : var(--bg);
- background-size: cover;
- transition: background 200ms ease-in-out, color 1000ms ease-in-out;
- /* overflow: hidden; */
-}
-
-.grid {
- display: flex;
- flex-direction: column;
- flex-wrap: wrap;
- /* gap: 1rem; */
- grid-template-columns: minmax(240px, 1fr);
- grid-template-rows: 240px;
- margin : 10px;
- padding : 20px;
-}
-
-
-
-
-.heading {
- color : var(--text-color);
- margin : minmax(10px, 100px);
- padding: 50px;
- text-align: center;
- align-self: center;
- font-family: 'Open Sans', sans-serif;
- font-style: italic;
- font-weight: 800;
- /* background-color: var(--heading-bg); */
- border-radius: 8px;
- /* filter: drop-shadow(.3rem .3rem 4px black); */
- transition: all 100ms ease-in-out;
- transition-delay : 200ms;
-}
-
-.url-submit-form {
- padding : 50px;
- display: flex;
- flex-direction: column;
- align-items: center;
- justify-content: center;
-}
-
-input[type = 'text'] {
- text-align : center;
- border: none;
-}
-
-
-input[type = 'text']::placeholder {
- color: var(--text-color);
- opacity: 0.4;
-}
-
-.btn-primary {
- background-color : var(--button-bg) !important;
- border-color : var(--button-bg) !important;
- color : var(--text-color) !important;
-}
-
-.btn-primary:hover {
- background-color: var(--button-hover-bg) !important;
- border-color : var(--button-hover-bg) !important;
- color : black !important;
-
-}
-
-
-.text {
- /* grid-column : span 1 / auto; */
- color : var(--text-color);
- padding : 30px;
- border: 2px solid var(--rev-text-color);
- border-radius: 8px;
- backdrop-filter: blur(10px);
- clip-path: circle(0% at 50% 0%);
- transition : all 200ms ease-in-out, clip-path 500ms ease-in-out;
- transition-delay : 400ms;
-}
-
-
-.title {
- font-family :'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;
- font-weight : bold;
- font-size: large;
- text-align: center;
-}
-
-.content {
- font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;
- margin: 5px;
- padding : 10px;
- text-align: center;
-}
-
diff --git a/spaces/TopdeckingLands/Diffusion_Space/utils.py b/spaces/TopdeckingLands/Diffusion_Space/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/TopdeckingLands/Diffusion_Space/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
- except:
- return False
\ No newline at end of file
diff --git a/spaces/Toritto/Genshin-impact-IA-project-v1/config.py b/spaces/Toritto/Genshin-impact-IA-project-v1/config.py
deleted file mode 100644
index b6de7523991c6384178ad96b5fe0c8932c1b5688..0000000000000000000000000000000000000000
--- a/spaces/Toritto/Genshin-impact-IA-project-v1/config.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import argparse
-import sys
-import torch
-from multiprocessing import cpu_count
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.share,
- self.api,
- self.unsupported
- ) = self.arg_parse()
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--share", action="store_true", help="Launch with public link")
- parser.add_argument("--api", action="store_true", help="Launch with api")
- parser.add_argument("--unsupported", action="store_true", help="Enable unsupported feature")
- cmd_opts = parser.parse_args()
-
- return (
- cmd_opts.share,
- cmd_opts.api,
- cmd_opts.unsupported
- )
-
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("INFO: Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("INFO: Found GPU", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- elif self.has_mps():
- print("INFO: No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- else:
- print("INFO: No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # settings for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # settings for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
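The precision/VRAM tiers at the end of `device_config` reduce to a small pure function. A sketch with an invented name, mirroring the original constants:

```python
def vram_tier(is_half, gpu_mem=None):
    # fp16 allows larger chunks; low-memory GPUs get the smallest tier.
    x_pad, x_query, x_center, x_max = (3, 10, 60, 65) if is_half else (1, 6, 38, 41)
    if gpu_mem is not None and gpu_mem <= 4:
        x_pad, x_query, x_center, x_max = 1, 5, 30, 32
    return x_pad, x_query, x_center, x_max

print(vram_tier(True))      # -> (3, 10, 60, 65)
print(vram_tier(False, 4))  # -> (1, 5, 30, 32)
```

Note that, as in the original, the low-memory override applies regardless of precision.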
diff --git a/spaces/Trangluna2002/AI_Cover_Gen/src/main.py b/spaces/Trangluna2002/AI_Cover_Gen/src/main.py
deleted file mode 100644
index a0dc7d0d119562c55bb0789aee902aea7b854648..0000000000000000000000000000000000000000
--- a/spaces/Trangluna2002/AI_Cover_Gen/src/main.py
+++ /dev/null
@@ -1,355 +0,0 @@
-import argparse
-import gc
-import hashlib
-import json
-import os
-import shlex
-import subprocess
-from contextlib import suppress
-from urllib.parse import urlparse, parse_qs
-
-import gradio as gr
-import librosa
-import numpy as np
-import soundfile as sf
-import sox
-import yt_dlp
-from pedalboard import Pedalboard, Reverb, Compressor, HighpassFilter
-from pedalboard.io import AudioFile
-from pydub import AudioSegment
-
-from mdx import run_mdx
-from rvc import Config, load_hubert, get_vc, rvc_infer
-
-BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-mdxnet_models_dir = os.path.join(BASE_DIR, 'mdxnet_models')
-rvc_models_dir = os.path.join(BASE_DIR, 'rvc_models')
-output_dir = os.path.join(BASE_DIR, 'song_output')
-
-
-def get_youtube_video_id(url, ignore_playlist=True):
- """
- Examples:
- http://youtu.be/SA2iWivDJiE
- http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu
- http://www.youtube.com/embed/SA2iWivDJiE
- http://www.youtube.com/v/SA2iWivDJiE?version=3&hl=en_US
- """
- query = urlparse(url)
- if query.hostname == 'youtu.be':
- if query.path[1:] == 'watch':
- return query.query[2:]
- return query.path[1:]
-
- if query.hostname in {'www.youtube.com', 'youtube.com', 'music.youtube.com'}:
- if not ignore_playlist:
- # use case: get playlist id not current video in playlist
- with suppress(KeyError):
- return parse_qs(query.query)['list'][0]
- if query.path == '/watch':
- return parse_qs(query.query)['v'][0]
- if query.path[:7] == '/watch/':
- return query.path.split('/')[1]
- if query.path[:7] == '/embed/':
- return query.path.split('/')[2]
- if query.path[:3] == '/v/':
- return query.path.split('/')[2]
-
- # returns None for invalid YouTube url
- return None
-
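The branches above are all built on `urllib.parse`; a quick stdlib check of the most common case (standard watch URL):

```python
from urllib.parse import urlparse, parse_qs

url = "http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu"
query = urlparse(url)
# Same lookup the '/watch' branch performs:
video_id = parse_qs(query.query)["v"][0]
print(video_id)  # -> _oPAwA_Udwc
```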
-
-def yt_download(link):
- ydl_opts = {
- 'format': 'bestaudio',
- 'outtmpl': '%(title)s',
- 'nocheckcertificate': True,
- 'ignoreerrors': True,
- 'no_warnings': True,
- 'quiet': True,
- 'extractaudio': True,
- 'postprocessors': [{'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3'}],
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- result = ydl.extract_info(link, download=True)
- download_path = ydl.prepare_filename(result, outtmpl='%(title)s.mp3')
-
- return download_path
-
-
-def raise_exception(error_msg, is_webui):
- if is_webui:
- raise gr.Error(error_msg)
- else:
- raise Exception(error_msg)
-
-
-def get_rvc_model(voice_model, is_webui):
- rvc_model_filename, rvc_index_filename = None, None
- model_dir = os.path.join(rvc_models_dir, voice_model)
- for file in os.listdir(model_dir):
- ext = os.path.splitext(file)[1]
- if ext == '.pth':
- rvc_model_filename = file
- if ext == '.index':
- rvc_index_filename = file
-
- if rvc_model_filename is None:
- error_msg = f'No model file exists in {model_dir}.'
- raise_exception(error_msg, is_webui)
-
- return os.path.join(model_dir, rvc_model_filename), os.path.join(model_dir, rvc_index_filename) if rvc_index_filename else ''
-
-
-def get_audio_paths(song_dir):
- orig_song_path = None
- instrumentals_path = None
- main_vocals_dereverb_path = None
- backup_vocals_path = None
-
- for file in os.listdir(song_dir):
- if file.endswith('_Instrumental.wav'):
- instrumentals_path = os.path.join(song_dir, file)
- orig_song_path = instrumentals_path.replace('_Instrumental', '')
-
- elif file.endswith('_Vocals_Main_DeReverb.wav'):
- main_vocals_dereverb_path = os.path.join(song_dir, file)
-
- elif file.endswith('_Vocals_Backup.wav'):
- backup_vocals_path = os.path.join(song_dir, file)
-
- return orig_song_path, instrumentals_path, main_vocals_dereverb_path, backup_vocals_path
-
-
-def convert_to_stereo(audio_path):
- wave, sr = librosa.load(audio_path, mono=False, sr=44100)
-
- # check if mono
- if not isinstance(wave[0], np.ndarray):
- stereo_path = f'{os.path.splitext(audio_path)[0]}_stereo.wav'
- command = shlex.split(f'ffmpeg -y -loglevel error -i "{audio_path}" -ac 2 -f wav "{stereo_path}"')
- subprocess.run(command)
- return stereo_path
- else:
- return audio_path
-
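`convert_to_stereo` relies on `shlex.split` so that quoted paths with spaces survive as single arguments for `subprocess.run`. A stdlib demonstration with a made-up filename:

```python
import shlex

cmd = shlex.split('ffmpeg -y -loglevel error -i "my song.wav" -ac 2 -f wav "out.wav"')
# Quotes are stripped but the embedded space is preserved in one token.
print(cmd[5])  # -> my song.wav
```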
-
-def pitch_shift(audio_path, pitch_change):
- output_path = f'{os.path.splitext(audio_path)[0]}_p{pitch_change}.wav'
- if not os.path.exists(output_path):
- y, sr = sf.read(audio_path)
- tfm = sox.Transformer()
- tfm.pitch(pitch_change)
- y_shifted = tfm.build_array(input_array=y, sample_rate_in=sr)
- sf.write(output_path, y_shifted, sr)
-
- return output_path
-
-
-def get_hash(filepath):
- with open(filepath, 'rb') as f:
- file_hash = hashlib.blake2b()
- while chunk := f.read(8192):
- file_hash.update(chunk)
-
- return file_hash.hexdigest()[:11]
-
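`get_hash` streams the file through BLAKE2b in 8 KiB chunks and keeps the first 11 hex characters. The same digest can be computed incrementally in memory, which shows that chunked and one-shot hashing agree (stdlib only; the helper name is invented):

```python
import hashlib

def short_hash(data, chunk_size=8192):
    h = hashlib.blake2b()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()[:11]

one_shot = hashlib.blake2b(b"x" * 20000).hexdigest()[:11]
print(short_hash(b"x" * 20000) == one_shot)  # -> True
```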
-
-def display_progress(message, percent, is_webui, progress=None):
- if is_webui:
- progress(percent, desc=message)
- else:
- print(message)
-
-
-def preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress=None):
- keep_orig = False
- if input_type == 'yt':
- display_progress('[~] Downloading song...', 0, is_webui, progress)
- song_link = song_input.split('&')[0]
- orig_song_path = yt_download(song_link)
- elif input_type == 'local':
- orig_song_path = song_input
- keep_orig = True
- else:
- orig_song_path = None
-
- song_output_dir = os.path.join(output_dir, song_id)
- orig_song_path = convert_to_stereo(orig_song_path)
-
- display_progress('[~] Separating Vocals from Instrumental...', 0.1, is_webui, progress)
- vocals_path, instrumentals_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'UVR-MDX-NET-Voc_FT.onnx'), orig_song_path, denoise=True, keep_orig=keep_orig)
-
- display_progress('[~] Separating Main Vocals from Backup Vocals...', 0.2, is_webui, progress)
- backup_vocals_path, main_vocals_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'UVR_MDXNET_KARA_2.onnx'), vocals_path, suffix='Backup', invert_suffix='Main', denoise=True)
-
- display_progress('[~] Applying DeReverb to Vocals...', 0.3, is_webui, progress)
- _, main_vocals_dereverb_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'Reverb_HQ_By_FoxJoy.onnx'), main_vocals_path, invert_suffix='DeReverb', exclude_main=True, denoise=True)
-
- return orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path
-
-
-def voice_change(voice_model, vocals_path, output_path, pitch_change, f0_method, index_rate, filter_radius, rms_mix_rate, protect, crepe_hop_length, is_webui):
-    rvc_model_path, rvc_index_path = get_rvc_model(voice_model, is_webui)
-    device = 'cpu'
-    config = Config(device, False)
-    hubert_model = load_hubert(device, config.is_half, os.path.join(rvc_models_dir, 'hubert_base.pt'))
-    cpt, version, net_g, tgt_sr, vc = get_vc(device, False, config, rvc_model_path)
-
-    # convert main vocals
-    rvc_infer(rvc_index_path, index_rate, vocals_path, output_path, pitch_change, f0_method, cpt, version, net_g, filter_radius, tgt_sr, rms_mix_rate, protect, crepe_hop_length, vc, hubert_model)
-    del hubert_model, cpt
-    gc.collect()
-
-
-def add_audio_effects(audio_path, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping):
-    output_path = f'{os.path.splitext(audio_path)[0]}_mixed.wav'
-
-    # Initialize audio effects plugins
-    board = Pedalboard(
-        [
-            HighpassFilter(),
-            Compressor(ratio=4, threshold_db=-15),
-            Reverb(room_size=reverb_rm_size, dry_level=reverb_dry, wet_level=reverb_wet, damping=reverb_damping)
-        ]
-    )
-
-    with AudioFile(audio_path) as f:
-        with AudioFile(output_path, 'w', f.samplerate, f.num_channels) as o:
-            # Read one second of audio at a time, until the file is empty:
-            while f.tell() < f.frames:
-                chunk = f.read(int(f.samplerate))
-                effected = board(chunk, f.samplerate, reset=False)
-                o.write(effected)
-
-    return output_path
-
-
-def combine_audio(audio_paths, output_path, main_gain, backup_gain, inst_gain, output_format):
-    main_vocal_audio = AudioSegment.from_wav(audio_paths[0]) - 4 + main_gain
-    backup_vocal_audio = AudioSegment.from_wav(audio_paths[1]) - 6 + backup_gain
-    instrumental_audio = AudioSegment.from_wav(audio_paths[2]) - 7 + inst_gain
-    main_vocal_audio.overlay(backup_vocal_audio).overlay(instrumental_audio).export(output_path, format=output_format)
-
-
-def song_cover_pipeline(song_input, voice_model, pitch_change, keep_files,
-                        is_webui=0, main_gain=0, backup_gain=0, inst_gain=0, index_rate=0.5, filter_radius=3,
-                        rms_mix_rate=0.25, f0_method='rmvpe', crepe_hop_length=128, protect=0.33, pitch_change_all=0,
-                        reverb_rm_size=0.15, reverb_wet=0.2, reverb_dry=0.8, reverb_damping=0.7, output_format='mp3',
-                        progress=gr.Progress()):
-    try:
-        if not song_input or not voice_model:
-            raise_exception('Ensure that the song input and voice model fields are filled.', is_webui)
-
-        display_progress('[~] Starting AI Cover Generation Pipeline...', 0, is_webui, progress)
-
-        with open(os.path.join(mdxnet_models_dir, 'model_data.json')) as infile:
-            mdx_model_params = json.load(infile)
-
-        # if youtube url
-        if urlparse(song_input).scheme == 'https':
-            input_type = 'yt'
-            song_id = get_youtube_video_id(song_input)
-            if song_id is None:
-                error_msg = 'Invalid YouTube url.'
-                raise_exception(error_msg, is_webui)
-
-        # local audio file
-        else:
-            input_type = 'local'
-            song_input = song_input.strip('\"')
-            if os.path.exists(song_input):
-                song_id = get_hash(song_input)
-            else:
-                error_msg = f'{song_input} does not exist.'
-                song_id = None
-                raise_exception(error_msg, is_webui)
-
-        song_dir = os.path.join(output_dir, song_id)
-
-        if not os.path.exists(song_dir):
-            os.makedirs(song_dir)
-            orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path = preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress)
-
-        else:
-            vocals_path, main_vocals_path = None, None
-            paths = get_audio_paths(song_dir)
-
-            # rerun preprocessing if any intermediate audio file is missing, or if intermediate files should be kept
-            if any(path is None for path in paths) or keep_files:
-                orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path = preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress)
-            else:
-                orig_song_path, instrumentals_path, main_vocals_dereverb_path, backup_vocals_path = paths
-
-        pitch_change = pitch_change * 12 + pitch_change_all
-        ai_vocals_path = os.path.join(song_dir, f'{os.path.splitext(os.path.basename(orig_song_path))[0]}_{voice_model}_p{pitch_change}_i{index_rate}_fr{filter_radius}_rms{rms_mix_rate}_pro{protect}_{f0_method}{"" if f0_method != "mangio-crepe" else f"_{crepe_hop_length}"}.wav')
-        ai_cover_path = os.path.join(song_dir, f'{os.path.splitext(os.path.basename(orig_song_path))[0]} ({voice_model} Ver).{output_format}')
-
-        if not os.path.exists(ai_vocals_path):
-            display_progress('[~] Converting voice using RVC...', 0.5, is_webui, progress)
-            voice_change(voice_model, main_vocals_dereverb_path, ai_vocals_path, pitch_change, f0_method, index_rate, filter_radius, rms_mix_rate, protect, crepe_hop_length, is_webui)
-
-        display_progress('[~] Applying audio effects to Vocals...', 0.8, is_webui, progress)
-        ai_vocals_mixed_path = add_audio_effects(ai_vocals_path, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping)
-
-        if pitch_change_all != 0:
-            display_progress('[~] Applying overall pitch change', 0.85, is_webui, progress)
-            instrumentals_path = pitch_shift(instrumentals_path, pitch_change_all)
-            backup_vocals_path = pitch_shift(backup_vocals_path, pitch_change_all)
-
-        display_progress('[~] Combining AI Vocals and Instrumentals...', 0.9, is_webui, progress)
-        combine_audio([ai_vocals_mixed_path, backup_vocals_path, instrumentals_path], ai_cover_path, main_gain, backup_gain, inst_gain, output_format)
-
-        if not keep_files:
-            display_progress('[~] Removing intermediate audio files...', 0.95, is_webui, progress)
-            intermediate_files = [vocals_path, main_vocals_path, ai_vocals_mixed_path]
-            if pitch_change_all != 0:
-                intermediate_files += [instrumentals_path, backup_vocals_path]
-            for file in intermediate_files:
-                if file and os.path.exists(file):
-                    os.remove(file)
-
-        return ai_cover_path
-
-    except Exception as e:
-        raise_exception(str(e), is_webui)
-
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser(description='Generate an AI cover song in the song_output/id directory.', add_help=True)
-    parser.add_argument('-i', '--song-input', type=str, required=True, help='Link to a YouTube video or the filepath to a local mp3/wav file to create an AI cover of')
-    parser.add_argument('-dir', '--rvc-dirname', type=str, required=True, help='Name of the folder in the rvc_models directory containing the RVC model file and optional index file to use')
-    parser.add_argument('-p', '--pitch-change', type=int, required=True, help='Change the pitch of AI Vocals only. Generally, use 1 for male to female and -1 for vice-versa. (Octaves)')
-    parser.add_argument('-k', '--keep-files', action=argparse.BooleanOptionalAction, help='Whether to keep all intermediate audio files generated in the song_output/id directory, e.g. Isolated Vocals/Instrumentals')
-    parser.add_argument('-ir', '--index-rate', type=float, default=0.5, help='A decimal number e.g. 0.5, used to reduce/resolve the timbre leakage problem. If set to 1, more biased towards the timbre quality of the training dataset')
-    parser.add_argument('-fr', '--filter-radius', type=int, default=3, help='A number between 0 and 7. If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.')
-    parser.add_argument('-rms', '--rms-mix-rate', type=float, default=0.25, help="A decimal number e.g. 0.25. Control how much to use the original vocal's loudness (0) or a fixed loudness (1).")
-    parser.add_argument('-palgo', '--pitch-detection-algo', type=str, default='rmvpe', help='Best option is rmvpe (clarity in vocals), then mangio-crepe (smoother vocals).')
-    parser.add_argument('-hop', '--crepe-hop-length', type=int, default=128, help='If pitch detection algo is mangio-crepe, controls how often it checks for pitch changes in milliseconds. The higher the value, the faster the conversion and less risk of voice cracks, but there is less pitch accuracy. Recommended: 128.')
-    parser.add_argument('-pro', '--protect', type=float, default=0.33, help='A decimal number e.g. 0.33. Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy.')
-    parser.add_argument('-mv', '--main-vol', type=int, default=0, help='Volume change for AI main vocals in decibels. Use -3 to decrease by 3 decibels and 3 to increase by 3 decibels')
-    parser.add_argument('-bv', '--backup-vol', type=int, default=0, help='Volume change for backup vocals in decibels')
-    parser.add_argument('-iv', '--inst-vol', type=int, default=0, help='Volume change for instrumentals in decibels')
-    parser.add_argument('-pall', '--pitch-change-all', type=int, default=0, help='Change the pitch/key of vocals and instrumentals. Changing this slightly reduces sound quality')
-    parser.add_argument('-rsize', '--reverb-size', type=float, default=0.15, help='Reverb room size between 0 and 1')
-    parser.add_argument('-rwet', '--reverb-wetness', type=float, default=0.2, help='Reverb wet level between 0 and 1')
-    parser.add_argument('-rdry', '--reverb-dryness', type=float, default=0.8, help='Reverb dry level between 0 and 1')
-    parser.add_argument('-rdamp', '--reverb-damping', type=float, default=0.7, help='Reverb damping between 0 and 1')
-    parser.add_argument('-oformat', '--output-format', type=str, default='mp3', help='Output format of audio file. mp3 for smaller file size, wav for best quality')
-    args = parser.parse_args()
-
-    rvc_dirname = args.rvc_dirname
-    if not os.path.exists(os.path.join(rvc_models_dir, rvc_dirname)):
-        raise Exception(f'The folder {os.path.join(rvc_models_dir, rvc_dirname)} does not exist.')
-
-    cover_path = song_cover_pipeline(args.song_input, rvc_dirname, args.pitch_change, args.keep_files,
-                                     main_gain=args.main_vol, backup_gain=args.backup_vol, inst_gain=args.inst_vol,
-                                     index_rate=args.index_rate, filter_radius=args.filter_radius,
-                                     rms_mix_rate=args.rms_mix_rate, f0_method=args.pitch_detection_algo,
-                                     crepe_hop_length=args.crepe_hop_length, protect=args.protect,
-                                     pitch_change_all=args.pitch_change_all,
-                                     reverb_rm_size=args.reverb_size, reverb_wet=args.reverb_wetness,
-                                     reverb_dry=args.reverb_dryness, reverb_damping=args.reverb_damping,
-                                     output_format=args.output_format)
-    print(f'[+] Cover generated at {cover_path}')
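The pipeline above caches converted vocals by encoding every inference parameter into the output filename, so a rerun with identical settings skips the RVC step. That derivation can be sketched in isolation (a minimal stdlib-only sketch; `ai_vocals_filename` is a hypothetical helper extracted for illustration, not a function in the repository):

```python
import os

def ai_vocals_filename(orig_song_path, voice_model, pitch_change_octaves,
                       pitch_change_all, index_rate, filter_radius,
                       rms_mix_rate, protect, f0_method, crepe_hop_length):
    # The per-vocals octave shift is converted to semitones, then the
    # whole-song key change (already in semitones) is added on top.
    pitch_change = pitch_change_octaves * 12 + pitch_change_all
    base = os.path.splitext(os.path.basename(orig_song_path))[0]
    # The crepe hop length only matters for mangio-crepe, so it is only
    # encoded into the name for that algorithm.
    hop = '' if f0_method != 'mangio-crepe' else f'_{crepe_hop_length}'
    return (f'{base}_{voice_model}_p{pitch_change}_i{index_rate}'
            f'_fr{filter_radius}_rms{rms_mix_rate}_pro{protect}'
            f'_{f0_method}{hop}.wav')

print(ai_vocals_filename('song.mp3', 'Voice', 1, 0, 0.5, 3, 0.25, 0.33, 'rmvpe', 128))
# song_Voice_p12_i0.5_fr3_rms0.25_pro0.33_rmvpe.wav
```

Because the name changes whenever any parameter changes, `os.path.exists(ai_vocals_path)` doubles as the cache-hit check.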
diff --git a/spaces/Writer/token-counter/app.py b/spaces/Writer/token-counter/app.py
deleted file mode 100644
index 0eaff47f6b43dff7948f55df107d83c3bb1e9e2a..0000000000000000000000000000000000000000
--- a/spaces/Writer/token-counter/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from transformers import AutoTokenizer
-import gradio as gr
-
-
-tokenizer = AutoTokenizer.from_pretrained("kiranr/gpt2-tokenizer")
-
-def tokenize(input_text):
- tokens = tokenizer(input_text)["input_ids"]
- return f"Number of tokens: {len(tokens)}"
-
-
-demo = gr.Interface(
- fn=tokenize,
- inputs=gr.Textbox(lines=7),
- outputs="text",
-)
-demo.launch()
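The token-counter app above is a thin wrapper around the tokenizer's `input_ids`. Its counting logic can be exercised without downloading the GPT-2 tokenizer by passing in any callable with the same return shape (the `stub` below is a hypothetical whitespace splitter standing in for `AutoTokenizer`; real token counts from BPE would differ):

```python
def tokenize(input_text, tokenizer):
    # Mirrors the app's logic: count the ids the tokenizer produces.
    tokens = tokenizer(input_text)["input_ids"]
    return f"Number of tokens: {len(tokens)}"

# Hypothetical stub: splits on whitespace instead of GPT-2 BPE,
# just to exercise the wrapper without the model download.
stub = lambda text: {"input_ids": text.split()}
print(tokenize("hello world from gradio", stub))  # Number of tokens: 4
```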
diff --git a/spaces/Xenova/react-translator/assets/worker-22715bb5.js b/spaces/Xenova/react-translator/assets/worker-22715bb5.js
deleted file mode 100644
index c8469fbe724c110faacf1a35e0bc6fe8f1aee06a..0000000000000000000000000000000000000000
--- a/spaces/Xenova/react-translator/assets/worker-22715bb5.js
+++ /dev/null
@@ -1,1790 +0,0 @@
Date(1e3*R(T)),p()[E>>2>>>0]=T.getSeconds(),p()[E+4>>2>>>0]=T.getMinutes(),p()[E+8>>2>>>0]=T.getHours(),p()[E+12>>2>>>0]=T.getDate(),p()[E+16>>2>>>0]=T.getMonth(),p()[E+20>>2>>>0]=T.getFullYear()-1900,p()[E+24>>2>>>0]=T.getDay();var k=new Date(T.getFullYear(),0,1),C=(T.getTime()-k.getTime())/864e5|0;p()[E+28>>2>>>0]=C,p()[E+36>>2>>>0]=-60*T.getTimezoneOffset(),C=new Date(T.getFullYear(),6,1).getTimezoneOffset(),T=0|(C!=(k=k.getTimezoneOffset())&&T.getTimezoneOffset()==Math.min(k,C)),p()[E+32>>2>>>0]=T},Ra:function(T){var E=new Date(p()[T+20>>2>>>0]+1900,p()[T+16>>2>>>0],p()[T+12>>2>>>0],p()[T+8>>2>>>0],p()[T+4>>2>>>0],p()[T>>2>>>0],0),k=p()[T+32>>2>>>0],C=E.getTimezoneOffset(),B=new Date(E.getFullYear(),0,1),V=new Date(E.getFullYear(),6,1).getTimezoneOffset(),K=B.getTimezoneOffset(),ne=Math.min(K,V);return 0>k?p()[T+32>>2>>>0]=+(V!=K&&ne==C):0>2>>>0]=E.getDay(),k=(E.getTime()-B.getTime())/864e5|0,p()[T+28>>2>>>0]=k,p()[T>>2>>>0]=E.getSeconds(),p()[T+4>>2>>>0]=E.getMinutes(),p()[T+8>>2>>>0]=E.getHours(),p()[T+12>>2>>>0]=E.getDate(),p()[T+16>>2>>>0]=E.getMonth(),E.getTime()/1e3|0},Aa:U,Ba:W,Sa:function T(E,k,C){T.Ac||(T.Ac=!0,te(E,k,C))},y:function(){ge("")},U:function(){if(!x&&!O){var T="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";st||(st={}),st[T]||(st[T]=1,x&&(T="warning: "+T),j(T))}},ra:function(){return 4294901760},B:gt,Ia:function(T,E,k){h().copyWithin(T>>>0,E>>>0,E+k>>>0)},F:function(){return x?s(3993).cpus().length:navigator.hardwareConcurrency},Da:function(T,E,k){Se.length=E,k>>=3;for(var C=0;C>>0];return(0>T?It[-T-1]:pn[T]).apply(null,Se)},qa:function(T){var E=h().length;if((T>>>=0)<=E||4294901760=k;k*=2){var C=E*(1+.2/k);C=Math.min(C,T+100663296);var B=Math;C=Math.max(T,C),B=B.min.call(B,4294901760,C+(65536-C%65536)%65536);e:{try{X.grow(B-ee.byteLength+65535>>>16),Ee(X.buffer);var V=1;break e}catch{}V=void 
0}if(V)return!0}return!1},Na:function(){throw"unwind"},Ga:G,Ha:be,J:ot,I:Pe,S:We,ga:tt,R:zt,d:function(){return Be},na:function T(E,k){T.lc||(T.lc=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var B=new Uint8Array(1);return()=>(crypto.getRandomValues(B),B[0])}if(x)try{var V=s(Object(function(){var K=new Error("Cannot find module 'crypto'");throw K.code="MODULE_NOT_FOUND",K}()));return()=>V.randomBytes(1)[0]}catch{}return()=>ge("randomDevice")}());for(var C=0;C>0>>>0]=T.lc();return 0},ia:function(T,E,k){var C=de();try{return ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},ja:function(T,E,k){var C=de();try{return ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},K:function(T){var E=de();try{return ve(T)()}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},f:function(T,E){var k=de();try{return ve(T)(E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},P:function(T,E,k){var C=de();try{return ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},Q:function(T,E,k){var C=de();try{return ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},k:function(T,E,k){var C=de();try{return ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},p:function(T,E,k,C){var B=de();try{return ve(T)(E,k,C)}catch(V){if(ce(B),V!==V+0)throw V;he(1,0)}},q:function(T,E,k,C,B){var V=de();try{return ve(T)(E,k,C,B)}catch(K){if(ce(V),K!==K+0)throw K;he(1,0)}},N:function(T,E,k,C,B,V){var K=de();try{return ve(T)(E,k,C,B,V)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},s:function(T,E,k,C,B,V){var K=de();try{return ve(T)(E,k,C,B,V)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},w:function(T,E,k,C,B,V,K){var ne=de();try{return ve(T)(E,k,C,B,V,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},L:function(T,E,k,C,B,V,K,ne){var pe=de();try{return ve(T)(E,k,C,B,V,K,ne)}catch(me){if(ce(pe),me!==me+0)throw me;he(1,0)}},E:function(T,E,k,C,B,V,K,ne,pe,me,Me,Ze){var He=de();try{return ve(T)(E,k,C,B,V,K,ne,pe,me,Me,Ze)}catch(q){if(ce(He),q!==q+0)throw 
q;he(1,0)}},aa:function(T,E,k,C,B,V,K,ne){var pe=de();try{return ln(T,E,k,C,B,V,K,ne)}catch(me){if(ce(pe),me!==me+0)throw me;he(1,0)}},_:function(T,E,k,C,B,V,K){var ne=de();try{return tn(T,E,k,C,B,V,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},Z:function(T,E,k,C,B){var V=de();try{return cn(T,E,k,C,B)}catch(K){if(ce(V),K!==K+0)throw K;he(1,0)}},ca:function(T,E,k,C){var B=de();try{return an(T,E,k,C)}catch(V){if(ce(B),V!==V+0)throw V;he(1,0)}},$:function(T){var E=de();try{return en(T)}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},ba:function(T,E){var k=de();try{return un(T,E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},Y:function(T,E,k){var C=de();try{return nn(T,E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},g:function(T){var E=de();try{ve(T)()}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},r:function(T,E){var k=de();try{ve(T)(E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},i:function(T,E,k){var C=de();try{ve(T)(E,k)}catch(B){if(ce(C),B!==B+0)throw B;he(1,0)}},ha:function(T,E,k,C){var B=de();try{ve(T)(E,k,C)}catch(V){if(ce(B),V!==V+0)throw V;he(1,0)}},m:function(T,E,k,C){var B=de();try{ve(T)(E,k,C)}catch(V){if(ce(B),V!==V+0)throw V;he(1,0)}},v:function(T,E,k,C,B){var V=de();try{ve(T)(E,k,C,B)}catch(K){if(ce(V),K!==K+0)throw K;he(1,0)}},u:function(T,E,k,C,B,V){var K=de();try{ve(T)(E,k,C,B,V)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},O:function(T,E,k,C,B,V,K){var ne=de();try{ve(T)(E,k,C,B,V,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},A:function(T,E,k,C,B,V,K,ne){var pe=de();try{ve(T)(E,k,C,B,V,K,ne)}catch(me){if(ce(pe),me!==me+0)throw me;he(1,0)}},ka:function(T,E,k,C,B,V,K,ne,pe){var me=de();try{ve(T)(E,k,C,B,V,K,ne,pe)}catch(Me){if(ce(me),Me!==Me+0)throw Me;he(1,0)}},C:function(T,E,k,C,B,V,K,ne,pe,me,Me){var Ze=de();try{ve(T)(E,k,C,B,V,K,ne,pe,me,Me)}catch(He){if(ce(Ze),He!==He+0)throw He;he(1,0)}},D:function(T,E,k,C,B,V,K,ne,pe,me,Me,Ze,He,q,_e,De){var nt=de();try{ve(T)(E,k,C,B,V,K,ne,pe,me,Me,Ze,He,q,_e,De)}catch(pt){if(ce(nt),pt!==pt+0)throw 
pt;he(1,0)}},fa:function(T,E,k,C,B,V,K,ne){var pe=de();try{rn(T,E,k,C,B,V,K,ne)}catch(me){if(ce(pe),me!==me+0)throw me;he(1,0)}},da:function(T,E,k,C,B,V,K,ne,pe,me,Me,Ze){var He=de();try{sn(T,E,k,C,B,V,K,ne,pe,me,Me,Ze)}catch(q){if(ce(He),q!==q+0)throw q;he(1,0)}},ea:function(T,E,k,C,B,V){var K=de();try{on(T,E,k,C,B,V)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},o:function(T){return T},a:X||t.wasmMemory,G:function(T){Be=T},la:Gt,z:function(T,E,k,C){return Gt(T,E,k,C)}};(function(){function T(B,V){t.asm=B.exports,re.qc.push(t.asm.sb),ze=t.asm.ub,Ge.unshift(t.asm.Va),Q=V,I||(je--,t.monitorRunDependencies&&t.monitorRunDependencies(je),je==0&&Ke&&(B=Ke,Ke=null,B()))}function E(B){T(B.instance,B.module)}function k(B){return function(){if(!H&&(A||O)){if(typeof fetch=="function"&&!Ie.startsWith("file://"))return fetch(Ie,{credentials:"same-origin"}).then(function(V){if(!V.ok)throw"failed to load wasm binary file at '"+Ie+"'";return V.arrayBuffer()}).catch(function(){return ct()});if(d)return new Promise(function(V,K){d(Ie,function(ne){V(new Uint8Array(ne))},K)})}return Promise.resolve().then(function(){return ct()})}().then(function(V){return WebAssembly.instantiate(V,C)}).then(function(V){return V}).then(B,function(V){j("failed to asynchronously prepare wasm: "+V),ge(V)})}var C={a:fn};if(I||(je++,t.monitorRunDependencies&&t.monitorRunDependencies(je)),t.instantiateWasm)try{return t.instantiateWasm(C,T)}catch(B){return j("Module.instantiateWasm callback failed with error: "+B),!1}(H||typeof WebAssembly.instantiateStreaming!="function"||ft()||Ie.startsWith("file://")||x||typeof fetch!="function"?k(E):fetch(Ie,{credentials:"same-origin"}).then(function(B){return WebAssembly.instantiateStreaming(B,C).then(E,function(V){return j("wasm streaming compile failed: "+V),j("falling back to ArrayBuffer 
instantiation"),k(E)})})).catch(r)})(),t.___wasm_call_ctors=function(){return(t.___wasm_call_ctors=t.asm.Va).apply(null,arguments)},t._OrtInit=function(){return(t._OrtInit=t.asm.Wa).apply(null,arguments)},t._OrtCreateSessionOptions=function(){return(t._OrtCreateSessionOptions=t.asm.Xa).apply(null,arguments)},t._OrtAppendExecutionProvider=function(){return(t._OrtAppendExecutionProvider=t.asm.Ya).apply(null,arguments)},t._OrtAddSessionConfigEntry=function(){return(t._OrtAddSessionConfigEntry=t.asm.Za).apply(null,arguments)},t._OrtReleaseSessionOptions=function(){return(t._OrtReleaseSessionOptions=t.asm._a).apply(null,arguments)},t._OrtCreateSession=function(){return(t._OrtCreateSession=t.asm.$a).apply(null,arguments)},t._OrtReleaseSession=function(){return(t._OrtReleaseSession=t.asm.ab).apply(null,arguments)},t._OrtGetInputCount=function(){return(t._OrtGetInputCount=t.asm.bb).apply(null,arguments)},t._OrtGetOutputCount=function(){return(t._OrtGetOutputCount=t.asm.cb).apply(null,arguments)},t._OrtGetInputName=function(){return(t._OrtGetInputName=t.asm.db).apply(null,arguments)},t._OrtGetOutputName=function(){return(t._OrtGetOutputName=t.asm.eb).apply(null,arguments)},t._OrtFree=function(){return(t._OrtFree=t.asm.fb).apply(null,arguments)},t._OrtCreateTensor=function(){return(t._OrtCreateTensor=t.asm.gb).apply(null,arguments)},t._OrtGetTensorData=function(){return(t._OrtGetTensorData=t.asm.hb).apply(null,arguments)},t._OrtReleaseTensor=function(){return(t._OrtReleaseTensor=t.asm.ib).apply(null,arguments)},t._OrtCreateRunOptions=function(){return(t._OrtCreateRunOptions=t.asm.jb).apply(null,arguments)},t._OrtAddRunConfigEntry=function(){return(t._OrtAddRunConfigEntry=t.asm.kb).apply(null,arguments)},t._OrtReleaseRunOptions=function(){return(t._OrtReleaseRunOptions=t.asm.lb).apply(null,arguments)},t._OrtRun=function(){return(t._OrtRun=t.asm.mb).apply(null,arguments)},t._OrtEndProfiling=function(){return(t._OrtEndProfiling=t.asm.nb).apply(null,arguments)};var 
Ct=t._pthread_self=function(){return(Ct=t._pthread_self=t.asm.ob).apply(null,arguments)},Rt=t._malloc=function(){return(Rt=t._malloc=t.asm.pb).apply(null,arguments)},qt=t._free=function(){return(qt=t._free=t.asm.qb).apply(null,arguments)},Wt=t._fflush=function(){return(Wt=t._fflush=t.asm.rb).apply(null,arguments)};t.__emscripten_tls_init=function(){return(t.__emscripten_tls_init=t.asm.sb).apply(null,arguments)};var Ht=t.___funcs_on_exit=function(){return(Ht=t.___funcs_on_exit=t.asm.tb).apply(null,arguments)},Xt=t.__emscripten_thread_init=function(){return(Xt=t.__emscripten_thread_init=t.asm.vb).apply(null,arguments)};t.__emscripten_thread_crashed=function(){return(t.__emscripten_thread_crashed=t.asm.wb).apply(null,arguments)};var $t,Yt=t._emscripten_run_in_main_runtime_thread_js=function(){return(Yt=t._emscripten_run_in_main_runtime_thread_js=t.asm.xb).apply(null,arguments)},Kt=t.__emscripten_proxy_execute_task_queue=function(){return(Kt=t.__emscripten_proxy_execute_task_queue=t.asm.yb).apply(null,arguments)},jt=t.__emscripten_thread_free_data=function(){return(jt=t.__emscripten_thread_free_data=t.asm.zb).apply(null,arguments)},Zt=t.__emscripten_thread_exit=function(){return(Zt=t.__emscripten_thread_exit=t.asm.Ab).apply(null,arguments)},he=t._setThrew=function(){return(he=t._setThrew=t.asm.Bb).apply(null,arguments)},Jt=t._emscripten_stack_set_limits=function(){return(Jt=t._emscripten_stack_set_limits=t.asm.Cb).apply(null,arguments)},de=t.stackSave=function(){return(de=t.stackSave=t.asm.Db).apply(null,arguments)},ce=t.stackRestore=function(){return(ce=t.stackRestore=t.asm.Eb).apply(null,arguments)},Bt=t.stackAlloc=function(){return(Bt=t.stackAlloc=t.asm.Fb).apply(null,arguments)},Ft=t.___cxa_can_catch=function(){return(Ft=t.___cxa_can_catch=t.asm.Gb).apply(null,arguments)},Qt=t.___cxa_is_pointer_type=function(){return(Qt=t.___cxa_is_pointer_type=t.asm.Hb).apply(null,arguments)},en=t.dynCall_j=function(){return(en=t.dynCall_j=t.asm.Ib).apply(null,arguments)},tn=t.dynC
all_iiiiij=function(){return(tn=t.dynCall_iiiiij=t.asm.Jb).apply(null,arguments)},nn=t.dynCall_jii=function(){return(nn=t.dynCall_jii=t.asm.Kb).apply(null,arguments)},rn=t.dynCall_viiiiij=function(){return(rn=t.dynCall_viiiiij=t.asm.Lb).apply(null,arguments)},on=t.dynCall_vjji=function(){return(on=t.dynCall_vjji=t.asm.Mb).apply(null,arguments)},sn=t.dynCall_viiijjjii=function(){return(sn=t.dynCall_viiijjjii=t.asm.Nb).apply(null,arguments)},an=t.dynCall_iij=function(){return(an=t.dynCall_iij=t.asm.Ob).apply(null,arguments)},un=t.dynCall_ji=function(){return(un=t.dynCall_ji=t.asm.Pb).apply(null,arguments)},ln=t.dynCall_iiiiiij=function(){return(ln=t.dynCall_iiiiiij=t.asm.Qb).apply(null,arguments)},cn=t.dynCall_iiij=function(){return(cn=t.dynCall_iiij=t.asm.Rb).apply(null,arguments)};function dn(){function T(){if(!$t&&($t=!0,t.calledRun=!0,!ye)&&(I||it(Ge),e(t),t.onRuntimeInitialized&&t.onRuntimeInitialized(),!I)){if(t.postRun)for(typeof t.postRun=="function"&&(t.postRun=[t.postRun]);t.postRun.length;){var E=t.postRun.shift();Je.unshift(E)}it(Je)}}if(!(0{var c,l=(c=(c=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(f){var a,h,p;f=f||{},a||(a=f!==void 0?f:{}),a.ready=new Promise(function(P,D){h=P,p=D});var u,o,t,e,r,i,d=Object.assign({},a),g="./this.program",m=(P,D)=>{throw D},b=typeof window=="object",y=typeof importScripts=="function",w=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",v="";w?(v=y?s(908).dirname(v)+"/":"//",i=()=>{r||(e=s(1384),r=s(908))},u=function(P,D){return i(),P=r.normalize(P),e.readFileSync(P,D?void 0:"utf8")},t=P=>((P=u(P,!0)).buffer||(P=new Uint8Array(P)),P),o=(P,D,F)=>{i(),P=r.normalize(P),e.readFile(P,function(R,U){R?F(R):D(U.buffer)})},1{if(x||0{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.send(null),D.responseText},y&&(t=P=>{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.responseType="arraybuffer",D.send(null),new 
Uint8Array(D.response)}),o=(P,D,F)=>{var R=new XMLHttpRequest;R.open("GET",P,!0),R.responseType="arraybuffer",R.onload=()=>{R.status==200||R.status==0&&R.response?D(R.response):F()},R.onerror=F,R.send(null)});var S,A=a.print||console.log.bind(console),O=a.printErr||console.warn.bind(console);Object.assign(a,d),d=null,a.thisProgram&&(g=a.thisProgram),a.quit&&(m=a.quit),a.wasmBinary&&(S=a.wasmBinary);var x=a.noExitRuntime||!1;typeof WebAssembly!="object"&&Ee("no native wasm support detected");var I,$,z,L,N,H,M=!1,j=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function Z(P,D,F){var R=(D>>>=0)+F;for(F=D;P[F]&&!(F>=R);)++F;if(16(U=(240&U)==224?(15&U)<<12|W<<6|Y:(7&U)<<18|W<<12|Y<<6|63&P[D++])?R+=String.fromCharCode(U):(U-=65536,R+=String.fromCharCode(55296|U>>10,56320|1023&U))}}else R+=String.fromCharCode(U)}return R}function X(P,D){return(P>>>=0)?Z(L,P,D):""}function Q(P,D,F,R){if(!(0>>=0;R=F+R-1;for(var W=0;W=Y&&(Y=65536+((1023&Y)<<10)|1023&P.charCodeAt(++W)),127>=Y){if(F>=R)break;D[F++>>>0]=Y}else{if(2047>=Y){if(F+1>=R)break;D[F++>>>0]=192|Y>>6}else{if(65535>=Y){if(F+2>=R)break;D[F++>>>0]=224|Y>>12}else{if(F+3>=R)break;D[F++>>>0]=240|Y>>18,D[F++>>>0]=128|Y>>12&63}D[F++>>>0]=128|Y>>6&63}D[F++>>>0]=128|63&Y}}return D[F>>>0]=0,F-U}function ee(P){for(var D=0,F=0;F=R?D++:2047>=R?D+=2:55296<=R&&57343>=R?(D+=4,++F):D+=3}return D}function ue(){var P=I.buffer;$=P,a.HEAP8=z=new Int8Array(P),a.HEAP16=new Int16Array(P),a.HEAP32=N=new Int32Array(P),a.HEAPU8=L=new Uint8Array(P),a.HEAPU16=new Uint16Array(P),a.HEAPU32=H=new Uint32Array(P),a.HEAPF32=new Float32Array(P),a.HEAPF64=new Float64Array(P)}var Ae,xe=[],oe=[],we=[],ye=[],ke=0;function Ne(){var P=a.preRun.shift();xe.unshift(P)}var Te,$e=0,Ce=null;function Ee(P){throw a.onAbort&&a.onAbort(P),O(P="Aborted("+P+")"),M=!0,P=new WebAssembly.RuntimeError(P+". 
Build with -sASSERTIONS for more info."),p(P),P}function Oe(){return Te.startsWith("data:application/octet-stream;base64,")}if(Te="ort-wasm.wasm",!Oe()){var ze=Te;Te=a.locateFile?a.locateFile(ze,v):v+ze}function Ve(){var P=Te;try{if(P==Te&&S)return new Uint8Array(S);if(t)return t(P);throw"both async and sync fetching of the wasm failed"}catch(D){Ee(D)}}function Ge(P){this.name="ExitStatus",this.message="Program terminated with exit("+P+")",this.status=P}function Ye(P){for(;0>2>>>0]=D},this.Eb=function(){return H[this.zb+4>>2>>>0]},this.Sb=function(D){H[this.zb+8>>2>>>0]=D},this.Wb=function(){return H[this.zb+8>>2>>>0]},this.Tb=function(){N[this.zb>>2>>>0]=0},this.Ib=function(D){z[this.zb+12>>0>>>0]=D?1:0},this.Pb=function(){return z[this.zb+12>>0>>>0]!=0},this.Jb=function(D){z[this.zb+13>>0>>>0]=D?1:0},this.Lb=function(){return z[this.zb+13>>0>>>0]!=0},this.Rb=function(D,F){this.Fb(0),this.Ub(D),this.Sb(F),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){N[this.zb>>2>>>0]+=1},this.Xb=function(){var D=N[this.zb>>2>>>0];return N[this.zb>>2>>>0]=D-1,D===1},this.Fb=function(D){H[this.zb+16>>2>>>0]=D},this.Ob=function(){return H[this.zb+16>>2>>>0]},this.Qb=function(){if(bt(this.Eb()))return H[this.Db>>2>>>0];var D=this.Ob();return D!==0?D:this.Db}}function je(P){return st(new Ie(P).zb)}var Ke=[];function ge(P){var D=Ke[P];return D||(P>=Ke.length&&(Ke.length=P+1),Ke[P]=D=Ae.get(P)),D}function ft(P){var D=ee(P)+1,F=ve(D);return F&&Q(P,z,F,D),F}var ct={};function It(){if(!Qe){var P,D={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:g||"./this.program"};for(P in ct)ct[P]===void 0?delete D[P]:D[P]=ct[P];var F=[];for(P in D)F.push(P+"="+D[P]);Qe=F}return Qe}var Qe,dt=[null,[],[]];function ht(P,D){var F=dt[P];D===0||D===10?((P===1?A:O)(Z(F,0)),F.length=0):F.push(D)}var Re=0;function ot(P){return P%4==0&&(P%100!=0||P%400==0)}var 
re=[31,29,31,30,31,30,31,31,30,31,30,31],it=[31,28,31,30,31,30,31,31,30,31,30,31];function kt(P,D,F,R){function U(G,be,Pe){for(G=typeof G=="number"?G.toString():G||"";G.lengthtt?-1:0We-G.getDate())){G.setDate(G.getDate()+be);break}be-=We-G.getDate()+1,G.setDate(1),11>Pe?G.setMonth(Pe+1):(G.setMonth(0),G.setFullYear(G.getFullYear()+1))}return Pe=new Date(G.getFullYear()+1,0,4),be=te(new Date(G.getFullYear(),0,4)),Pe=te(Pe),0>=Y(be,G)?0>=Y(Pe,G)?G.getFullYear()+1:G.getFullYear():G.getFullYear()-1}var le=N[R+40>>2>>>0];for(var Se in R={$b:N[R>>2>>>0],Zb:N[R+4>>2>>>0],Gb:N[R+8>>2>>>0],Kb:N[R+12>>2>>>0],Hb:N[R+16>>2>>>0],Cb:N[R+20>>2>>>0],Ab:N[R+24>>2>>>0],Bb:N[R+28>>2>>>0],bc:N[R+32>>2>>>0],Yb:N[R+36>>2>>>0],ac:le?X(le):""},F=X(F),le={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})F=F.replace(new RegExp(Se,"g"),le[Se]);var Le="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),Fe="January February March April May June July August September October November December".split(" ");for(Se in le={"%a":function(G){return Le[G.Ab].substring(0,3)},"%A":function(G){return Le[G.Ab]},"%b":function(G){return Fe[G.Hb].substring(0,3)},"%B":function(G){return Fe[G.Hb]},"%C":function(G){return W((G.Cb+1900)/100|0,2)},"%d":function(G){return W(G.Kb,2)},"%e":function(G){return U(G.Kb,2," ")},"%g":function(G){return J(G).toString().substring(2)},"%G":function(G){return J(G)},"%H":function(G){return W(G.Gb,2)},"%I":function(G){return(G=G.Gb)==0?G=12:12G.Gb?"AM":"PM"},"%S":function(G){return W(G.$b,2)},"%t":function(){return" "},"%u":function(G){return G.Ab||7},"%U":function(G){return W(Math.floor((G.Bb+7-G.Ab)/7),2)},"%V":function(G){var 
be=Math.floor((G.Bb+7-(G.Ab+6)%7)/7);if(2>=(G.Ab+371-G.Bb-2)%7&&be++,be)be==53&&((Pe=(G.Ab+371-G.Bb)%7)==4||Pe==3&&ot(G.Cb)||(be=1));else{be=52;var Pe=(G.Ab+7-G.Bb-1)%7;(Pe==4||Pe==5&&ot(G.Cb%400-1))&&be++}return W(be,2)},"%w":function(G){return G.Ab},"%W":function(G){return W(Math.floor((G.Bb+7-(G.Ab+6)%7)/7),2)},"%y":function(G){return(G.Cb+1900).toString().substring(2)},"%Y":function(G){return G.Cb+1900},"%z":function(G){var be=0<=(G=G.Yb);return G=Math.abs(G)/60,(be?"+":"-")+("0000"+(G/60*100+G%60)).slice(-4)},"%Z":function(G){return G.ac},"%%":function(){return"%"}},F=F.replace(/%%/g,"\0\0"),le)F.includes(Se)&&(F=F.replace(new RegExp(Se,"g"),le[Se](R)));return Se=function(G){var be=Array(ee(G)+1);return Q(G,be,0,be.length),be}(F=F.replace(/\0\0/g,"%")),Se.length>D?0:(z.set(Se,P>>>0),Se.length-1)}var Dt={a:function(P){return ve(P+24)+24},m:function(P){return(P=new Ie(P)).Pb()||(P.Ib(!0),qe--),P.Jb(!1),Je.push(P),P.Nb(),P.Qb()},ia:function(P){throw O("Unexpected exception thrown, this is not properly supported - aborting"),M=!0,P},w:function(){ae(0);var P=Je.pop();if(P.Xb()&&!P.Lb()){var D=P.Wb();D&&ge(D)(P.Db),je(P.Db)}Ue=0},d:function(){var P=Ue;if(!P)return Re=0;var D=new Ie(P);D.Fb(P);var F=D.Eb();if(!F)return Re=0,P;for(var R=Array.prototype.slice.call(arguments),U=0;U>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getUTCSeconds(),N[D+4>>2>>>0]=P.getUTCMinutes(),N[D+8>>2>>>0]=P.getUTCHours(),N[D+12>>2>>>0]=P.getUTCDate(),N[D+16>>2>>>0]=P.getUTCMonth(),N[D+20>>2>>>0]=P.getUTCFullYear()-1900,N[D+24>>2>>>0]=P.getUTCDay(),N[D+28>>2>>>0]=(P.getTime()-Date.UTC(P.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(P,D){P=new Date(1e3*(H[P>>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getSeconds(),N[D+4>>2>>>0]=P.getMinutes(),N[D+8>>2>>>0]=P.getHours(),N[D+12>>2>>>0]=P.getDate(),N[D+16>>2>>>0]=P.getMonth(),N[D+20>>2>>>0]=P.getFullYear()-1900,N[D+24>>2>>>0]=P.getDay();var F=new 
Date(P.getFullYear(),0,1);N[D+28>>2>>>0]=(P.getTime()-F.getTime())/864e5|0,N[D+36>>2>>>0]=-60*P.getTimezoneOffset();var R=new Date(P.getFullYear(),6,1).getTimezoneOffset();F=F.getTimezoneOffset(),N[D+32>>2>>>0]=0|(R!=F&&P.getTimezoneOffset()==Math.min(F,R))},Fa:function(P){var D=new Date(N[P+20>>2>>>0]+1900,N[P+16>>2>>>0],N[P+12>>2>>>0],N[P+8>>2>>>0],N[P+4>>2>>>0],N[P>>2>>>0],0),F=N[P+32>>2>>>0],R=D.getTimezoneOffset(),U=new Date(D.getFullYear(),0,1),W=new Date(D.getFullYear(),6,1).getTimezoneOffset(),Y=U.getTimezoneOffset(),te=Math.min(Y,W);return 0>F?N[P+32>>2>>>0]=+(W!=Y&&te==R):0>2>>>0]=D.getDay(),N[P+28>>2>>>0]=(D.getTime()-U.getTime())/864e5|0,N[P>>2>>>0]=D.getSeconds(),N[P+4>>2>>>0]=D.getMinutes(),N[P+8>>2>>>0]=D.getHours(),N[P+12>>2>>>0]=D.getDate(),N[P+16>>2>>>0]=D.getMonth(),D.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function P(D,F,R){P.Vb||(P.Vb=!0,function(U,W,Y){function te(Fe){return(Fe=Fe.toTimeString().match(/\(([A-Za-z ]+)\)$/))?Fe[1]:"GMT"}var J=new Date().getFullYear(),le=new Date(J,0,1),Se=new Date(J,6,1);J=le.getTimezoneOffset();var Le=Se.getTimezoneOffset();N[U>>2>>>0]=60*Math.max(J,Le),N[W>>2>>>0]=+(J!=Le),U=te(le),W=te(Se),U=ft(U),W=ft(W),Le>2>>>0]=U,H[Y+4>>2>>>0]=W):(H[Y>>2>>>0]=W,H[Y+4>>2>>>0]=U)}(D,F,R))},B:function(){Ee("")},ma:function(){return 4294901760},I:w?()=>{var P=process.hrtime();return 1e3*P[0]+P[1]/1e6}:()=>performance.now(),xa:function(P,D,F){L.copyWithin(P>>>0,D>>>0,D+F>>>0)},G:function(P){var D=L.length;if(4294901760<(P>>>=0))return!1;for(var F=1;4>=F;F*=2){var R=D*(1+.2/F);R=Math.min(R,P+100663296);var U=Math;R=Math.max(P,R),U=U.min.call(U,4294901760,R+(65536-R%65536)%65536);e:{try{I.grow(U-$.byteLength+65535>>>16),ue();var W=1;break e}catch{}W=void 0}if(W)return!0}return!1},va:function(P,D){var F=0;return It().forEach(function(R,U){var W=D+F;for(U=H[P+4*U>>2>>>0]=W,W=0;W>0>>>0]=R.charCodeAt(W);z[U>>0>>>0]=0,F+=R.length+1}),0},wa:function(P,D){var F=It();H[P>>2>>>0]=F.length;var R=0;return 
F.forEach(function(U){R+=U.length+1}),H[D>>2>>>0]=R,0},ba:function(P){x||0>2>>>0],te=H[D+4>>2>>>0];D+=8;for(var J=0;J>>0]);U+=te}return H[R>>2>>>0]=U,0},c:function(){return Re},ja:function P(D,F){P.Mb||(P.Mb=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var U=new Uint8Array(1);return()=>(crypto.getRandomValues(U),U[0])}if(w)try{var W=s(Object(function(){var Y=new Error("Cannot find module 'crypto'");throw Y.code="MODULE_NOT_FOUND",Y}()));return()=>W.randomBytes(1)[0]}catch{}return()=>Ee("randomDevice")}());for(var R=0;R>0>>>0]=P.Mb();return 0},ea:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},fa:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},J:function(P){var D=ie();try{return ge(P)()}catch(F){if(se(D),F!==F+0)throw F;ae(1,0)}},e:function(P,D){var F=ie();try{return ge(P)(D)}catch(R){if(se(F),R!==R+0)throw R;ae(1,0)}},N:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},O:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},j:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},o:function(P,D,F,R){var U=ie();try{return ge(P)(D,F,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},p:function(P,D,F,R,U){var W=ie();try{return ge(P)(D,F,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},M:function(P,D,F,R,U,W){var Y=ie();try{return ge(P)(D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},r:function(P,D,F,R,U,W){var Y=ie();try{return ge(P)(D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},v:function(P,D,F,R,U,W,Y){var te=ie();try{return ge(P)(D,F,R,U,W,Y)}catch(J){if(se(te),J!==J+0)throw J;ae(1,0)}},K:function(P,D,F,R,U,W,Y,te){var J=ie();try{return ge(P)(D,F,R,U,W,Y,te)}catch(le){if(se(J),le!==le+0)throw le;ae(1,0)}},D:function(P,D,F,R,U,W,Y,te,J,le,Se,Le){var Fe=ie();try{return 
m},r.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},r.verify=function(i){if(typeof i!="object"||i===null)return"object expected";if(i.elemType!=null&&i.hasOwnProperty("elemType")&&!u.isInteger(i.elemType))return"elemType: integer expected";if(i.shape!=null&&i.hasOwnProperty("shape")){var d=o.onnx.TensorShapeProto.verify(i.shape);if(d)return"shape."+d}return null},r.fromObject=function(i){if(i instanceof o.onnx.TypeProto.Tensor)return i;var d=new o.onnx.TypeProto.Tensor;if(i.elemType!=null&&(d.elemType=0|i.elemType),i.shape!=null){if(typeof i.shape!="object")throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");d.shape=o.onnx.TensorShapeProto.fromObject(i.shape)}return d},r.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.elemType=0,g.shape=null),i.elemType!=null&&i.hasOwnProperty("elemType")&&(g.elemType=i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&(g.shape=o.onnx.TensorShapeProto.toObject(i.shape,d)),g},r.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},r}(),t}(),f.OperatorSetIdProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.domain=e.string();break;case 2:d.version=e.int64();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.domain!=null&&e.hasOwnProperty("domain")&&!u.isString(e.domain)?"domain: string expected":e.version!=null&&e.hasOwnProperty("version")&&!(u.isInteger(e.version)||e.version&&u.isInteger(e.version.low)&&u.isInteger(e.version.high))?"version: integer|Long expected":null},t.fromObject=function(e){if(e instanceof o.onnx.OperatorSetIdProto)return e;var r=new o.onnx.OperatorSetIdProto;return e.domain!=null&&(r.domain=String(e.domain)),e.version!=null&&(u.Long?(r.version=u.Long.fromValue(e.version)).unsigned=!1:typeof 
e.version=="string"?r.version=parseInt(e.version,10):typeof e.version=="number"?r.version=e.version:typeof e.version=="object"&&(r.version=new u.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber())),r},t.toObject=function(e,r){r||(r={});var i={};if(r.defaults)if(i.domain="",u.Long){var d=new u.Long(0,0,!1);i.version=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.version=r.longs===String?"0":0;return e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.version!=null&&e.hasOwnProperty("version")&&(typeof e.version=="number"?i.version=r.longs===String?String(e.version):e.version:i.version=r.longs===String?u.Long.prototype.toString.call(e.version):r.longs===Number?new u.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber():e.version),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},t}(),f),_.exports=o},2100:(_,n,s)=>{_.exports=s(9482)},9482:(_,n,s)=>{var c=n;function l(){c.util._configure(),c.Writer._configure(c.BufferWriter),c.Reader._configure(c.BufferReader)}c.build="minimal",c.Writer=s(1173),c.BufferWriter=s(3155),c.Reader=s(1408),c.BufferReader=s(593),c.util=s(9693),c.rpc=s(5994),c.roots=s(5054),c.configure=l,l()},1408:(_,n,s)=>{_.exports=p;var c,l=s(9693),f=l.LongBits,a=l.utf8;function h(d,g){return RangeError("index out of range: "+d.pos+" + "+(g||1)+" > "+d.len)}function p(d){this.buf=d,this.pos=0,this.len=d.length}var u,o=typeof Uint8Array<"u"?function(d){if(d instanceof Uint8Array||Array.isArray(d))return new p(d);throw Error("illegal buffer")}:function(d){if(Array.isArray(d))return new p(d);throw Error("illegal buffer")},t=function(){return l.Buffer?function(d){return(p.create=function(g){return l.Buffer.isBuffer(g)?new c(g):o(g)})(d)}:o};function e(){var d=new f(0,0),g=0;if(!(this.len-this.pos>4)){for(;g<3;++g){if(this.pos>=this.len)throw h(this);if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d}return 
d.lo=(d.lo|(127&this.buf[this.pos++])<<7*g)>>>0,d}for(;g<4;++g)if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d;if(d.lo=(d.lo|(127&this.buf[this.pos])<<28)>>>0,d.hi=(d.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return d;if(g=0,this.len-this.pos>4){for(;g<5;++g)if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}else for(;g<5;++g){if(this.pos>=this.len)throw h(this);if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}throw Error("invalid varint encoding")}function r(d,g){return(d[g-4]|d[g-3]<<8|d[g-2]<<16|d[g-1]<<24)>>>0}function i(){if(this.pos+8>this.len)throw h(this,8);return new f(r(this.buf,this.pos+=4),r(this.buf,this.pos+=4))}p.create=t(),p.prototype._slice=l.Array.prototype.subarray||l.Array.prototype.slice,p.prototype.uint32=(u=4294967295,function(){if(u=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(u=(u|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(u=(u|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(u=(u|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(u=(u|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return u;if((this.pos+=5)>this.len)throw this.pos=this.len,h(this,10);return u}),p.prototype.int32=function(){return 0|this.uint32()},p.prototype.sint32=function(){var d=this.uint32();return d>>>1^-(1&d)|0},p.prototype.bool=function(){return this.uint32()!==0},p.prototype.fixed32=function(){if(this.pos+4>this.len)throw h(this,4);return r(this.buf,this.pos+=4)},p.prototype.sfixed32=function(){if(this.pos+4>this.len)throw h(this,4);return 0|r(this.buf,this.pos+=4)},p.prototype.float=function(){if(this.pos+4>this.len)throw h(this,4);var d=l.float.readFloatLE(this.buf,this.pos);return this.pos+=4,d},p.prototype.double=function(){if(this.pos+8>this.len)throw h(this,4);var d=l.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,d},p.prototype.bytes=function(){var 
d=this.uint32(),g=this.pos,m=this.pos+d;if(m>this.len)throw h(this,d);return this.pos+=d,Array.isArray(this.buf)?this.buf.slice(g,m):g===m?new this.buf.constructor(0):this._slice.call(this.buf,g,m)},p.prototype.string=function(){var d=this.bytes();return a.read(d,0,d.length)},p.prototype.skip=function(d){if(typeof d=="number"){if(this.pos+d>this.len)throw h(this,d);this.pos+=d}else do if(this.pos>=this.len)throw h(this);while(128&this.buf[this.pos++]);return this},p.prototype.skipType=function(d){switch(d){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;(d=7&this.uint32())!=4;)this.skipType(d);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+d+" at offset "+this.pos)}return this},p._configure=function(d){c=d,p.create=t(),c._configure();var g=l.Long?"toLong":"toNumber";l.merge(p.prototype,{int64:function(){return e.call(this)[g](!1)},uint64:function(){return e.call(this)[g](!0)},sint64:function(){return e.call(this).zzDecode()[g](!1)},fixed64:function(){return i.call(this)[g](!0)},sfixed64:function(){return i.call(this)[g](!1)}})}},593:(_,n,s)=>{_.exports=f;var c=s(1408);(f.prototype=Object.create(c.prototype)).constructor=f;var l=s(9693);function f(a){c.call(this,a)}f._configure=function(){l.Buffer&&(f.prototype._slice=l.Buffer.prototype.slice)},f.prototype.string=function(){var a=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+a,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+a,this.len))},f._configure()},5054:_=>{_.exports={}},5994:(_,n,s)=>{n.Service=s(7948)},7948:(_,n,s)=>{_.exports=l;var c=s(9693);function l(f,a,h){if(typeof f!="function")throw TypeError("rpcImpl must be a function");c.EventEmitter.call(this),this.rpcImpl=f,this.requestDelimited=!!a,this.responseDelimited=!!h}(l.prototype=Object.create(c.EventEmitter.prototype)).constructor=l,l.prototype.rpcCall=function f(a,h,p,u,o){if(!u)throw TypeError("request 
must be specified");var t=this;if(!o)return c.asPromise(f,t,a,h,p,u);if(t.rpcImpl)try{return t.rpcImpl(a,h[t.requestDelimited?"encodeDelimited":"encode"](u).finish(),function(e,r){if(e)return t.emit("error",e,a),o(e);if(r!==null){if(!(r instanceof p))try{r=p[t.responseDelimited?"decodeDelimited":"decode"](r)}catch(i){return t.emit("error",i,a),o(i)}return t.emit("data",r,a),o(null,r)}t.end(!0)})}catch(e){return t.emit("error",e,a),void setTimeout(function(){o(e)},0)}else setTimeout(function(){o(Error("already ended"))},0)},l.prototype.end=function(f){return this.rpcImpl&&(f||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(_,n,s)=>{_.exports=l;var c=s(9693);function l(p,u){this.lo=p>>>0,this.hi=u>>>0}var f=l.zero=new l(0,0);f.toNumber=function(){return 0},f.zzEncode=f.zzDecode=function(){return this},f.length=function(){return 1};var a=l.zeroHash="\0\0\0\0\0\0\0\0";l.fromNumber=function(p){if(p===0)return f;var u=p<0;u&&(p=-p);var o=p>>>0,t=(p-o)/4294967296>>>0;return u&&(t=~t>>>0,o=~o>>>0,++o>4294967295&&(o=0,++t>4294967295&&(t=0))),new l(o,t)},l.from=function(p){if(typeof p=="number")return l.fromNumber(p);if(c.isString(p)){if(!c.Long)return l.fromNumber(parseInt(p,10));p=c.Long.fromString(p)}return p.low||p.high?new l(p.low>>>0,p.high>>>0):f},l.prototype.toNumber=function(p){if(!p&&this.hi>>>31){var u=1+~this.lo>>>0,o=~this.hi>>>0;return u||(o=o+1>>>0),-(u+4294967296*o)}return this.lo+4294967296*this.hi},l.prototype.toLong=function(p){return c.Long?new c.Long(0|this.lo,0|this.hi,!!p):{low:0|this.lo,high:0|this.hi,unsigned:!!p}};var h=String.prototype.charCodeAt;l.fromHash=function(p){return p===a?f:new l((h.call(p,0)|h.call(p,1)<<8|h.call(p,2)<<16|h.call(p,3)<<24)>>>0,(h.call(p,4)|h.call(p,5)<<8|h.call(p,6)<<16|h.call(p,7)<<24)>>>0)},l.prototype.toHash=function(){return 
String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},l.prototype.zzEncode=function(){var p=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^p)>>>0,this.lo=(this.lo<<1^p)>>>0,this},l.prototype.zzDecode=function(){var p=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^p)>>>0,this.hi=(this.hi>>>1^p)>>>0,this},l.prototype.length=function(){var p=this.lo,u=(this.lo>>>28|this.hi<<4)>>>0,o=this.hi>>>24;return o===0?u===0?p<16384?p<128?1:2:p<2097152?3:4:u<16384?u<128?5:6:u<2097152?7:8:o<128?9:10}},9693:function(_,n,s){var c=n;function l(a,h,p){for(var u=Object.keys(h),o=0;o0)},c.Buffer=function(){try{var a=c.inquire("buffer").Buffer;return a.prototype.utf8Write?a:null}catch{return null}}(),c._Buffer_from=null,c._Buffer_allocUnsafe=null,c.newBuffer=function(a){return typeof a=="number"?c.Buffer?c._Buffer_allocUnsafe(a):new c.Array(a):c.Buffer?c._Buffer_from(a):typeof Uint8Array>"u"?a:new Uint8Array(a)},c.Array=typeof Uint8Array<"u"?Uint8Array:Array,c.Long=c.global.dcodeIO&&c.global.dcodeIO.Long||c.global.Long||c.inquire("long"),c.key2Re=/^true|false|0|1$/,c.key32Re=/^-?(?:0|[1-9][0-9]*)$/,c.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,c.longToHash=function(a){return a?c.LongBits.from(a).toHash():c.LongBits.zeroHash},c.longFromHash=function(a,h){var p=c.LongBits.fromHash(a);return c.Long?c.Long.fromBits(p.lo,p.hi,h):p.toNumber(!!h)},c.merge=l,c.lcFirst=function(a){return a.charAt(0).toLowerCase()+a.substring(1)},c.newError=f,c.ProtocolError=f("ProtocolError"),c.oneOfGetter=function(a){for(var h={},p=0;p-1;--o)if(h[u[o]]===1&&this[u[o]]!==void 0&&this[u[o]]!==null)return u[o]}},c.oneOfSetter=function(a){return function(h){for(var p=0;p{_.exports=t;var c,l=s(9693),f=l.LongBits,a=l.base64,h=l.utf8;function p(b,y,w){this.fn=b,this.len=y,this.next=void 0,this.val=w}function u(){}function o(b){this.head=b.head,this.tail=b.tail,this.len=b.len,this.next=b.states}function 
t(){this.len=0,this.head=new p(u,0,0),this.tail=this.head,this.states=null}var e=function(){return l.Buffer?function(){return(t.create=function(){return new c})()}:function(){return new t}};function r(b,y,w){y[w]=255&b}function i(b,y){this.len=b,this.next=void 0,this.val=y}function d(b,y,w){for(;b.hi;)y[w++]=127&b.lo|128,b.lo=(b.lo>>>7|b.hi<<25)>>>0,b.hi>>>=7;for(;b.lo>127;)y[w++]=127&b.lo|128,b.lo=b.lo>>>7;y[w++]=b.lo}function g(b,y,w){y[w]=255&b,y[w+1]=b>>>8&255,y[w+2]=b>>>16&255,y[w+3]=b>>>24}t.create=e(),t.alloc=function(b){return new l.Array(b)},l.Array!==Array&&(t.alloc=l.pool(t.alloc,l.Array.prototype.subarray)),t.prototype._push=function(b,y,w){return this.tail=this.tail.next=new p(b,y,w),this.len+=y,this},i.prototype=Object.create(p.prototype),i.prototype.fn=function(b,y,w){for(;b>127;)y[w++]=127&b|128,b>>>=7;y[w]=b},t.prototype.uint32=function(b){return this.len+=(this.tail=this.tail.next=new i((b>>>=0)<128?1:b<16384?2:b<2097152?3:b<268435456?4:5,b)).len,this},t.prototype.int32=function(b){return b<0?this._push(d,10,f.fromNumber(b)):this.uint32(b)},t.prototype.sint32=function(b){return this.uint32((b<<1^b>>31)>>>0)},t.prototype.uint64=function(b){var y=f.from(b);return this._push(d,y.length(),y)},t.prototype.int64=t.prototype.uint64,t.prototype.sint64=function(b){var y=f.from(b).zzEncode();return this._push(d,y.length(),y)},t.prototype.bool=function(b){return this._push(r,1,b?1:0)},t.prototype.fixed32=function(b){return this._push(g,4,b>>>0)},t.prototype.sfixed32=t.prototype.fixed32,t.prototype.fixed64=function(b){var y=f.from(b);return this._push(g,4,y.lo)._push(g,4,y.hi)},t.prototype.sfixed64=t.prototype.fixed64,t.prototype.float=function(b){return this._push(l.float.writeFloatLE,4,b)},t.prototype.double=function(b){return this._push(l.float.writeDoubleLE,8,b)};var m=l.Array.prototype.set?function(b,y,w){y.set(b,w)}:function(b,y,w){for(var v=0;v>>0;if(!y)return this._push(r,1,0);if(l.isString(b)){var w=t.alloc(y=a.length(b));a.decode(b,w,0),b=w}return 
this.uint32(y)._push(m,y,b)},t.prototype.string=function(b){var y=h.length(b);return y?this.uint32(y)._push(h.write,y,b):this._push(r,1,0)},t.prototype.fork=function(){return this.states=new o(this),this.head=this.tail=new p(u,0,0),this.len=0,this},t.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new p(u,0,0),this.len=0),this},t.prototype.ldelim=function(){var b=this.head,y=this.tail,w=this.len;return this.reset().uint32(w),w&&(this.tail.next=b.next,this.tail=y,this.len+=w),this},t.prototype.finish=function(){for(var b=this.head.next,y=this.constructor.alloc(this.len),w=0;b;)b.fn(b.val,y,w),w+=b.len,b=b.next;return y},t._configure=function(b){c=b,t.create=e(),c._configure()}},3155:(_,n,s)=>{_.exports=f;var c=s(1173);(f.prototype=Object.create(c.prototype)).constructor=f;var l=s(9693);function f(){c.call(this)}function a(h,p,u){h.length<40?l.utf8.write(h,p,u):p.utf8Write?p.utf8Write(h,u):p.write(h,u)}f._configure=function(){f.alloc=l._Buffer_allocUnsafe,f.writeBytesBuffer=l.Buffer&&l.Buffer.prototype instanceof Uint8Array&&l.Buffer.prototype.set.name==="set"?function(h,p,u){p.set(h,u)}:function(h,p,u){if(h.copy)h.copy(p,u,0,h.length);else for(var o=0;o>>0;return this.uint32(p),p&&this._push(f.writeBytesBuffer,p,h),this},f.prototype.string=function(h){var p=l.Buffer.byteLength(h);return this.uint32(p),p&&this._push(a,p,h),this},f._configure()},7714:(_,n,s)=>{n.R=void 0;const c=s(6919),l=s(7448);n.R=new class{async init(){}async createSessionHandler(f,a){const h=new c.Session(a);return await h.loadModel(f),new l.OnnxjsSessionHandler(h)}}},4200:(_,n,s)=>{n.c8=n.rX=void 0;const c=s(1670),l=s(5381),f=s(2157),a=s(2306);n.rX=()=>{if((typeof c.env.wasm.initTimeout!="number"||c.env.wasm.initTimeout<0)&&(c.env.wasm.initTimeout=0),typeof c.env.wasm.simd!="boolean"&&(c.env.wasm.simd=!0),typeof c.env.wasm.proxy!="boolean"&&(c.env.wasm.proxy=!1),typeof 
c.env.wasm.numThreads!="number"||!Number.isInteger(c.env.wasm.numThreads)||c.env.wasm.numThreads<=0){const h=typeof navigator>"u"?(0,l.cpus)().length:navigator.hardwareConcurrency;c.env.wasm.numThreads=Math.min(4,Math.ceil((h||1)/2))}},n.c8=new class{async init(){(0,n.rX)(),await(0,f.initWasm)()}async createSessionHandler(h,p){const u=new a.OnnxruntimeWebAssemblySessionHandler;return await u.loadModel(h,p),Promise.resolve(u)}}},6018:function(_,n,s){var c=this&&this.__createBinding||(Object.create?function(a,h,p,u){u===void 0&&(u=p);var o=Object.getOwnPropertyDescriptor(h,p);o&&!("get"in o?!h.__esModule:o.writable||o.configurable)||(o={enumerable:!0,get:function(){return h[p]}}),Object.defineProperty(a,u,o)}:function(a,h,p,u){u===void 0&&(u=p),a[u]=h[p]}),l=this&&this.__exportStar||function(a,h){for(var p in a)p==="default"||Object.prototype.hasOwnProperty.call(h,p)||c(h,a,p)};Object.defineProperty(n,"__esModule",{value:!0}),l(s(1670),n);const f=s(1670);{const a=s(7714).R;(0,f.registerBackend)("webgl",a,-10)}{const a=s(4200).c8;(0,f.registerBackend)("cpu",a,10),(0,f.registerBackend)("wasm",a,10),(0,f.registerBackend)("xnnpack",a,9)}},246:(_,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createAttributeWithCacheKey=void 0;class s{constructor(l){Object.assign(this,l)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(l=>`${this[l]}`).join(";")),this._cacheKey}}n.createAttributeWithCacheKey=c=>new s(c)},7778:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Attribute=void 0;const c=s(1446),l=s(9395),f=s(9162),a=s(2517);var h=l.onnxruntime.experimental.fbs;class p{constructor(o){if(this._attributes=new Map,o!=null){for(const t of o)t instanceof c.onnx.AttributeProto?this._attributes.set(t.name,[p.getValue(t),p.getType(t)]):t instanceof h.Attribute&&this._attributes.set(t.name(),[p.getValue(t),p.getType(t)]);if(this._attributes.sizef.Tensor.fromProto(r));if(o instanceof h.Attribute)return 
e.map(r=>f.Tensor.fromOrtTensor(r))}if(t===c.onnx.AttributeProto.AttributeType.STRING&&o instanceof c.onnx.AttributeProto){const r=e;return(0,a.decodeUtf8String)(r)}return t===c.onnx.AttributeProto.AttributeType.STRINGS&&o instanceof c.onnx.AttributeProto?e.map(a.decodeUtf8String):e}static getValueNoCheck(o){return o instanceof c.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(o):this.getValueNoCheckFromOrtFormat(o)}static getValueNoCheckFromOnnxFormat(o){switch(o.type){case c.onnx.AttributeProto.AttributeType.FLOAT:return o.f;case c.onnx.AttributeProto.AttributeType.INT:return o.i;case c.onnx.AttributeProto.AttributeType.STRING:return o.s;case c.onnx.AttributeProto.AttributeType.TENSOR:return o.t;case c.onnx.AttributeProto.AttributeType.GRAPH:return o.g;case c.onnx.AttributeProto.AttributeType.FLOATS:return o.floats;case c.onnx.AttributeProto.AttributeType.INTS:return o.ints;case c.onnx.AttributeProto.AttributeType.STRINGS:return o.strings;case c.onnx.AttributeProto.AttributeType.TENSORS:return o.tensors;case c.onnx.AttributeProto.AttributeType.GRAPHS:return o.graphs;default:throw new Error(`unsupported attribute type: ${c.onnx.AttributeProto.AttributeType[o.type]}`)}}static getValueNoCheckFromOrtFormat(o){switch(o.type()){case h.AttributeType.FLOAT:return o.f();case h.AttributeType.INT:return o.i();case h.AttributeType.STRING:return o.s();case h.AttributeType.TENSOR:return o.t();case h.AttributeType.GRAPH:return o.g();case h.AttributeType.FLOATS:return o.floatsArray();case h.AttributeType.INTS:{const t=[];for(let e=0;e{Object.defineProperty(n,"__esModule",{value:!0}),n.resolveBackend=n.backend=void 0;const c=s(5038),l=new Map;async function f(a){const h=n.backend;if(h[a]!==void 0&&function(p){const u=p;return"initialize"in u&&typeof u.initialize=="function"&&"createSessionHandler"in u&&typeof u.createSessionHandler=="function"&&"dispose"in u&&typeof u.dispose=="function"}(h[a])){const p=h[a];let u=p.initialize();if(typeof u=="object"&&"then"in u&&(u=await 
u),u)return l.set(a,p),p}}n.backend={webgl:new c.WebGLBackend},n.resolveBackend=async function a(h){if(!h)return a(["webgl"]);{const p=typeof h=="string"?[h]:h;for(const u of p){const o=l.get(u);if(o)return o;const t=await f(u);if(t)return t}}throw new Error("no available backend to use")}},5038:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLBackend=void 0;const c=s(1670),l=s(6231),f=s(6416),a=s(7305);n.WebGLBackend=class{get contextId(){return c.env.webgl.contextId}set contextId(h){c.env.webgl.contextId=h}get matmulMaxBatchSize(){return c.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(h){c.env.webgl.matmulMaxBatchSize=h}get textureCacheMode(){return c.env.webgl.textureCacheMode}set textureCacheMode(h){c.env.webgl.textureCacheMode=h}get pack(){return c.env.webgl.pack}set pack(h){c.env.webgl.pack=h}get async(){return c.env.webgl.async}set async(h){c.env.webgl.async=h}initialize(){try{return this.glContext=(0,a.createWebGLContext)(this.contextId),typeof this.matmulMaxBatchSize!="number"&&(this.matmulMaxBatchSize=16),typeof this.textureCacheMode!="string"&&(this.textureCacheMode="full"),typeof this.pack!="boolean"&&(this.pack=!1),typeof this.async!="boolean"&&(this.async=!1),l.Logger.setWithEnv(c.env),l.Logger.verbose("WebGLBackend",`Created WebGLContext: ${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(h){return l.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. 
${h}`),!1}}createSessionHandler(h){return new f.WebGLSessionHandler(this,h)}dispose(){this.glContext.dispose()}}},5107:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.CoordsGlslLib=void 0;const c=s(2517),l=s(8520),f=s(5060),a=s(7859),h=s(9390);class p extends l.GlslLib{constructor(o){super(o)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new l.GlslLibRoutine(`
vec2 offsetToCoords(int offset, int width, int height) {
  int t = offset / width;
  int s = offset - t*width;
  vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
  return coords;
}
`)}}coordsToOffset(){return{coordsToOffset:new l.GlslLibRoutine(`
int coordsToOffset(vec2 coords, int width, int height) {
  float s = coords.s * float(width);
  float t = coords.t * float(height);
  int offset = int(t) * width + int(s);
  return offset;
}
`)}}getOutputSamplingSnippet(){const o=this.context.outputTextureLayout;return o.isPacked?this.getPackedOutputSamplingSnippet(o):this.getUnpackedOutputSamplingSnippet(o)}getPackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputPacked1DCoords(t,e);break;case 2:r[i]=this.getOutputPacked2DCoords(t,e);break;case 3:r[i]=this.getOutputPacked3DCoords(t,e);break;default:r[i]=this.getOutputPackedNDCoords(t,e)}const d=`
void setOutput(vec4 val) {
  ${(0,f.getGlsl)(this.context.glContext.version).output} = val;
}
`;return r.floatTextureSetRGBA=new l.GlslLibRoutine(d),r}getUnpackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputUnpacked1DCoords(t,e);break;case 2:r[i]=this.getOutputUnpacked2DCoords(t,e);break;case 3:r[i]=this.getOutputUnpacked3DCoords(t,e);break;case 4:r[i]=this.getOutputUnpacked4DCoords(t,e);break;case 5:r[i]=this.getOutputUnpacked5DCoords(t,e);break;case 6:r[i]=this.getOutputUnpacked6DCoords(t,e);break;default:throw new Error(`Unsupported output dimensionality: ${t.length}`)}const d=`
void setOutput(float val) {
  ${(0,f.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0);
}
`;return r.floatTextureSetR=new l.GlslLibRoutine(d),r}getOutputScalarCoords(){return new l.GlslLibRoutine(`
int getOutputCoords() {
  return 0;
}
`)}getOutputPacked1DCoords(o,t){const e=t;let r="";return e[0]===1?(r=`
int getOutputCoords() {
  return 2 * int(TexCoords.y * ${e[1]}.0);
}
`,new l.GlslLibRoutine(r)):e[1]===1?(r=`
int getOutputCoords() {
  return 2 * int(TexCoords.x * ${e[0]}.0);
}
`,new l.GlslLibRoutine(r)):(r=`
int getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${e[0]}, ${e[1]}));
  return 2 * (resTexRC.y * ${e[0]} + resTexRC.x);
}
`,new l.GlslLibRoutine(r))}getOutputPacked2DCoords(o,t){let e="";if(c.ArrayUtil.arraysEqual(o,t))return e=`
ivec2 getOutputCoords() {
  return 2 * ivec2(TexCoords.xy * vec2(${t[0]}, ${t[1]}));
}
`,new l.GlslLibRoutine(e);const r=t,i=Math.ceil(o[1]/2);return e=`
ivec2 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${r[0]}, ${r[1]}));

  int index = resTexRC.y * ${r[0]} + resTexRC.x;

  // reverse r and c order for packed texture
  int r = imod(index, ${i}) * 2;
  int c = 2 * (index / ${i});

  return ivec2(r, c);
}
`,new l.GlslLibRoutine(e)}getOutputPacked3DCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[2]/2),i=r*Math.ceil(o[1]/2),d=`
ivec3 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${e[0]}, ${e[1]}));
  int index = resTexRC.y * ${e[0]} + resTexRC.x;

  int b = index / ${i};
  index -= b * ${i};

  // reverse r and c order for packed texture
  int r = imod(index, ${r}) * 2;
  int c = 2 * (index / ${r});

  return ivec3(b, r, c);
}
`;return new l.GlslLibRoutine(d)}getOutputPackedNDCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[o.length-1]/2),i=r*Math.ceil(o[o.length-2]/2);let d=i,g="",m="b, r, c";for(let y=2;y=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
ivec3 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${t[0]}, ${t[1]}));
  int index = resTexRC.y * ${t[0]} + resTexRC.x;
  ${g}
  return ivec3(r, c, d);
}
`,new l.GlslLibRoutine(e)}getOutputUnpacked4DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
ivec4 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${t[0]}, ${t[1]}));
  int index = resTexRC.y * ${t[0]} + resTexRC.x;
  ${g}
  return ivec4(r, c, d, d2);
}
`,new l.GlslLibRoutine(e)}getOutputUnpacked5DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
ivec5 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${t[0]}, ${t[1]}));
  int index = resTexRC.y * ${t[0]} + resTexRC.x;
  ${g}
  return ivec5(r, c, d, d2, d3);
}
`,new l.GlslLibRoutine(e)}getOutputUnpacked6DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3","d4"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
ivec6 getOutputCoords() {
  ivec2 resTexRC = ivec2(TexCoords.xy *
    vec2(${t[0]}, ${t[1]}));
  int index = resTexRC.y * ${t[0]} + resTexRC.x;
  ${g}
  return ivec6(r, c, d, d2, d3, d4);
}
`,new l.GlslLibRoutine(e)}getCommonUtilFuncs(){const o={};let t="uvFromFlat";o[t]=new l.GlslLibRoutine(`
vec2 uvFromFlat(int texNumR, int texNumC, int index) {
  int texC = index / texNumR;
  int texR = index - texC * texNumR;
  // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to
  // v.
  return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC);
}
`),t="packedUVfrom1D",o[t]=new l.GlslLibRoutine(`
vec2 packedUVfrom1D(int texNumR, int texNumC, int index) {
  int texelIndex = index / 2;
  int texR = texelIndex / texNumC;
  int texC = texelIndex - texR * texNumC;
  return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
}
`),t="packedUVfrom2D",o[t]=new l.GlslLibRoutine(`
vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) {
  int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2);
  int texR = texelIndex / texNumC;
  int texC = texelIndex - texR * texNumC;
  return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
}
`),t="packedUVfrom3D",o[t]=new l.GlslLibRoutine(`
vec2 packedUVfrom3D(int texNumR, int texNumC,
    int texelsInBatch, int texelsInLogicalRow, int b,
    int row, int col) {
  int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2);
  int texR = index / texNumC;
  int texC = index - texR * texNumC;
  return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
}
`),t="sampleTexture";const e=(0,f.getGlsl)(this.context.glContext.version);return o[t]=new l.GlslLibRoutine(`
float sampleTexture(sampler2D textureSampler, vec2 uv) {
  return ${e.texture2D}(textureSampler, uv).r;
}`),o}getInputsSamplingSnippets(){const o={},t=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((e,r)=>{const i=this.context.inputTextureLayouts[r],d=(0,h.generateShaderFuncNameFromInputSamplerName)(e);i.isPacked?o[d]=this.getPackedSamplerFromInput(d,e,i):o[d]=this.getUnpackedSamplerFromInput(d,e,i);const g=(0,h.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(e);i.unpackedShape.length<=t.unpackedShape.length&&(i.isPacked?o[g]=this.getPackedSamplerAtOutputCoords(g,i,t,e):o[g]=this.getUnpackedSamplerAtOutputCoords(g,i,t,e))}),o}getPackedSamplerAtOutputCoords(o,t,e,r){const i=t.unpackedShape,d=e.unpackedShape,g=r,m=(0,h.generateShaderFuncNameFromInputSamplerName)(g),b=i.length,y=d.length,w=c.BroadcastUtil.getBroadcastDims(i,d),v=(0,h.getCoordsDataType)(y),S=y-b;let A;const O=(0,h.getGlChannels)();A=b===0?"":y<2&&w.length>=1?"coords = 0;":w.map(N=>`coords.${O[N+S]} = 0;`).join(`
-`);let x="";x=y<2&&b>0?"coords":i.map((N,H)=>`coords.${O[H+S]}`).join(", ");let I="return outputValue;";const $=c.ShapeUtil.size(i)===1,z=c.ShapeUtil.size(d)===1;if(b!==1||$||z){if($&&!z)I=y===1?`
- return vec4(outputValue.x, outputValue.x, 0., 0.);
- `:`
- return vec4(outputValue.x);
- `;else if(w.length){const N=b-2,H=b-1;w.indexOf(N)>-1&&w.indexOf(H)>-1?I="return vec4(outputValue.x);":w.indexOf(N)>-1?I="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":w.indexOf(H)>-1&&(I="return vec4(outputValue.xx, outputValue.zz);")}}else I=`
- return vec4(outputValue.xy, outputValue.xy);
- `;const L=`
- vec4 ${o}() {
- ${v} coords = getOutputCoords();
-
- int lastDim = coords.${O[y-1]};
- coords.${O[y-1]} = coords.${O[y-2]};
- coords.${O[y-2]} = lastDim;
-
- ${A}
- vec4 outputValue = ${m}(${x});
- ${I}
- }
- `;return new l.GlslLibRoutine(L,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(o,t,e,r){const i=[e.width,e.height],d=[t.width,t.height],g=t.unpackedShape.length,m=e.unpackedShape.length,b=t.unpackedShape,y=e.unpackedShape,w=(0,h.generateShaderFuncNameFromInputSamplerName)(r);if(g===m&&c.ArrayUtil.arraysEqual(d,i)){const z=`
- float ${o}() {
- return sampleTexture(${r}, TexCoords);
- }
- `;return new l.GlslLibRoutine(z,["coordinates.sampleTexture"])}const v=(0,h.getCoordsDataType)(m),S=c.BroadcastUtil.getBroadcastDims(b,y),A=m-g;let O;const x=(0,h.getGlChannels)();O=g===0?"":m<2&&S.length>=1?"coords = 0;":S.map(z=>`coords.${x[z+A]} = 0;`).join(`
-`);let I="";I=m<2&&g>0?"coords":t.unpackedShape.map((z,L)=>`coords.${x[L+A]}`).join(", ");const $=`
- float ${o}() {
- ${v} coords = getOutputCoords();
- ${O}
- return ${w}(${I});
- }
- `;return new l.GlslLibRoutine($,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(o,t,e){switch(e.unpackedShape.length){case 0:return this.getPackedSamplerScalar(o,t);case 1:return this.getPackedSampler1D(o,t,e);case 2:return this.getPackedSampler2D(o,t,e);case 3:return this.getPackedSampler3D(o,t,e);default:return this.getPackedSamplerND(o,t,e)}}getUnpackedSamplerFromInput(o,t,e){const r=e.unpackedShape;switch(r.length){case 0:return this.getUnpackedSamplerScalar(o,t,e);case 1:return this.getUnpackedSampler1D(o,t,e);case 2:return this.getUnpackedSampler2D(o,t,e);case 3:return this.getUnpackedSampler3D(o,t,e);case 4:return this.getUnpackedSampler4D(o,t,e);case 5:return this.getUnpackedSampler5D(o,t,e);case 6:return this.getUnpackedSampler6D(o,t,e);default:throw new Error(`Unsupported dimension ${r.length}-D`)}}getPackedSamplerScalar(o,t){const e=`
- vec4 ${o}() {
- return ${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${t}, halfCR);
- }
- `;return new l.GlslLibRoutine(e)}getPackedSampler1D(o,t,e){const r=[e.width,e.height],i=[r[1],r[0]],d=(0,f.getGlsl)(this.context.glContext.version),g=`vec4 ${o}(int index) {
- vec2 uv = packedUVfrom1D(
- ${i[0]}, ${i[1]}, index);
- return ${d.texture2D}(${t}, uv);
- }`;return new l.GlslLibRoutine(g,["coordinates.packedUVfrom1D"])}getPackedSampler2D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=(0,f.getGlsl)(this.context.glContext.version),g=i[0],m=i[1];if(i!=null&&c.ArrayUtil.arraysEqual(r,i)){const v=`vec4 ${o}(int row, int col) {
- vec2 uv = (vec2(col, row) + halfCR) / vec2(${m}.0, ${g}.0);
- return ${d.texture2D}(${t}, uv);
- }`;return new l.GlslLibRoutine(v)}const b=i,y=Math.ceil(r[1]/2),w=`vec4 ${o}(int row, int col) {
- vec2 uv = packedUVfrom2D(${b[1]}, ${b[0]}, ${y}, row, col);
- return ${d.texture2D}(${t}, uv);
- }`;return new l.GlslLibRoutine(w,["coordinates.packedUVfrom2D"])}getPackedSampler3D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=[i[0],i[1]],g=(0,f.getGlsl)(this.context.glContext.version);if(r[0]===1){const v=r.slice(1),S=[1,2],A=(0,h.squeezeInputShape)(r,v),O=["b","row","col"],x=JSON.parse(JSON.stringify(e));x.unpackedShape=A;const I=this.getPackedSamplerFromInput(o,t,x),$=`${I.routineBody}
- vec4 ${o}(int b, int row, int col) {
- return ${o}(${(0,h.getSqueezedParams)(O,S)});
- } `;return new l.GlslLibRoutine($,I.dependencies)}const m=d[0],b=d[1],y=Math.ceil(r[2]/2),w=`vec4 ${o}(int b, int row, int col) {
- vec2 uv = packedUVfrom3D(
- ${b}, ${m}, ${y*Math.ceil(r[1]/2)}, ${y}, b, row, col);
- return ${g.texture2D}(${t}, uv);}`;return new l.GlslLibRoutine(w,["coordinates.packedUVfrom3D"])}getPackedSamplerND(o,t,e){const r=e.unpackedShape,i=r.length,d=[e.width,e.height],g=(0,f.getGlsl)(this.context.glContext.version),m=[d[0],d[1]],b=m[1],y=m[0],w=Math.ceil(r[i-1]/2);let v=w*Math.ceil(r[i-2]/2),S="int b, int row, int col",A=`b * ${v} + (row / 2) * ${w} + (col / 2)`;for(let x=2;x{const r=this.context.inputTextureLayouts[e],i=(r.unpackedShape.length>0?r.unpackedShape:r.shape).length;let d=`_${t}`;o[d]=new l.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!1),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),d+="_T",o[d]=new l.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!0),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),o}getValueFromSingle(o,t,e,r,i){let d=`_${o}`;return i&&(d+="_T"),`
- float ${d}(int m[${t}]) {
- int offset = indicesToOffset${d}(m);
- vec2 coords = offsetToCoords(offset, ${e}, ${r});
- float value = getColorAsFloat(${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords));
- return value;
- }
- `}getPackedValueFrom(o,t,e,r,i){let d=`_${o}_Pack`;return i&&(d+="_T"),`
- vec4 ${d}(int m[${t}]) {
- int offset = indicesToOffset_${o}(m);
- vec2 coords = offsetToCoords(offset, ${e}, ${r});
- return ${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords);
- }
- `}}n.CoordsGlslLib=p},8520:(_,n)=>{var s;Object.defineProperty(n,"__esModule",{value:!0}),n.TopologicalSortGlslRoutines=n.GlslLibRoutineNode=n.GlslLibRoutine=n.GlslLib=n.GlslContext=n.FunctionType=void 0,(s=n.FunctionType||(n.FunctionType={}))[s.ValueBased=0]="ValueBased",s[s.Positional=1]="Positional",n.GlslContext=class{constructor(c,l,f,a){this.glContext=c,this.programInfo=l,this.inputTextureLayouts=f,this.outputTextureLayout=a}},n.GlslLib=class{constructor(c){this.context=c}},n.GlslLibRoutine=class{constructor(c,l){this.routineBody=c,this.dependencies=l}},n.GlslLibRoutineNode=class{constructor(c,l,f){this.name=c,this.dependencies=f||[],l&&(this.routineBody=l)}addDependency(c){c&&this.dependencies.push(c)}},n.TopologicalSortGlslRoutines=class{static returnOrderedNodes(c){if(!c||c.length===0)return[];if(c.length===1)return c;const l=new Set,f=new Set,a=new Array;return this.createOrderedNodes(c,l,f,a),a}static createOrderedNodes(c,l,f,a){for(let h=0;h0)for(let p=0;p{Object.defineProperty(n,"__esModule",{value:!0}),n.EncodingGlslLib=void 0;const c=s(8520);class l extends c.GlslLib{constructor(a){super(a)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new c.GlslLibRoutine(`highp vec4 encode(highp float f) {
- return vec4(f, 0.0, 0.0, 0.0);
- }
- `)}}decodeFloat32(){return{decode:new c.GlslLibRoutine(`highp float decode(highp vec4 rgba) {
- return rgba.r;
- }
- `)}}encodeUint8(){const a=l.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new c.GlslLibRoutine(`
- highp vec4 encode(highp float f) {
- highp float F = abs(f);
- highp float Sign = step(0.0,-f);
- highp float Exponent = floor(log2(F));
- highp float Mantissa = (exp2(- Exponent) * F);
- Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa));
- highp vec4 rgba;
- rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0));
- rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0);
- rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0)));
- rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0)));
- ${a}
- rgba = rgba / 255.0; // values need to be normalized to [0,1]
- return rgba;
- }
- `)}}decodeUint8(){const a=l.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new c.GlslLibRoutine(`
- highp float decode(highp vec4 rgba) {
- rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255]
- ${a}
- highp float Sign = 1.0 - step(128.0,rgba[0])*2.0;
- highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0;
- highp float Mantissa = mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000);
- highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 ));
- return Result;
- }
- `)}}static isLittleEndian(){const a=new ArrayBuffer(4),h=new Uint32Array(a),p=new Uint8Array(a);if(h[0]=3735928559,p[0]===239)return!0;if(p[0]===222)return!1;throw new Error("unknown endianness")}}n.EncodingGlslLib=l},9894:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FragColorGlslLib=void 0;const c=s(8520),l=s(5060);class f extends c.GlslLib{constructor(h){super(h)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){const h=(0,l.getGlsl)(this.context.glContext.version);return{setFragColor:new c.GlslLibRoutine(`
- void setFragColor(float value) {
- ${h.output} = encode(value);
- }
- `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new c.GlslLibRoutine(`
- float getColorAsFloat(vec4 color) {
- return decode(color);
- }
- `,["encoding.decode"])}}}n.FragColorGlslLib=f},2848:(_,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.replaceInlines=void 0;const s=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;n.replaceInlines=function(c){const l={};let f;for(;(f=s.exec(c))!==null;){const a=f[3].split(",").map(h=>{const p=h.trim().split(" ");return p&&p.length===2?{type:p[0],name:p[1]}:null}).filter(h=>h!==null);l[f[2]]={params:a,body:f[4]}}for(const a in l){const h="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",a),p=new RegExp(h,"gm");for(;(f=p.exec(c))!==null;){const u=f[1],o=f[2],t=f[3].split(","),e=u?`${u} ${o};`:"";let r=l[a].body,i="";l[a].params.forEach((g,m)=>{g&&(i+=`${g.type} ${g.name} = ${t[m]};
-`)}),r=`${i}
- ${r}`,r=r.replace("return",`${o} = `);const d=`
- ${e}
- {
- ${r}
- }
- `;c=c.replace(f[0],d)}}return c.replace(s,"")}},8879:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.GlslPreprocessor=void 0;const c=s(8520),l=s(2848),f=s(5483),a=s(5060);n.GlslPreprocessor=class{constructor(h,p,u,o){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new c.GlslContext(h,p,u,o),Object.keys(f.glslRegistry).forEach(e=>{const r=new f.glslRegistry[e](this.context);this.libs[e]=r});const t=this.glslLibRoutineDependencyGraph;for(const e in this.libs){const r=this.libs[e].getFunctions();for(const i in r){const d=e+"."+i;let g;t[d]?(g=t[d],g.routineBody=r[i].routineBody):(g=new c.GlslLibRoutineNode(d,r[i].routineBody),t[d]=g);const m=r[i].dependencies;if(m)for(let b=0;b{const o=u.split(".")[1];h.indexOf(o)!==-1&&p.push(this.glslLibRoutineDependencyGraph[u])}),c.TopologicalSortGlslRoutines.returnOrderedNodes(p)}getUniforms(h,p){const u=[];if(h)for(const o of h)u.push(`uniform sampler2D ${o};`);if(p)for(const o of p)u.push(`uniform ${o.type} ${o.name}${o.arrayLength?`[${o.arrayLength}]`:""};`);return u.join(`
-`)}}},5483:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.glslRegistry=void 0;const c=s(5107),l=s(7341),f=s(9894),a=s(2655),h=s(3891);n.glslRegistry={encoding:l.EncodingGlslLib,fragcolor:f.FragColorGlslLib,vec:h.VecGlslLib,shapeUtils:a.ShapeUtilsGlslLib,coordinates:c.CoordsGlslLib}},2655:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ShapeUtilsGlslLib=void 0;const c=s(8520);class l extends c.GlslLib{constructor(a){super(a)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){const a=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((p,u)=>{const o=this.context.inputTextureLayouts[u].unpackedShape;if(o.length<=a){const t=o.length,e=a-t,r=`bcastIndices_${p}`;let i="";for(let g=0;g{const o=this.context.inputTextureLayouts[u].shape;if(!(o.length<2||o.length>a)){const t=o.length,e=a-t,r=`bcastMatmulIndices_${p}`;let i="";for(let g=0;g{const u=this.context.inputTextureLayouts[p].shape,o=this.context.inputTextureLayouts[p].strides,t=u.length;let e=`indicesToOffset_${h}`;a[e]=new c.GlslLibRoutine(l.indexToOffsetSingle(e,t,o)),e=`indicesToOffset_${h}_T`,a[e]=new c.GlslLibRoutine(l.indexToOffsetSingle(e,t,o.slice().reverse()))}),a}static indexToOffsetSingle(a,h,p){let u="";for(let o=h-1;o>=0;--o)u+=`
- offset += indices[${o}] * ${p[o]};
- `;return`
- int ${a}(int indices[${h}]) {
- int offset = 0;
- ${u}
- return offset;
- }
- `}offsetToIndices(){const a={};return this.context.programInfo.inputNames.forEach((h,p)=>{const u=this.context.inputTextureLayouts[p].shape,o=this.context.inputTextureLayouts[p].strides,t=u.length;let e=`offsetToIndices_${h}`;a[e]=new c.GlslLibRoutine(l.offsetToIndicesSingle(e,t,o)),e=`offsetToIndices_${h}_T`,a[e]=new c.GlslLibRoutine(l.offsetToIndicesSingle(e,t,o.slice().reverse()))}),a}static offsetToIndicesSingle(a,h,p){const u=[];for(let o=0;o{const u=this.context.inputTextureLayouts[p].shape,o=u.length,t=`incrementIndices_${h}`;let e="";for(let i=0;i= 0; --i) {
- if(i > axis) continue;
- indices[i] += 1;
- if(indices[i] < shape[i]) {
- break;
- }
- indices[i] = 0;
- }
- }
- `;a[t]=new c.GlslLibRoutine(r)}),a}}n.ShapeUtilsGlslLib=l},5060:(_,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getDefaultFragShaderMain=n.getFragShaderPreamble=n.getVertexShaderSource=n.getGlsl=void 0;const s={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},c={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function l(f){return f===1?s:c}n.getGlsl=l,n.getVertexShaderSource=function(f){const a=l(f);return`${a.version}
- precision highp float;
- ${a.attribute} vec3 position;
- ${a.attribute} vec2 textureCoord;
-
- ${a.varyingVertex} vec2 TexCoords;
-
- void main()
- {
- gl_Position = vec4(position, 1.0);
- TexCoords = textureCoord;
- }`},n.getFragShaderPreamble=function(f){const a=l(f);return`${a.version}
- precision highp float;
- precision highp int;
- precision highp sampler2D;
- ${a.varyingFrag} vec2 TexCoords;
- ${a.outputDeclaration}
- const vec2 halfCR = vec2(0.5, 0.5);
-
- // Custom vector types to handle higher dimensionalities.
- struct ivec5
- {
- int x;
- int y;
- int z;
- int w;
- int u;
- };
-
- struct ivec6
- {
- int x;
- int y;
- int z;
- int w;
- int u;
- int v;
- };
-
- int imod(int x, int y) {
- return x - y * (x / y);
- }
-
- `},n.getDefaultFragShaderMain=function(f,a){return`
- void main() {
- int indices[${a}];
- toVec(TexCoords, indices);
- vec4 result = vec4(process(indices));
- ${l(f).output} = result;
- }
- `}},3891:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.VecGlslLib=void 0;const c=s(8520);class l extends c.GlslLib{constructor(a){super(a)}getCustomTypes(){return{}}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign({},this.binaryVecFunctions()),this.copyVec()),this.setVecItem()),this.getVecItem())}binaryVecFunctions(){const a=this.context.outputTextureLayout.shape.length,h={add:"+=",sub:"-=",mul:"*=",div:"/="},p={};for(const u in h){const o=`${u}Vec`;let t="";for(let r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLInferenceHandler=void 0;const c=s(6231),l=s(9162),f=s(2517),a=s(2403),h=s(7019),p=s(8710),u=s(5611),o=s(4057),t=s(2039);n.WebGLInferenceHandler=class{constructor(e){this.session=e,this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map}calculateTextureWidthAndHeight(e,r){return(0,o.calculateTextureWidthAndHeight)(this.session.layoutStrategy,e,r)}executeProgram(e,r){if(r.length{const S=v.map(O=>`${O.unpackedShape.join(",")};${O.width}x${O.height}`).join("_");let A=w.name;return w.cacheHint&&(A+="["+w.cacheHint+"]"),A+=":"+S,A})(e,i);let g=this.session.programManager.getArtifact(d);const m=g?g.programInfo:typeof e.get=="function"?e.get():e,b=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,m.output.dims,m.output.textureType),y=this.createTextureData(b,m.output.type);return g||(g=this.session.programManager.build(m,i,y),this.session.programManager.setArtifact(d,g)),this.runProgram(g,i,y),y}run(e,r){return this.executeProgram(e,r).tensor}runProgram(e,r,i){for(let d=0;dthis.readTexture(m),async b=>this.readTextureAsync(m),void 0,g),texture:i});return this.setTextureData(m.tensor.dataId,m,e.isPacked),m}getTextureData(e,r=!1){return 
this.session.isInitializer(e)?this.session.getTextureData(e,r):r?this.packedTextureDataCache.get(e):this.unpackedTextureDataCache.get(e)}setTextureData(e,r,i=!1){this.session.isInitializer(e)?this.session.setTextureData(e,r,i):(i?this.packedTextureDataCache:this.unpackedTextureDataCache).set(e,r)}isTextureLayoutCached(e,r=!1){return!!this.getTextureData(e.dataId,r)}dispose(){this.session.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.unpackedTextureDataCache=new Map}readTexture(e){return e.isPacked?this.readTexture(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTexture(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,p.encodeAsUint8)(this,e))}async readTextureAsync(e){return e.isPacked?this.readTextureAsync(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTextureAsync(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,p.encodeAsUint8)(this,e))}pack(e){return this.executeProgram((0,a.createPackProgramInfoLoader)(this,e.tensor),[e.tensor])}unpack(e){return this.executeProgram((0,u.createUnpackProgramInfoLoader)(this,e.tensor),[e.tensor])}}},1640:function(_,n,s){var c=this&&this.__createBinding||(Object.create?function(X,Q,ee,ue){ue===void 0&&(ue=ee);var Ae=Object.getOwnPropertyDescriptor(Q,ee);Ae&&!("get"in Ae?!Q.__esModule:Ae.writable||Ae.configurable)||(Ae={enumerable:!0,get:function(){return Q[ee]}}),Object.defineProperty(X,ue,Ae)}:function(X,Q,ee,ue){ue===void 
0&&(ue=ee),X[ue]=Q[ee]}),l=this&&this.__setModuleDefault||(Object.create?function(X,Q){Object.defineProperty(X,"default",{enumerable:!0,value:Q})}:function(X,Q){X.default=Q}),f=this&&this.__importStar||function(X){if(X&&X.__esModule)return X;var Q={};if(X!=null)for(var ee in X)ee!=="default"&&Object.prototype.hasOwnProperty.call(X,ee)&&c(Q,X,ee);return l(Q,X),Q};Object.defineProperty(n,"__esModule",{value:!0}),n.WEBGL_OP_RESOLVE_RULES=void 0;const a=s(2898),h=f(s(7839)),p=s(4196),u=s(2069),o=s(8138),t=s(9663),e=s(5193),r=s(7992),i=s(1253),d=s(4776),g=s(6572),m=s(3346),b=s(5623),y=s(2870),w=s(2143),v=s(4939),S=s(718),A=s(2268),O=s(8117),x=s(2278),I=s(5524),$=s(5975),z=s(3933),L=s(6558),N=s(5723),H=s(3738),M=f(s(4909)),j=s(8428),Z=s(9793);n.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",M.abs],["Acos","","7+",M.acos],["Add","","7+",h.add],["And","","7+",h.and],["Asin","","7+",M.asin],["Atan","","7+",M.atan],["AveragePool","","7+",w.averagePool,w.parseAveragePoolAttributes],["BatchNormalization","","7+",a.batchNormalization,a.parseBatchNormalizationAttributes],["Cast","","6+",p.cast,p.parseCastAttributes],["Ceil","","6+",M.ceil],["Clip","","6-10",M.clip,M.parseClipAttributes],["Clip","","11+",M.clipV11],["Concat","","4+",u.concat,u.parseConcatAttributes],["Conv","","1+",o.conv,o.parseConvAttributes],["ConvTranspose","","1+",t.convTranspose,t.parseConvTransposeAttributes],["Cos","","7+",M.cos],["Div","","7+",h.div],["Dropout","","7+",M.identity],["DepthToSpace","","1+",e.depthToSpace,e.parseDepthToSpaceAttributes],["Equal","","7+",h.equal],["Elu","","6+",M.elu,M.parseEluAttributes],["Exp","","6+",M.exp],["Flatten","","1+",r.flatten,r.parseFlattenAttributes],["Floor","","6+",M.floor],["FusedConv","com.microsoft","1+",o.conv,o.parseConvAttributes],["Gather","","1+",i.gather,i.parseGatherAttributes],["Gemm","","7-10",d.gemm,d.parseGemmAttributesV7],["Gemm","","11+",d.gemm,d.parseGemmAttributesV11],["GlobalAveragePool","","1+",w.globalAveragePool,w.parseGlobalAveragePoolAttributes]
,["GlobalMaxPool","","1+",w.globalMaxPool],["Greater","","7+",h.greater],["Identity","","1+",M.identity],["ImageScaler","","1+",g.imageScaler,g.parseImageScalerAttributes],["InstanceNormalization","","6+",m.instanceNormalization,m.parseInstanceNormalizationAttributes],["LeakyRelu","","6+",M.leakyRelu,M.parseLeakyReluAttributes],["Less","","7+",h.less],["Log","","6+",M.log],["MatMul","","1+",b.matMul,b.parseMatMulAttributes],["MaxPool","","1+",w.maxPool,w.parseMaxPoolAttributes],["Mul","","7+",h.mul],["Neg","","6+",M.neg],["Not","","1+",M.not],["Or","","7+",h.or],["Pad","","2-10",y.padV2,y.parsePadAttributesV2],["Pad","","11+",y.padV11,y.parsePadAttributesV11],["Pow","","7+",h.pow],["PRelu","","7+",h.pRelu],["ReduceLogSum","","1+",v.reduceLogSum,v.parseReduceAttributes],["ReduceMax","","1+",v.reduceMax,v.parseReduceAttributes],["ReduceMean","","1+",v.reduceMean,v.parseReduceAttributes],["ReduceMin","","1+",v.reduceMin,v.parseReduceAttributes],["ReduceProd","","1+",v.reduceProd,v.parseReduceAttributes],["ReduceSum","","1-12",v.reduceSum,v.parseReduceAttributes],["ReduceSumSquare","","1+",v.reduceLogSumSquare,v.parseReduceAttributes],["Relu","","6+",M.relu],["Reshape","","5+",S.reshape],["Resize","","10",A.resize,A.parseResizeAttributesV10],["Resize","","11+",A.resize,A.parseResizeAttributesV11],["Shape","","1+",O.shape],["Sigmoid","","6+",M.sigmoid],["Sin","","7+",M.sin],["Slice","","10+",x.sliceV10],["Slice","","1-9",x.slice,x.parseSliceAttributes],["Softmax","","1-12",I.softmax,I.parseSoftmaxAttributes],["Softmax","","13+",I.softmaxV13,I.parseSoftmaxAttributesV13],["Split","","2-12",$.split,$.parseSplitAttributes],["Sqrt","","6+",M.sqrt],["Squeeze","","1-12",z.squeeze,z.parseSqueezeAttributes],["Squeeze","","13+",z.squeezeV13],["Sub","","7+",h.sub],["Sum","","6+",L.sum],["Tan","","7+",M.tan],["Tanh","","6+",M.tanh],["Tile","","6+",N.tile],["Transpose","","1+",H.transpose,H.parseTransposeAttributes],["Upsample","","7-8",Z.upsample,Z.parseUpsampleAttributesV7],["Upsam
ple","","9",Z.upsample,Z.parseUpsampleAttributesV9],["Unsqueeze","","1-12",j.unsqueeze,j.parseUnsqueezeAttributes],["Unsqueeze","","13+",j.unsqueezeV13],["Xor","","7+",h.xor]]},2898:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseBatchNormalizationAttributes=n.batchNormalization=void 0;const c=s(246),l=s(5060),f=s(2039),a={name:"BatchNormalization",inputNames:["A","Scale","B","Mean","Variance"],inputTypes:[f.TextureType.unpacked,f.TextureType.unpacked,f.TextureType.unpacked,f.TextureType.unpacked,f.TextureType.unpacked]};n.batchNormalization=(u,o,t)=>(p(o),[u.run(Object.assign(Object.assign({},a),{cacheHint:t.cacheKey,get:()=>h(u,o,t)}),o)]),n.parseBatchNormalizationAttributes=u=>{const o=u.attributes.getFloat("epsilon",1e-5),t=u.attributes.getFloat("momentum",.9),e=u.attributes.getInt("spatial",1);return(0,c.createAttributeWithCacheKey)({epsilon:o,momentum:t,spatial:e})};const h=(u,o,t)=>{const e=(0,l.getGlsl)(u.session.backend.glContext.version),r=o[0].dims.length,[i,d]=u.calculateTextureWidthAndHeight(o[1].dims,f.TextureType.unpacked),g=`
- float process(int[${r}] indices) {
- vec2 position = offsetToCoords(indices[1], ${i}, ${d});
- float scale = getColorAsFloat(${e.texture2D}(Scale, position));
- float mean = getColorAsFloat(${e.texture2D}(Mean, position));
- float variance = getColorAsFloat(${e.texture2D}(Variance, position));
- float b = getColorAsFloat(${e.texture2D}(B, position));
-
- return scale * ( (_A(indices) - mean) / sqrt(variance + float(${t.epsilon})) ) + b;
- }`;return Object.assign(Object.assign({},a),{output:{dims:o[0].dims,type:o[0].type,textureType:f.TextureType.unpacked},shaderSource:g})},p=u=>{if(!u||u.length!==5)throw new Error("BatchNormalization requires 5 inputs.");const o=u[0],t=u[1],e=u[2],r=u[3],i=u[4];if(o.dims.length<3||t.dims.length!==1||e.dims.length!==1||r.dims.length!==1||i.dims.length!==1)throw new Error("invalid input shape.");if(t.dims[0]!==o.dims[1]||e.dims[0]!==o.dims[1]||r.dims[0]!==o.dims[1]||i.dims[0]!==o.dims[1])throw new Error("invalid input shape.");if(o.type!=="float32"&&o.type!=="float64"||t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64"||i.type!=="float32"&&i.type!=="float64")throw new Error("invalid input tensor types.")}},7839:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.xor=n.sub=n.pRelu=n.pow=n.or=n.mul=n.less=n.greater=n.equal=n.div=n.and=n.add=n.glslPRelu=n.glslPow=n.glslXor=n.glslOr=n.glslAnd=n.glslLess=n.glslGreater=n.glslEqual=n.glslSub=n.glslMul=n.glslDiv=n.glslAdd=void 0;const c=s(2517),l=s(8520),f=s(5060),a=s(2039);function h(){const v="add_";return{body:`
- float ${v}(float a, float b) {
- return a + b;
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return v1 + v2;
- }
- `,name:v,type:l.FunctionType.ValueBased}}function p(){const v="div_";return{body:`
- float ${v}(float a, float b) {
- return a / b;
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return v1 / v2;
- }
- `,name:v,type:l.FunctionType.ValueBased}}function u(){const v="mul_";return{body:`
- float ${v}(float a, float b) {
- return a * b;
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return v1 * v2;
- }
- `,name:v,type:l.FunctionType.ValueBased}}function o(){const v="sub_";return{body:`
- float ${v}(float a, float b) {
- return a - b;
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return v1 - v2;
- }
- `,name:v,type:l.FunctionType.ValueBased}}function t(){const v="equal_";return{body:`
- float ${v}(float a, float b) {
- return float(a == b);
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return vec4(equal(v1, v2));
- }
- `,name:v,type:l.FunctionType.ValueBased}}function e(){const v="greater_";return{body:`
- float ${v}(float a, float b) {
- return float(a > b);
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return vec4( v1.r > v2.r ,
- v1.g > v2.g,
- v1.b > v2.b,
- v1.a > v2.a );
- }
- `,name:v,type:l.FunctionType.ValueBased}}function r(){const v="less_";return{body:`
- float ${v}(float a, float b) {
- return float(a < b);
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return vec4( v1.r < v2.r ,
- v1.g < v2.g,
- v1.b < v2.b,
- v1.a < v2.a );
- }
- `,name:v,type:l.FunctionType.ValueBased}}function i(){const v="and_";return{body:`
- float ${v}(float a, float b) {
- return float( bool(a) && bool(b) );
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- bvec4 b1 = bvec4(v1);
- bvec4 b2 = bvec4(v2);
- return vec4( b1.r && b2.r ,
- b1.g && b2.g,
- b1.b && b2.b,
- b1.a && b2.a );
- }
- `,name:v,type:l.FunctionType.ValueBased}}function d(){const v="or_";return{body:`
- float ${v}(float a, float b) {
- return float( bool(a) || bool(b) );
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- bvec4 b1 = bvec4(v1);
- bvec4 b2 = bvec4(v2);
- return vec4( b1.r || b2.r ,
- b1.g || b2.g,
- b1.b || b2.b,
- b1.a || b2.a );
- }
- `,name:v,type:l.FunctionType.ValueBased}}function g(){const v="xor_";return{body:`
- float ${v}(float a, float b) {
- return float( bool(a) ^^ bool(b) );
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- bvec4 b1 = bvec4(v1);
- bvec4 b2 = bvec4(v2);
- return vec4( b1.r ^^ b2.r ,
- b1.g ^^ b2.g,
- b1.b ^^ b2.b,
- b1.a ^^ b2.a );
- }
- `,name:v,type:l.FunctionType.ValueBased}}function m(){return function(v){const S=`${v}_`;return{body:`
- float ${S}(float a, float b) {
- return ${v}(a, b);
- }
- vec4 ${S}(vec4 v1, vec4 v2) {
- return ${v}(v1, v2);
- }
- `,name:S,type:l.FunctionType.ValueBased}}("pow")}function b(){const v="prelu_";return{body:`
- float ${v}(float a, float b) {
- return a < 0.0 ? a * b: a;
- }
- vec4 ${v}(vec4 v1, vec4 v2) {
- return vec4(
- v1.r < 0.0 ? v1.r * v2.r: v1.r,
- v1.g < 0.0 ? v1.g * v2.g: v1.g,
- v1.b < 0.0 ? v1.b * v2.b: v1.b,
- v1.a < 0.0 ? v1.a * v2.a: v1.a
- );
- }
- `,name:v,type:l.FunctionType.ValueBased}}n.glslAdd=h,n.glslDiv=p,n.glslMul=u,n.glslSub=o,n.glslEqual=t,n.glslGreater=e,n.glslLess=r,n.glslAnd=i,n.glslOr=d,n.glslXor=g,n.glslPow=m,n.glslPRelu=b;const y=(v,S,A,O=S[0].type,x)=>{const I=v.session.pack?a.TextureType.packed:a.TextureType.unpacked;return{name:A.name,inputNames:["A","B"],inputTypes:[I,I],cacheHint:x,get:()=>w(v,S,A,O)}},w=(v,S,A,O=S[0].type)=>{const x=v.session.pack?a.TextureType.packed:a.TextureType.unpacked,I=!c.ShapeUtil.areEqual(S[0].dims,S[1].dims);let $=S[0].dims;const z=v.session.pack;if(I){const H=c.BroadcastUtil.calcShape(S[0].dims,S[1].dims,!1);if(!H)throw new Error("Can't perform binary op on the given tensors");$=H;const M=$.length,j=S[0].dims.length!==0?S[0].dims.length:1,Z=S[1].dims.length!==0?S[1].dims.length:1,X=S[0].dims.length!==0?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",Q=S[1].dims.length!==0?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",ee=(0,f.getGlsl)(v.session.backend.glContext.version),ue=z?`
- ${A.body}
- void main() {
- vec4 a = getAAtOutCoords();
- vec4 b = getBAtOutCoords();
- vec4 result = ${A.name}(a, b);
- ${ee.output} = result;
- }`:`
- ${A.body}
- float process(int indices[${M}]) {
- int aindices[${j}];
- int bindices[${Z}];
- ${X}
- ${Q}
- return ${A.name}(_A(aindices), _B(bindices));
- (modCoord.y == 0. ? frag.b : frag.a);
- }
- `}},2870:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parsePadAttributesV11=n.padV11=n.parsePadAttributesV2=n.padV2=void 0;const c=s(246),l=s(2517),f=s(5060),a=s(2039),h={name:"Pad",inputNames:["A"],inputTypes:[a.TextureType.unpacked]};n.padV2=(g,m,b)=>(o(m),[g.run(Object.assign(Object.assign({},h),{cacheHint:b.cacheKey,get:()=>u(g,m[0],b)}),m)]),n.parsePadAttributesV2=g=>{const m=g.attributes.getString("mode","constant"),b=g.attributes.getFloat("value",0),y=g.attributes.getInts("pads");return(0,c.createAttributeWithCacheKey)({mode:m,value:b,pads:y})},n.padV11=(g,m,b)=>{t(m);const y=p(g,m,b);return(0,n.padV2)(g,[m[0]],y)},n.parsePadAttributesV11=g=>g.attributes.getString("mode","constant");const p=(g,m,b)=>{if(!g.session.isInitializer(m[1].dataId)||m.length>=3&&!g.session.isInitializer(m[2].dataId))throw new Error("dynamic pad attributes are not allowed");const y=Array.from(m[1].integerData),w=m.length>=3?m[2].floatData[0]:0;return(0,c.createAttributeWithCacheKey)({mode:b,pads:y,value:w})},u=(g,m,b)=>{const y=l.ShapeUtil.padShape(m.dims.slice(),b.pads),w=y.length,v=`
- ${e(g,m,b)}
- float process(int[${w}] indices) {
- return padA(indices);
- }`;return{name:"Pad",inputNames:["A"],inputTypes:[a.TextureType.unpacked],output:{dims:y,type:m.type,textureType:a.TextureType.unpacked},shaderSource:v}},o=g=>{if(!g||g.length!==1)throw new Error("Pad requires 1 input");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type.")},t=g=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Pad requires 2 or 3 inputs");if(g[1].type!=="int32")throw new Error("Invalid input type.");if(g.length>=3&&g[2].type==="string")throw new Error("Invalid input type.")},e=(g,m,b)=>{const y=(0,f.getGlsl)(g.session.backend.glContext.version),[w,v]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),S=l.ShapeUtil.computeStrides(m.dims);switch(b.mode){case"constant":return r(y,m.dims,S,w,v,b.pads,b.value);case"reflect":return i(y,m.dims,S,w,v,b.pads);case"edge":return d(y,m.dims,S,w,v,b.pads);default:throw new Error("Invalid mode")}},r=(g,m,b,y,w,v,S)=>{const A=m.length;let O="";for(let x=A-1;x>=0;--x)O+=`
- k = m[${x}] - ${v[x]};
- if (k < 0) return constant;
- if (k >= ${m[x]}) return constant;
- offset += k * ${b[x]};
- `;return`
- float padA(int m[${A}]) {
- const float constant = float(${S});
- int offset = 0;
- int k = 0;
- ${O}
- vec2 coords = offsetToCoords(offset, ${y}, ${w});
- float value = getColorAsFloat(${g.texture2D}(A, coords));
- return value;
- }
- `},i=(g,m,b,y,w,v)=>{const S=m.length;let A="";for(let O=S-1;O>=0;--O)A+=`
- k = m[${O}] - ${v[O]};
- if (k < 0) { k = -k; }
- {
- const int _2n_1 = ${2*(m[O]-1)};
- k = int( mod( float(k), float(_2n_1) ) ) ;
- if(k >= ${m[O]}) { k = _2n_1 - k; }
- }
- offset += k * ${b[O]};
- `;return`
- float padA(int m[${S}]) {
- int offset = 0;
- int k = 0;
- ${A}
- vec2 coords = offsetToCoords(offset, ${y}, ${w});
- float value = getColorAsFloat(${g.texture2D}(A, coords));
- return value;
- }
- `},d=(g,m,b,y,w,v)=>{const S=m.length;let A="";for(let O=S-1;O>=0;--O)A+=`
- k = m[${O}] - ${v[O]};
- if (k < 0) k = 0;
- if (k >= ${m[O]}) k = ${m[O]-1};
- offset += k * ${b[O]};
- `;return`
- float padA(int m[${S}]) {
- int offset = 0;
- int k = 0;
- ${A}
- vec2 coords = offsetToCoords(offset, ${y}, ${w});
- float value = getColorAsFloat(${g.texture2D}(A, coords));
- return value;
- }
- `}},2143:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.globalMaxPool=n.parseMaxPoolAttributes=n.maxPool=n.parseGlobalAveragePoolAttributes=n.globalAveragePool=n.parseAveragePoolAttributes=n.averagePool=void 0;const c=s(246),l=s(2517),f=s(2039);n.averagePool=(d,g,m)=>{t(g);const b={name:"AveragePool",inputNames:["X"],inputTypes:[f.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},b),{get:()=>a(g,b,!1,m)}),g)]},n.parseAveragePoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),b=d.attributes.getInt("count_include_pad",0)!==0,y=d.attributes.getInts("kernel_shape"),w=d.attributes.getInts("strides",[]),v=d.attributes.getInts("pads",[]);if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for AveragePool");return(0,c.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:b,kernelShape:y,strides:w,pads:v})};const a=(d,g,m,b)=>{const[y,w]=p(d,b,m),v=l.ShapeUtil.size(y.kernelShape);let S="";y.countIncludePad?S+=`value /= float(${v});`:S+=`value /= float(${v} - pad);`;const A=`
- ${e(d[0].dims,y,"value += _X(x);",S,"0.0")}
- `;return Object.assign(Object.assign({},g),{output:{dims:w,type:d[0].type,textureType:f.TextureType.unpacked},shaderSource:A})};n.globalAveragePool=(d,g,m)=>{t(g);const b={name:"GlobalAveragePool",inputNames:["X"],inputTypes:[f.TextureType.unpacked],cacheHint:`${m.countIncludePad}`};return[d.run(Object.assign(Object.assign({},b),{get:()=>a(g,b,!0,m)}),g)]},n.parseGlobalAveragePoolAttributes=d=>{const g=d.attributes.getInt("count_include_pad",0)!==0;return(0,c.createAttributeWithCacheKey)({autoPad:"",ceilMode:0,countIncludePad:g,kernelShape:[],strides:[],pads:[]})},n.maxPool=(d,g,m)=>{t(g);const b={name:"MaxPool",inputNames:["X"],inputTypes:[f.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},b),{get:()=>h(g,b,!1,m)}),g)]},n.parseMaxPoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),b=d.attributes.getInts("kernel_shape"),y=d.attributes.getInts("strides",[]),w=d.attributes.getInts("pads",[]),v=d.attributes.getInt("storage_order",0),S=d.attributes.getInts("dilations",[]);if(v!==0)throw new Error("column major storage order is not yet supported for MaxPool");if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for MaxPool");return(0,c.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:!1,kernelShape:b,strides:y,pads:w,storageOrder:v,dilations:S})};const h=(d,g,m,b)=>{const[y,w]=p(d,b,m),v=`
- ${e(d[0].dims,y,`
- value = max(_X(x), value);
- `,"","-1e5")}
- `;return Object.assign(Object.assign({},g),{output:{dims:w,type:d[0].type,textureType:f.TextureType.unpacked},shaderSource:v})},p=(d,g,m)=>{const b=d[0].dims.slice(),y=Object.hasOwnProperty.call(g,"dilations"),w=g.kernelShape.slice(),v=g.strides.slice(),S=y?g.dilations.slice():[],A=g.pads.slice();l.PoolConvUtil.adjustPoolAttributes(m,b,w,v,S,A);const O=l.PoolConvUtil.computePoolOutputShape(m,b,v,S,w,A,g.autoPad),x=Object.assign({},g);return y?Object.assign(x,{kernelShape:w,strides:v,pads:A,dilations:S,cacheKey:g.cacheKey}):Object.assign(x,{kernelShape:w,strides:v,pads:A,cacheKey:g.cacheKey}),[x,O]},u={autoPad:"",ceilMode:0,countIncludePad:!1,kernelShape:[],strides:[],pads:[],storageOrder:0,dilations:[],cacheKey:""},o={name:"GlobalMaxPool",inputNames:["X"],inputTypes:[f.TextureType.unpacked]};n.globalMaxPool=(d,g)=>(t(g),[d.run(Object.assign(Object.assign({},o),{get:()=>h(g,o,!0,u)}),g)]);const t=d=>{if(!d||d.length!==1)throw new Error("Pool ops requires 1 input.");if(d[0].type!=="float32"&&d[0].type!=="float64")throw new Error("Invalid input type.")},e=(d,g,m,b,y)=>{const w=d.length;if(g.kernelShape.length<=2){const v=g.kernelShape[g.kernelShape.length-1],S=g.strides[g.strides.length-1],A=g.pads[g.pads.length/2-1],O=g.pads[g.pads.length-1],x=d[w-1];let I="",$="",z="";if(I=A+O!==0?`
- for (int i = 0; i < ${v}; i++) {
- x[${w} - 1] = indices[${w} - 1] * ${S} - ${A} + i;
- if (x[${w} - 1] < 0 || x[${w} - 1] >= ${x}) {
- pad++;
- continue;
- }
- ${m}
- }`:`
- for (int i = 0; i < ${v}; i++) {
- x[${w} - 1] = indices[${w} - 1] * ${S} - ${A} + i;
- ${m}
- }`,g.kernelShape.length===2){const L=g.kernelShape[g.kernelShape.length-2],N=g.strides[g.strides.length-2],H=g.pads[g.pads.length/2-2],M=g.pads[g.pads.length-2],j=d[w-2];$=H+M!==0?`
- for (int j = 0; j < ${L}; j++) {
- x[${w} - 2] = indices[${w} - 2] * ${N} - ${H} + j;
- if (x[${w} - 2] < 0 || x[${w} - 2] >= ${j}) {
- pad+= ${v};
- continue;
- }
- `:`
- for (int j = 0; j < ${L}; j++) {
- x[${w} - 2] = indices[${w} - 2] * ${N} - ${H} + j;
- `,z=`
- }
- `}return`
- float process(int indices[${w}]) {
- int x[${w}];
- copyVec(indices, x);
-
- float value = ${y};
- int pad = 0;
- ${$}
- ${I}
- ${z}
- ${b}
- return value;
- }
- `}{const v=l.ShapeUtil.size(g.kernelShape),S=l.ShapeUtil.computeStrides(g.kernelShape),A=S.length,O=g.pads.length,x=i(A),I=r(d,"inputDims"),$=r(g.pads,"pads"),z=r(S,"kernelStrides"),L=r(g.strides,"strides");let N="";return N=g.pads.reduce((H,M)=>H+M)?`
- if (x[j] >= inputDims[j] || x[j] < 0) {
- pad++;
- isPad = true;
- break;
- }
- }
- if (!isPad) {
- ${m}
- }`:`
- }
- ${m}
- `,`
- ${x}
- float process(int indices[${w}]) {
- int x[${w}];
- copyVec(indices, x);
- int offset[${A}];
- int pads[${O}];
- int inputDims[${w}];
- int kernelStrides[${A}];
- int strides[${A}];
- ${$}
- ${I}
- ${L}
- ${z}
-
- float value = ${y};
- int pad = 0;
- bool isPad = false;
- for (int i = 0; i < ${v}; i++) {
- offsetToIndices(i, kernelStrides, offset);
- isPad = false;
- for (int j = ${w} - ${A}; j < ${w}; j++) {
- x[j] = indices[j] * strides[j - ${w} + ${A}]
- + offset[j - ${w} + ${A}] - pads[j - 2];
- ${N}
- }
- ${b}
-
- return value;
- }
- `}},r=(d,g)=>{let m="";for(let b=0;b`
- void offsetToIndices(int offset, int[${d}] strides, out int[${d}] indices) {
- if (${d} == 0) {
- return;
- }
- for (int i = 0; i < ${d} - 1; ++i) {
- indices[i] = offset / strides[i];
- offset -= indices[i] * strides[i];
- }
- indices[${d} - 1] = offset;
- }`},4939:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reduceLogSumSquare=n.reduceLogSum=n.reduceProd=n.reduceMin=n.reduceMax=n.reduceMean=n.reduceSum=n.parseReduceAttributes=void 0;const c=s(246),l=s(782),f=s(2517),a=s(2039),h=(o,t,e,r,i)=>{u(t);const d={name:r,inputNames:["A"],inputTypes:[a.TextureType.unpacked]};return[o.run(Object.assign(Object.assign({},d),{cacheHint:e.cacheKey,get:()=>p(o,t,e,r,i,d)}),t)]};n.parseReduceAttributes=o=>{const t=o.attributes.getInts("axes",[]),e=o.attributes.getInt("keepdims",1)===1;return(0,c.createAttributeWithCacheKey)({axes:t,keepDims:e})};const p=(o,t,e,r,i,d)=>{const g=[],m=t[0].dims.length||1,b=[],y=f.ShapeUtil.normalizeAxes(e.axes,t[0].dims.length),w=i(t,y);let v=w[1];for(let A=0;A=0||y.length===0?(e.keepDims&&g.push(1),v=`
- for(int j${A} = 0; j${A} < ${t[0].dims[A]}; j${A}++) {
- inputIdx[${A}] = j${A};
- ${v}
- }`):(b.push(`inputIdx[${A}] = outputIdx[${g.length}];`),g.push(t[0].dims[A]));const S=`
- float process(int outputIdx[${g.length||1}]) {
- float value; // final result
- int inputIdx[${m}]; // addressing input data
- ${b.join(`
-`)}
- ${w[0]} // init ops for reduce max/min
- ${v}
- ${w[2]} // final computation for reduce mean
- return value;
- }`;return Object.assign(Object.assign({},d),{output:{dims:g,type:t[0].type,textureType:a.TextureType.unpacked},shaderSource:S})},u=o=>{if(!o||o.length!==1)throw new Error("Reduce op requires 1 input.");if(l.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invalid input type.")};n.reduceSum=(o,t,e)=>h(o,t,e,"ReduceSum",()=>["value = 0.0;","value += _A(inputIdx);",""]),n.reduceMean=(o,t,e)=>h(o,t,e,"ReduceMean",(r,i)=>{let d=1;for(let g=0;g=0||i.length===0)&&(d*=r[0].dims[g]);return["value = 0.0;","value += _A(inputIdx);",`value /= ${d}.;`]}),n.reduceMax=(o,t,e)=>h(o,t,e,"ReduceMax",(r,i)=>{const d=[];for(let g=0;g=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(`
-`)}
-value = _A(inputIdx);`,"value = max(value, _A(inputIdx));",""]}),n.reduceMin=(o,t,e)=>h(o,t,e,"ReduceMin",(r,i)=>{const d=[];for(let g=0;g=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(`
-`)}
-value = _A(inputIdx);`,"value = min(value, _A(inputIdx));",""]}),n.reduceProd=(o,t,e)=>h(o,t,e,"ReduceProd",()=>["value = 1.0;","value *= _A(inputIdx);",""]),n.reduceLogSum=(o,t,e)=>h(o,t,e,"ReduceLogSum",()=>["value = 0.0;","value += _A(inputIdx);","value = log(value);"]),n.reduceLogSumSquare=(o,t,e)=>h(o,t,e,"ReduceLogSumSquare",()=>["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""])},7019:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.isReshapeCheap=n.processDims3D=n.createPackedReshape3DProgramInfoLoader=void 0;const c=s(2517),l=s(5060),f=s(2039),a=s(2827);n.createPackedReshape3DProgramInfoLoader=(h,p,u)=>{const o=(t=>({name:"Reshape (packed)",inputTypes:[f.TextureType.packed],inputNames:["A"],cacheHint:`${t}`}))(u);return Object.assign(Object.assign({},o),{get:()=>((t,e,r,i)=>{const d=e.dims,g=i;let m="";for(let w=0;w<4;w++){let v="";switch(w){case 0:v="outputCoords = rc;";break;case 1:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z);";break;case 2:v="outputCoords = ivec3(rc.x, rc.y, rc.z+1);";break;case 3:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z+1);";break;default:throw new Error}m+=`
- ${v}
- ${w>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""}
- int flattenedIndex = getFlattenedIndex(outputCoords);
-
- ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex);
- vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z));
-
- result[${w}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims);
-
- ${w>0?"}":""}
- `}const b=(0,l.getGlsl)(t.session.backend.glContext.version),y=`
- ${function(w){const v=c.ShapeUtil.computeStrides(w),S=["b","r","c"],A="index";return`
- ivec3 inputCoordsFromReshapedOutCoords(int index) {
- ${v.map((O,x)=>`int ${S[x]} = ${A} / ${O}; ${x===v.length-1?`int ${S[x+1]} = ${A} - ${S[x]} * ${O}`:`index -= ${S[x]} * ${O}`};`).join("")}
- return ivec3(b, r, c);
- }
- `}(d)}
- ${function(w){const v=c.ShapeUtil.computeStrides(w);return`
- int getFlattenedIndex(ivec3 coords) {
- // reverse y, z order
- return coords.x * ${v[0]} + coords.z * ${v[1]} + coords.y;
- }
-`}(g)}
- ${(0,a.unpackFromChannel)()}
-
- void main() {
- ivec3 rc = getOutputCoords();
-
- vec4 result = vec4(0.0);
-
- ivec3 outputCoords;
- int rows = ${g[2]};
- int cols = ${g[1]};
-
- ${m}
- ${b.output} = result;
- }
- `;return Object.assign(Object.assign({},r),{output:{dims:g,type:e.type,textureType:f.TextureType.packed},shaderSource:y,hasMain:!0})})(h,p,o,u)})},n.processDims3D=function(h){if(h.length===0)return[1,1,1];let p=1;for(let u=0;u1?h[h.length-2]:1,h[h.length-1]]},n.isReshapeCheap=function(h,p){let u=!1;return u=h.length===0||p.length===0||(h.length<2||p.length<2?h[h.length-1]===p[p.length-1]:h[h.length-1]===p[p.length-1]&&h[h.length-2]===p[p.length-2]),u}},718:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reshape=void 0;const c=s(2517);n.reshape=(l,f)=>{const a=c.ShapeUtil.calculateReshapedDims(f[0].dims,f[1].integerData);return l.session.pack?[l.reshapePacked(f[0],a)]:[l.reshapeUnpacked(f[0],a)]}},2268:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseResizeAttributesV11=n.parseResizeAttributesV10=n.resize=void 0;const c=s(5060),l=s(2039),f=s(9390),a=s(2827),h=s(9793),p={name:"Resize",inputNames:["A"],inputTypes:[l.TextureType.packed]};n.resize=(r,i,d)=>((0,h.validateInputs)(i,d),[r.run(Object.assign(Object.assign({},p),{cacheHint:d.cacheKey,get:()=>u(r,i,d)}),i)]),n.parseResizeAttributesV10=r=>(0,h.parseUpsampleAttributes)(r,10),n.parseResizeAttributesV11=r=>(0,h.parseUpsampleAttributes)(r,11);const u=(r,i,d)=>{const g=(0,c.getGlsl)(r.session.backend.glContext.version),[m,b]=o(i,d);if(m.every(N=>N===1)&&d.coordinateTransformMode!=="tf_crop_and_resize")return Object.assign(Object.assign({},p),{output:{dims:b,type:i[0].type,textureType:l.TextureType.packed},hasMain:!0,shaderSource:`void main() {
- vec4 v = ${g.texture2D}(X, TexCoords);
- ${g.output} = v;
- }`});const y=b.length;if(y<2)throw new Error(`output dimension should be at least 2, but got ${y}`);const w=b[y-2],v=b[y-1],S=i[0].dims;if(y!==S.length)throw new Error(`output dimension should match input ${S.length}, but got ${y}`);const A=S[y-2],O=S[y-1],x=m[y-2],I=m[y-1];let $="";if(d.mode!=="linear")throw new Error(`resize (packed) does not support mode: '${d.mode}'`);switch(d.coordinateTransformMode){case"asymmetric":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- return vec4(coords) / scaleWHWH;
- }
- `;break;case"half_pixel":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- return (vec4(coords) + 0.5) / scaleWHWH - 0.5;
- }
- `;break;case"pytorch_half_pixel":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- vec4 fcoords = vec4(coords);
- return vec4(
- ${v}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0,
- ${w}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0,
- ${v}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0,
- ${w}.0 > 1.0 ? (fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0
- );
- }
- `;break;case"align_corners":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- vec4 resized = vec4(${v}.0 - 1.0, ${w}.0 - 1.0, ${v}.0 - 1.0,
- ${w}.0 - 1.0);
- vec4 original = vec4(${O}.0 - 1.0, ${A}.0 - 1.0, ${O}.0 - 1.0,
- ${A}.0 - 1.0);
- vec4 new_scale = original / resized;
- return vec4(coords) * new_scale;
- }
- `;break;default:throw new Error(`resize (packed) does not support coordinateTransformMode: '${d.coordinateTransformMode}'`)}const z=(0,f.getCoordsDataType)(y),L=`
- const vec2 inputWH = vec2(${A}.0, ${O}.0);
- const vec4 scaleWHWH = vec4(float(${x}), float(${I}), float(${x}), float(${I}));
- ${(0,a.unpackFromChannel)()}
- ${$}
- float getAValue(int x10, int r, int c, int d) {
- return getChannel(getA(x10, r, c, d), vec2(c, d));
- }
- void main() {
- ${z} rc = getOutputCoords();
-
- int batch = rc[0];
- int depth = rc[1];
-
- // retrieve the 4 coordinates that is used in the 4 packed output values.
- ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1);
-
- // calculate the source index in fraction
- vec4 sourceFrac = getSourceFracIndex(coords);
-
- // get the lower and upper bound of the 4 values that will be packed into one texel.
- ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy)));
- ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw)));
- ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy)));
- ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw)));
-
- bool hasNextRow = rc.w < ${w-1};
- bool hasNextCol = rc.z < ${v-1};
-
- // pack x00, x01, x10, x11's top-left corner into one vec4 structure
- vec4 topLeft = vec4(
- getAValue(batch, depth, x00.x, x00.y),
- hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0);
-
- // pack x00, x01, x10, x11's top-right corner into one vec4 structure
- vec4 topRight = vec4(
- getAValue(batch, depth, x00.x, x00.w),
- hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0);
-
- // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure
- vec4 bottomLeft = vec4(
- getAValue(batch, depth, x00.z, x00.y),
- hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0);
-
- // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure
- vec4 bottomRight = vec4(
- getAValue(batch, depth, x00.z, x00.w),
- hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.w) : 0.0);
-
- // calculate the interpolation fraction on u and v direction
- vec4 frac = vec4(sourceFrac) - floor(sourceFrac);
- vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0));
-
- vec4 top = mix(topLeft, topRight, clampFrac.ywyw);
- vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw);
- vec4 newValue = mix(top, bottom, clampFrac.xxzz);
-
- ${g.output} = vec4(newValue);
- }
- `;return Object.assign(Object.assign({},p),{output:{dims:b,type:i[0].type,textureType:l.TextureType.packed},hasMain:!0,shaderSource:L})},o=(r,i)=>{const d=r[0].dims;let g,m=i.scales;if(m.length===0){const y=r[i.scalesInputIdx];if(y&&y.size!==0){if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");m=t(y,i.mode,i.isResize)}else{const w=r[i.sizesInputIdx];if(!w||w.size===0)throw new Error("Either scales or sizes MUST be provided as input.");g=Array.from(w.integerData),m=e(g,d,i.mode,i.isResize)}}else if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");const b=g||d.map((y,w)=>Math.floor(y*m[w]));return[m,b]},t=(r,i,d)=>{const g=Array.from(r.floatData);return(0,h.scalesValidation)(g,i,d),g},e=(r,i,d,g)=>{const m=i.length,b=new Array(m);for(let y=0,w=m;y{Object.defineProperty(n,"__esModule",{value:!0}),n.shape=void 0;const c=s(9162);n.shape=(f,a)=>(l(a),[new c.Tensor([a[0].dims.length],"int32",void 0,void 0,new Int32Array(a[0].dims))]);const l=f=>{if(!f||f.length!==1)throw new Error("Shape requires 1 input.")}},2278:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sliceV10=n.parseSliceAttributes=n.slice=void 0;const c=s(246),l=s(782),f=s(2517),a=s(2039),h={name:"Slice",inputNames:["A"],inputTypes:[a.TextureType.unpacked]};n.slice=(e,r,i)=>(u(r),[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>p(e,r[0],i)}),r)]),n.parseSliceAttributes=e=>{const r=e.attributes.getInts("starts"),i=e.attributes.getInts("ends"),d=e.attributes.getInts("axes",[]);return(0,c.createAttributeWithCacheKey)({starts:r,ends:i,axes:d})};const p=(e,r,i)=>{const d=i.axes.length===0?r.dims.slice(0).map((S,A)=>A):i.axes,g=f.ShapeUtil.normalizeAxes(d,r.dims.length),m=i.starts.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:f.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),b=i.ends.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:f.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),y=r.dims.slice(),w=[];for(let 
S=0;S0&&w.push(`outputIdx[${g[S]}] += ${m[S]};`);const v=`
- float process(int outputIdx[${y.length}]) {
- ${w.join(`
- `)}
- return _A(outputIdx);
- }`;return Object.assign(Object.assign({},h),{output:{dims:y,type:r.type,textureType:a.TextureType.unpacked},shaderSource:v})},u=e=>{if(!e||e.length!==1)throw new Error("Slice requires 1 input.");if(l.NUMBER_TYPES.indexOf(e[0].type)===-1)throw new Error("Invalid input type.")};n.sliceV10=(e,r)=>{t(r);const i=o(e,r);return[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>p(e,r[0],i)}),[r[0]])]};const o=(e,r)=>{if(!e.session.isInitializer(r[1].dataId)||!e.session.isInitializer(r[2].dataId)||r.length>=4&&!e.session.isInitializer(r[3].dataId)||r.length>=5&&!e.session.isInitializer(r[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(r.length>=5&&r[4].integerData.some(m=>m!==1))throw new Error("currently non-1 steps is not supported for Slice");const i=Array.from(r[1].integerData),d=Array.from(r[2].integerData),g=r.length>=4?Array.from(r[3].integerData):[];return{starts:i,ends:d,axes:g,cacheKey:`${g};${i};${d}`}},t=e=>{if(!e||e.length<3||e.length>5)throw new Error("Invalid input number.");if(e[1].type!=="int32"||e[1].dims.length!==1)throw new Error("Invalid input type.");if(e[2].type!=="int32"||e[2].dims.length!==1)throw new Error("Invalid input type.");if(e.length>=4&&(e[3].type!=="int32"||e[3].dims.length!==1))throw new Error("Invalid input type.");if(e.length>=5&&(e[4].type!=="int32"||e[4].dims.length!==1))throw new Error("Invalid input type.")}},5524:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.softmaxV13=n.parseSoftmaxAttributesV13=n.parseSoftmaxAttributes=n.softmax=void 0;const c=s(246),l=s(2517),f=s(5060),a=s(2039),h=s(3738),p={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[a.TextureType.unpacked]},u={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[a.TextureType.unpacked,a.TextureType.unpacked]},o={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[a.TextureType.unpacked,a.TextureType.unpacked,a.TextureType.unpacked]};n.softmax=(g,m,b)=>{d(m);const 
y=m[0].dims.slice(),w=l.ShapeUtil.normalizeAxis(b.axis,y.length),v=l.ShapeUtil.sizeToDimension(y,w),S=l.ShapeUtil.sizeFromDimension(y,w);return t(g,m,b,v,S)},n.parseSoftmaxAttributes=g=>(0,c.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",1)}),n.parseSoftmaxAttributesV13=g=>(0,c.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",-1)}),n.softmaxV13=(g,m,b)=>{d(m);const y=m[0].dims.slice(),w=l.ShapeUtil.normalizeAxis(b.axis,y.length),v=y.length,S=w!==v-1,A=[];let O,x=[],I=[];S&&(x=Array.from({length:v}).map((N,H)=>H),x[w]=v-1,x[v-1]=w,x.map(N=>A.push(y[N])),O=(0,c.createAttributeWithCacheKey)({perm:x}),I=(0,h.transpose)(g,m,O));const $=S?l.ShapeUtil.sizeToDimension(A,v-1):l.ShapeUtil.sizeToDimension(y,v-1),z=S?l.ShapeUtil.sizeFromDimension(A,v-1):l.ShapeUtil.sizeFromDimension(y,v-1),L=t(g,S?I:m,b,$,z);return S?(0,h.transpose)(g,L,O):L};const t=(g,m,b,y,w)=>{const v=e(g,m[0],y,w,[y]),S=g.run(Object.assign(Object.assign({},p),{cacheHint:b.cacheKey,get:()=>v}),m),A=r(g,m[0],y,w,v.output.dims,[y]),O=g.run(Object.assign(Object.assign({},u),{cacheHint:b.cacheKey,get:()=>A}),[m[0],S]),x=i(g,m[0],y,w,v.output.dims,A.output.dims);return[g.run(Object.assign(Object.assign({},o),{cacheHint:b.cacheKey,get:()=>x}),[m[0],S,O])]},e=(g,m,b,y,w)=>{const[v,S]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),A=w.length;if(b<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1)throw new Error("Dimensionality of the output should be 1");if(w[0]!==b)throw new Error("Shape of the output should be equal to logical row count");const O=(0,f.getGlsl)(g.session.backend.glContext.version),x=`
- float process(int[${A}] indices) {
- int logical_row_start_offset = indices[0] * ${y};
-
- float max = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset, ${v},
- ${S} )));
- for(int i=1; i<${y}; ++i)
- {
- float current = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset + i,
- ${v}, ${S})));
- if(current > max)
- max = current;
- }
-
- return max;
- }`;return Object.assign(Object.assign({},p),{output:{dims:w,type:m.type,textureType:a.TextureType.unpacked},shaderSource:x})},r=(g,m,b,y,w,v)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),O=v.length;if(b<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1)throw new Error("Dimensionality of the output should be 1");if(v[0]!==b)throw new Error("Shape of the output should be equal to logical row count");if(w.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=`
- float process(int[${O}] indices) {
- int logical_row_start_offset = indices[0] * ${y};
-
- float norm_factor = 0.0;
- float max = _Max(indices);
- for(int i=0; i<${y}; ++i)
- {
- norm_factor += exp(getColorAsFloat(${(0,f.getGlsl)(g.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i,
- ${S}, ${A}))) - max);
- }
-
- return norm_factor;
- }`;return Object.assign(Object.assign({},u),{output:{dims:v,type:m.type,textureType:a.TextureType.unpacked},shaderSource:x})},i=(g,m,b,y,w,v)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),O=m.dims.length;if(b<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1||v.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==b||v[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=`
- float process(int[${O}] indices) {
-
- // get offset of current logical tensor index from the 2-D texture coordinates (TexCoords)
- int offset = coordsToOffset(TexCoords, ${S}, ${A});
-
- //determine the logical row for this index
- int logical_row_index[1];
- logical_row_index[0] = offset / ${y};
-
- float norm_factor = _Norm(logical_row_index);
-
- // avoid possible division by 0
- // if norm_facor is 0, all elements are zero
- // if so, return 0
- if(norm_factor == 0.0)
- return 0.0;
-
- return exp(_A(indices) - _Max(logical_row_index)) / norm_factor;
a=s(1670),h=f(s(7769)),p=s(9390);function u(o){let t=0;for(;tthis.isTimerResultAvailable(o)),this.getTimerResult(o)}async createAndWaitForFence(){const o=this.createFence(this.gl);return this.pollFence(o)}createFence(o){let t;const e=o,r=e.fenceSync(e.SYNC_GPU_COMMANDS_COMPLETE,0);return o.flush(),t=r===null?()=>!0:()=>{const i=e.clientWaitSync(r,0,0);return i===e.ALREADY_SIGNALED||i===e.CONDITION_SATISFIED},{query:r,isFencePassed:t}}async pollFence(o){return new Promise(t=>{this.addItemToPoll(()=>o.isFencePassed(),()=>t())})}pollItems(){const o=u(this.itemsToPoll.map(t=>t.isDoneFn));for(let t=0;t<=o;++t){const{resolveFn:e}=this.itemsToPoll[t];e()}this.itemsToPoll=this.itemsToPoll.slice(o+1)}async addItemToPoll(o,t){this.itemsToPoll.push({isDoneFn:o,resolveFn:t}),this.itemsToPoll.length>1||await(0,p.repeatedTry)(()=>(this.pollItems(),this.itemsToPoll.length===0))}}},1036:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ExecutionPlan=void 0;const c=s(6231);class l{constructor(a,h){this.op=a,this.node=h}}n.ExecutionPlan=class{constructor(f,a,h){this.graph=f,this.profiler=h,this.initialize(a)}initialize(f){this.profiler.event("session","ExecutionPlan.initialize",()=>{const a=this.graph.getNodes();if(a.length!==f.length)throw new Error("The size of nodes and OPs do not match.");this._ops=f.map((h,p)=>new l(h,a[p])),this.reset(),this._starter=[],this._ops.forEach((h,p)=>{let u=!0;for(const o of h.node.inputs)if(!this._values[o]&&this.graph.getInputIndices().indexOf(o)===-1){u=!1;break}u&&this._starter.push(p)})})}reset(){this._values=this.graph.getValues().map(f=>f.tensor)}async execute(f,a){return this.profiler.event("session","ExecutionPlan.execute",async()=>{this.reset();const h=f.createInferenceHandler(),p=this.graph.getInputIndices();if(a.length!==p.length)throw new Error(`number of input tensors don't match the number of inputs to the model: actual: ${a.length} expected: ${p.length}`);a.forEach((i,d)=>{const g=p[d];this._values[g]=i});const 
u=this._starter.slice(0),o=this.graph.getValues(),t=this.graph.getNodes();let e=0;for(;ethis._values[w]);if(g.indexOf(void 0)!==-1)throw new Error(`unresolved input detected: op: ${d.node}`);const m=g;c.Logger.verbose("ExecPlan",`Runing op:${d.node.name} (${m.map((w,v)=>`'${d.node.inputs[v]}': ${w.type}[${w.dims.join(",")}]`).join(", ")})`);const b=await this.profiler.event("node",d.node.name,async()=>d.op.impl(h,m,d.op.context));if(b.length!==d.node.outputs.length)throw new Error("the size of output does not match model definition.");b.forEach((w,v)=>{const S=d.node.outputs[v];if(this._values[S])throw new Error(`output [${S}] already has value: op:${d.node.name}`);this._values[S]=w});const y=new Set;b.forEach((w,v)=>{const S=d.node.outputs[v];for(const A of o[S].to){const O=t[A];let x=!0;for(const I of O.inputs)if(!this._values[I]){x=!1;break}x&&y.add(A)}}),u.push(...y)}const r=[];for(let i=0;i{Object.defineProperty(n,"__esModule",{value:!0}),n.Graph=void 0;const c=s(1446),l=s(7778),f=s(9395),a=s(9162),h=s(2517);var p=f.onnxruntime.experimental.fbs;n.Graph={from:(e,r)=>new t(e,r)};class u{constructor(r){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 0,r&&(this.type=h.ProtoUtil.tensorValueTypeFromProto(r.type.tensorType))}get from(){return this._from}get to(){return this._to}}class o{constructor(r,i){r instanceof c.onnx.NodeProto?(this.name=r.name,this.opType=r.opType,this.attributes=new l.Attribute(r.attribute)):r instanceof p.Node&&(this.name=i??r.name(),this.opType=r.opType(),this.attributes=new l.Attribute(h.ProtoUtil.tensorAttributesFromORTFormat(r))),this.inputs=[],this.outputs=[],this.executeNode=!0}}class t{constructor(r,i){if(!r)throw new TypeError("graph is empty");this.buildGraph(r),this.transformGraph(i),this.checkIsAcyclic()}getInputIndices(){return this._allInputIndices}getInputNames(){return this._allInputNames}getOutputIndices(){return this._allOutputIndices}getOutputNames(){return this._allOutputNames}getValues(){return 
this._allData}getNodes(){return this._nodes}buildGraph(r){if(r instanceof c.onnx.GraphProto)this.buildGraphFromOnnxFormat(r);else{if(!(r instanceof p.Graph))throw new TypeError("Graph type is not supported.");this.buildGraphFromOrtFormat(r)}}buildGraphFromOnnxFormat(r){const i=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];const d=new Map;if(!r.input)throw new Error("missing information in graph: input");const g=[];for(const m of r.input){if(i.has(m.name))throw new Error(`duplicated input name: ${m.name}`);const b=this._allData.push(new u(m))-1;i.set(m.name,b),g.push(m.name)}if(!r.initializer)throw new Error("missing information in graph: initializer");for(const m of r.initializer){let b=i.get(m.name);if(b===void 0){const y=new u;y.type={shape:{dims:h.ProtoUtil.tensorDimsFromProto(m.dims)},tensorType:h.ProtoUtil.tensorDataTypeFromProto(m.dataType)},b=this._allData.push(y)-1,i.set(m.name,b)}this._allData[b]._from=-1,this._allData[b].tensor=a.Tensor.fromProto(m)}for(let m=0;m{this._allData[g]._to.forEach(m=>{r.add(m)})});const i=Array.from(r),d=new Array(this._nodes.length).fill("white");for(;i.length>0;){const g=i.pop();d[g]==="gray"?d[g]="black":(i.push(g),d[g]="gray",this._nodes[g].outputs.forEach(m=>{const b=this._allData[m];if(b.tensor!==void 0)throw new Error("node outputs should not be initialized");if(b._from!==g)throw new Error("from property of the Value object doesn't match index of Node being processed");b._to.forEach(y=>{if(d[y]==="gray")throw new Error("model graph is cyclic");d[y]==="white"&&i.push(y)})}))}}transformGraph(r){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),this.fuseConvActivationNodes(),r&&r.transformGraph(this),this.finalizeGraph()}finalizeGraph(){let r=0;for(let i=0;i0&&(this._nodes[i].inputs.forEach(d=>{const 
g=this._allData[d]._to.indexOf(i+r);g!==-1&&(this._allData[d]._to[g]=i)}),this._nodes[i].outputs.forEach(d=>{this._allData[d]._from&&this._allData[d]._from===i+r&&(this._allData[d]._from=i)})):(r++,this._nodes[i].outputs.forEach(d=>{this._allData[d]._from=-2}),this._nodes.splice(i,1),i--);r=0;for(let i=0;i0){let d=-1;this._allData[i].from!==void 0&&this._allData[i].from!==-1?(d=this._nodes[this._allData[i].from].outputs.indexOf(i+r),d!==-1&&(this._nodes[this._allData[i].from].outputs[d]=i)):(d=this._allInputIndices.indexOf(i+r),d!==-1&&(this._allInputIndices[d]=i)),this._allData[i].to.forEach(g=>{d=this._nodes[g].inputs.indexOf(i+r),d!==-1&&(this._nodes[g].inputs[d]=i)}),this._allData[i].to.length===0&&(d=this._allOutputIndices.indexOf(i+r),d!==-1&&(this._allOutputIndices[d]=i))}}else r++,this._allData.splice(i,1),i--}deleteNode(r){const i=this._nodes[r];if(i.outputs.length>1){for(let w=1;w0)throw new Error("Node deletion with more than one output connected to other nodes is not supported. ")}i.executeNode=!1;const d=i.inputs[0],g=i.outputs[0],m=this._allData[g].to,b=this._allData[d].to.indexOf(r);if(b===-1)throw new Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[d].to.splice(b,1),this._allData[g]._to=[];const y=this._allOutputIndices.indexOf(g);if(y!==-1&&(this._allOutputIndices[y]=d),m&&m.length>0)for(const w of m){const v=this._nodes[w].inputs.indexOf(g);if(v===-1)throw new Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[w].inputs[v]=d,this._allData[d].to.push(w)}}removeAllDropoutNodes(){let r=0;for(const i of this._nodes){if(i.opType==="Dropout"){if(i.inputs.length!==1)throw new Error("Dropout nodes should only contain one input. 
");if(i.outputs.length!==1&&i.outputs.length!==2)throw new Error("Dropout nodes should contain either 1 or 2 output(s)");if(i.outputs.length===2&&this._allData[i.outputs[1]]._to.length!==0)throw new Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(r)}r++}}removeAllIdentityNodes(){let r=0;for(const i of this._nodes)i.opType==="Identity"&&this.deleteNode(r),r++}isActivation(r){switch(r.opType){case"Relu":case"Sigmoid":case"Clip":return!0;default:return!1}}fuseConvActivationNodes(){for(const r of this._nodes)if(r.opType==="Conv"){const i=this._allData[r.outputs[0]]._to;if(i.length===1&&this.isActivation(this._nodes[i[0]])){const d=this._nodes[i[0]];if(d.opType==="Clip")if(d.inputs.length===1)try{r.attributes.set("activation_params","floats",[d.attributes.getFloat("min"),d.attributes.getFloat("max")])}catch{r.attributes.set("activation_params","floats",[h.MIN_CLIP,h.MAX_CLIP])}else{if(!(d.inputs.length>=3&&this._allData[d.inputs[1]].tensor!==void 0&&this._allData[d.inputs[2]].tensor!==void 0))continue;r.attributes.set("activation_params","floats",[this._allData[d.inputs[1]].tensor.floatData[0],this._allData[d.inputs[2]].tensor.floatData[0]])}r.attributes.set("activation","string",d.opType),this.deleteNode(i[0])}}}}},6231:(_,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.now=n.Profiler=n.Logger=void 0;const s={verbose:1e3,info:2e3,warning:4e3,error:5e3,fatal:6e3},c={none:new class{log(o,t,e){}},console:new class{log(o,t,e){console.log(`${this.color(o)} ${e?"\x1B[35m"+e+"\x1B[0m ":""}${t}`)}color(o){switch(o){case"verbose":return"\x1B[34;40mv\x1B[0m";case"info":return"\x1B[32mi\x1B[0m";case"warning":return"\x1B[30;43mw\x1B[0m";case"error":return"\x1B[31;40me\x1B[0m";case"fatal":return"\x1B[101mf\x1B[0m";default:throw new Error(`unsupported severity: ${o}`)}}}},l={provider:"console",minimalSeverity:"warning",logDateTime:!0,logSourceLocation:!1};let f={"":l};function a(o,t,e,r){if(t===void 0)return 
i=o,{verbose:a.verbose.bind(null,i),info:a.info.bind(null,i),warning:a.warning.bind(null,i),error:a.error.bind(null,i),fatal:a.fatal.bind(null,i)};if(e===void 0)h(o,t);else if(typeof e=="number"&&r===void 0)h(o,t);else if(typeof e=="string"&&r===void 0)h(o,e,0,t);else{if(typeof e!="string"||typeof r!="number")throw new TypeError("input is valid");h(o,e,0,t)}var i}function h(o,t,e,r){const i=f[r||""]||f[""];s[o]{g.then(async y=>{i&&await i.end(),m(y)},async y=>{i&&await i.end(),b(y)})});if(!d&&i){const m=i.end();if(m&&typeof m.then=="function")return new Promise((b,y)=>{m.then(()=>{b(g)},w=>{y(w)})})}return g}begin(o,t,e){if(!this._started)throw new Error("profiler is not started yet");if(e===void 0){const r=(0,n.now)();return this.flush(r),new p(o,t,r,i=>this.endSync(i))}{const r=e.beginTimer();return new p(o,t,0,async i=>this.end(i),r,e)}}async end(o){const t=await o.checkTimer();this._timingEvents.length=this._flushBatchSize||o-this._flushTime>=this._flushIntervalInMilliseconds){for(const t=this._flushPointer;this._flushPointerperformance.now():Date.now},2644:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Model=void 0;const c=s(5686),l=s(1446),f=s(7070),a=s(9395),h=s(2517);var p=a.onnxruntime.experimental.fbs;n.Model=class{constructor(){}load(u,o,t){if(!t)try{return void this.loadFromOnnxFormat(u,o)}catch(e){if(t!==void 0)throw e}this.loadFromOrtFormat(u,o)}loadFromOnnxFormat(u,o){const t=l.onnx.ModelProto.decode(u);if(h.LongUtil.longToNumber(t.irVersion)<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=t.opsetImport.map(e=>({domain:e.domain,version:h.LongUtil.longToNumber(e.version)})),this._graph=f.Graph.from(t.graph,o)}loadFromOrtFormat(u,o){const t=new c.flatbuffers.ByteBuffer(u),e=p.InferenceSession.getRootAsInferenceSession(t).model();if(h.LongUtil.longToNumber(e.irVersion())<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=[];for(let 
r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.FLOAT_TYPES=n.INT_TYPES=n.NUMBER_TYPES=void 0,n.NUMBER_TYPES=["float32","float64","int32","int16","int8","uint16","uint32","uint8"],n.INT_TYPES=["int32","int16","int8","uint16","uint32","uint8"],n.FLOAT_TYPES=["float32","float64"]},1047:(_,n)=>{function s(c,l){if(l.endsWith("+")){const f=Number.parseInt(l.substring(0,l.length-1),10);return!isNaN(f)&&f<=c}if(l.split("-").length===2){const f=l.split("-"),a=Number.parseInt(f[0],10),h=Number.parseInt(f[1],10);return!isNaN(a)&&!isNaN(h)&&a<=c&&c<=h}return Number.parseInt(l,10)===c}Object.defineProperty(n,"__esModule",{value:!0}),n.resolveOperator=void 0,n.resolveOperator=function(c,l,f){for(const a of f){const h=a[0],p=a[1],u=a[2],o=a[3],t=a[4];if(c.opType===h){for(const e of l)if((e.domain===p||e.domain==="ai.onnx"&&p==="")&&s(e.version,u))return{opImpl:o,opInit:t}}}throw new TypeError(`cannot resolve operator '${c.opType}' with opsets: ${l.map(a=>`${a.domain||"ai.onnx"} v${a.version}`).join(", ")}`)}},9395:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.onnxruntime=void 0;const c=s(5686);var 
l,f;l=n.onnxruntime||(n.onnxruntime={}),function(a){(function(h){h[h.UNDEFINED=0]="UNDEFINED",h[h.FLOAT=1]="FLOAT",h[h.INT=2]="INT",h[h.STRING=3]="STRING",h[h.TENSOR=4]="TENSOR",h[h.GRAPH=5]="GRAPH",h[h.FLOATS=6]="FLOATS",h[h.INTS=7]="INTS",h[h.STRINGS=8]="STRINGS",h[h.TENSORS=9]="TENSORS",h[h.GRAPHS=10]="GRAPHS",h[h.SPARSE_TENSOR=11]="SPARSE_TENSOR",h[h.SPARSE_TENSORS=12]="SPARSE_TENSORS"})(a.AttributeType||(a.AttributeType={}))}((f=l.experimental||(l.experimental={})).fbs||(f.fbs={})),function(a){(function(h){(function(p){(function(u){u[u.UNKNOWN=0]="UNKNOWN",u[u.VALUE=1]="VALUE",u[u.PARAM=2]="PARAM"})(p.DimensionValueType||(p.DimensionValueType={}))})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){(function(u){u[u.UNDEFINED=0]="UNDEFINED",u[u.FLOAT=1]="FLOAT",u[u.UINT8=2]="UINT8",u[u.INT8=3]="INT8",u[u.UINT16=4]="UINT16",u[u.INT16=5]="INT16",u[u.INT32=6]="INT32",u[u.INT64=7]="INT64",u[u.STRING=8]="STRING",u[u.BOOL=9]="BOOL",u[u.FLOAT16=10]="FLOAT16",u[u.DOUBLE=11]="DOUBLE",u[u.UINT32=12]="UINT32",u[u.UINT64=13]="UINT64",u[u.COMPLEX64=14]="COMPLEX64",u[u.COMPLEX128=15]="COMPLEX128",u[u.BFLOAT16=16]="BFLOAT16"})(p.TensorDataType||(p.TensorDataType={}))})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){(function(u){u[u.Primitive=0]="Primitive",u[u.Fused=1]="Fused"})(p.NodeType||(p.NodeType={}))})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){(function(u){u[u.NONE=0]="NONE",u[u.tensor_type=1]="tensor_type",u[u.sequence_type=2]="sequence_type",u[u.map_type=3]="map_type"})(p.TypeInfoValue||(p.TypeInfoValue={}))})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return 
this.bb_pos=t,this.bb=e,this}static getRootAsShape(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsShape(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}dim(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new a.experimental.fbs.Dimension).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}dimLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}static startShape(t){t.startObject(1)}static addDim(t,e){t.addFieldOffset(0,e,0)}static createDimVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startDimVector(t,e){t.startVector(4,e,4)}static endShape(t){return t.endObject()}static createShape(t,e){return u.startShape(t),u.addDim(t,e),u.endShape(t)}}p.Shape=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimension(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimension(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}value(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new a.experimental.fbs.DimensionValue).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}denotation(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimension(t){t.startObject(2)}static addValue(t,e){t.addFieldOffset(0,e,0)}static addDenotation(t,e){t.addFieldOffset(1,e,0)}static endDimension(t){return t.endObject()}static createDimension(t,e,r){return 
u.startDimension(t),u.addValue(t,e),u.addDenotation(t,r),u.endDimension(t)}}p.Dimension=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimensionValue(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimensionValue(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}dimType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt8(this.bb_pos+t):a.experimental.fbs.DimensionValueType.UNKNOWN}dimValue(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}dimParam(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimensionValue(t){t.startObject(3)}static addDimType(t,e){t.addFieldInt8(0,e,a.experimental.fbs.DimensionValueType.UNKNOWN)}static addDimValue(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static addDimParam(t,e){t.addFieldOffset(2,e,0)}static endDimensionValue(t){return t.endObject()}static createDimensionValue(t,e,r,i){return u.startDimensionValue(t),u.addDimType(t,e),u.addDimValue(t,r),u.addDimParam(t,i),u.endDimensionValue(t)}}p.DimensionValue=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensorTypeAndShape(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensorTypeAndShape(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}elemType(){let t=this.bb.__offset(this.bb_pos,4);return 
t?this.bb.readInt32(this.bb_pos+t):a.experimental.fbs.TensorDataType.UNDEFINED}shape(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new a.experimental.fbs.Shape).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startTensorTypeAndShape(t){t.startObject(2)}static addElemType(t,e){t.addFieldInt32(0,e,a.experimental.fbs.TensorDataType.UNDEFINED)}static addShape(t,e){t.addFieldOffset(1,e,0)}static endTensorTypeAndShape(t){return t.endObject()}static createTensorTypeAndShape(t,e,r){return u.startTensorTypeAndShape(t),u.addElemType(t,e),u.addShape(t,r),u.endTensorTypeAndShape(t)}}p.TensorTypeAndShape=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsMapType(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsMapType(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}keyType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):a.experimental.fbs.TensorDataType.UNDEFINED}valueType(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new a.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startMapType(t){t.startObject(2)}static addKeyType(t,e){t.addFieldInt32(0,e,a.experimental.fbs.TensorDataType.UNDEFINED)}static addValueType(t,e){t.addFieldOffset(1,e,0)}static endMapType(t){return t.endObject()}static createMapType(t,e,r){return u.startMapType(t),u.addKeyType(t,e),u.addValueType(t,r),u.endMapType(t)}}p.MapType=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsSequenceType(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSequenceType(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}elemType(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new a.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSequenceType(t){t.startObject(1)}static addElemType(t,e){t.addFieldOffset(0,e,0)}static endSequenceType(t){return t.endObject()}static createSequenceType(t,e){return u.startSequenceType(t),u.addElemType(t,e),u.endSequenceType(t)}}p.SequenceType=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(h.fbs||(h.fbs={})).EdgeEnd=class{constructor(){this.bb=null,this.bb_pos=0}__init(p,u){return this.bb_pos=p,this.bb=u,this}nodeIndex(){return this.bb.readUint32(this.bb_pos)}srcArgIndex(){return this.bb.readInt32(this.bb_pos+4)}dstArgIndex(){return this.bb.readInt32(this.bb_pos+8)}static createEdgeEnd(p,u,o,t){return p.prep(4,12),p.writeInt32(t),p.writeInt32(o),p.writeInt32(u),p.offset()}}})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNodeEdge(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNodeEdge(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}nodeIndex(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readUint32(this.bb_pos+t):0}inputEdges(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new a.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}inputEdgesLength(){let t=this.bb.__offset(this.bb_pos,6);return 
t?this.bb.__vector_len(this.bb_pos+t):0}outputEdges(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new a.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}outputEdgesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNodeEdge(t){t.startObject(3)}static addNodeIndex(t,e){t.addFieldInt32(0,e,0)}static addInputEdges(t,e){t.addFieldOffset(1,e,0)}static startInputEdgesVector(t,e){t.startVector(12,e,4)}static addOutputEdges(t,e){t.addFieldOffset(2,e,0)}static startOutputEdgesVector(t,e){t.startVector(12,e,4)}static endNodeEdge(t){return t.endObject()}static createNodeEdge(t,e,r,i){return u.startNodeEdge(t),u.addNodeIndex(t,e),u.addInputEdges(t,r),u.addOutputEdges(t,i),u.endNodeEdge(t)}}p.NodeEdge=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNode(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNode(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}sinceVersion(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):0}index(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readUint32(this.bb_pos+t):0}opType(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,16);return 
t?this.bb.readInt32(this.bb_pos+t):a.experimental.fbs.NodeType.Primitive}executionProviderType(t){let e=this.bb.__offset(this.bb_pos,18);return e?this.bb.__string(this.bb_pos+e,t):null}inputs(t,e){let r=this.bb.__offset(this.bb_pos,20);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,22);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}attributes(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?(e||new a.experimental.fbs.Attribute).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}attributesLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCounts(t){let e=this.bb.__offset(this.bb_pos,26);return e?this.bb.readInt32(this.bb.__vector(this.bb_pos+e)+4*t):0}inputArgCountsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCountsArray(){let t=this.bb.__offset(this.bb_pos,26);return t?new Int32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}implicitInputs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}implicitInputsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNode(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDomain(t,e){t.addFieldOffset(2,e,0)}static addSinceVersion(t,e){t.addFieldInt32(3,e,0)}static addIndex(t,e){t.addFieldInt32(4,e,0)}static addOpType(t,e){t.addFieldOffset(5,e,0)}static addType(t,e){t.addFieldInt32(6,e,a.experimental.fbs.NodeType.Primitive)}static 
addExecutionProviderType(t,e){t.addFieldOffset(7,e,0)}static addInputs(t,e){t.addFieldOffset(8,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(9,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addAttributes(t,e){t.addFieldOffset(10,e,0)}static createAttributesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startAttributesVector(t,e){t.startVector(4,e,4)}static addInputArgCounts(t,e){t.addFieldOffset(11,e,0)}static createInputArgCountsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startInputArgCountsVector(t,e){t.startVector(4,e,4)}static addImplicitInputs(t,e){t.addFieldOffset(12,e,0)}static createImplicitInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startImplicitInputsVector(t,e){t.startVector(4,e,4)}static endNode(t){return t.endObject()}static createNode(t,e,r,i,d,g,m,b,y,w,v,S,A,O){return u.startNode(t),u.addName(t,e),u.addDocString(t,r),u.addDomain(t,i),u.addSinceVersion(t,d),u.addIndex(t,g),u.addOpType(t,m),u.addType(t,b),u.addExecutionProviderType(t,y),u.addInputs(t,w),u.addOutputs(t,v),u.addAttributes(t,S),u.addInputArgCounts(t,A),u.addImplicitInputs(t,O),u.endNode(t)}}p.Node=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsValueInfo(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static 
getSizePrefixedRootAsValueInfo(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new a.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startValueInfo(t){t.startObject(3)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldOffset(2,e,0)}static endValueInfo(t){return t.endObject()}static createValueInfo(t,e,r,i){return u.startValueInfo(t),u.addName(t,e),u.addDocString(t,r),u.addType(t,i),u.endValueInfo(t)}}p.ValueInfo=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTypeInfo(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTypeInfo(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}denotation(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}valueType(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readUint8(this.bb_pos+t):a.experimental.fbs.TypeInfoValue.NONE}value(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}static startTypeInfo(t){t.startObject(3)}static addDenotation(t,e){t.addFieldOffset(0,e,0)}static addValueType(t,e){t.addFieldInt8(1,e,a.experimental.fbs.TypeInfoValue.NONE)}static addValue(t,e){t.addFieldOffset(2,e,0)}static endTypeInfo(t){return t.endObject()}static createTypeInfo(t,e,r,i){return 
u.startTypeInfo(t),u.addDenotation(t,e),u.addValueType(t,r),u.addValue(t,i),u.endTypeInfo(t)}}p.TypeInfo=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsOperatorSetId(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsOperatorSetId(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}domain(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}version(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}static startOperatorSetId(t){t.startObject(2)}static addDomain(t,e){t.addFieldOffset(0,e,0)}static addVersion(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static endOperatorSetId(t){return t.endObject()}static createOperatorSetId(t,e,r){return u.startOperatorSetId(t),u.addDomain(t,e),u.addVersion(t,r),u.endOperatorSetId(t)}}p.OperatorSetId=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensor(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensor(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return 
e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}dataType(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):a.experimental.fbs.TensorDataType.UNDEFINED}rawData(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.readUint8(this.bb.__vector(this.bb_pos+e)+t):0}rawDataLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}rawDataArray(){let t=this.bb.__offset(this.bb_pos,12);return t?new Uint8Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}stringData(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringDataLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}static startTensor(t){t.startObject(6)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static addDataType(t,e){t.addFieldInt32(3,e,a.experimental.fbs.TensorDataType.UNDEFINED)}static addRawData(t,e){t.addFieldOffset(4,e,0)}static createRawDataVector(t,e){t.startVector(1,e.length,1);for(let r=e.length-1;r>=0;r--)t.addInt8(e[r]);return t.endVector()}static startRawDataVector(t,e){t.startVector(1,e,1)}static addStringData(t,e){t.addFieldOffset(5,e,0)}static createStringDataVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringDataVector(t,e){t.startVector(4,e,4)}static endTensor(t){return t.endObject()}static createTensor(t,e,r,i,d,g,m){return 
u.startTensor(t),u.addName(t,e),u.addDocString(t,r),u.addDims(t,i),u.addDataType(t,d),u.addRawData(t,g),u.addStringData(t,m),u.endTensor(t)}}p.Tensor=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSparseTensor(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSparseTensor(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}values(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new a.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}indices(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new a.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSparseTensor(t){t.startObject(3)}static addValues(t,e){t.addFieldOffset(0,e,0)}static addIndices(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static endSparseTensor(t){return t.endObject()}static createSparseTensor(t,e,r,i){return u.startSparseTensor(t),u.addValues(t,e),u.addIndices(t,r),u.addDims(t,i),u.endSparseTensor(t)}}p.SparseTensor=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsAttribute(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsAttribute(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.readInt32(this.bb_pos+t):a.experimental.fbs.AttributeType.UNDEFINED}f(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readFloat32(this.bb_pos+t):0}i(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}s(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}t(t){let e=this.bb.__offset(this.bb_pos,16);return e?(t||new a.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}g(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new a.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}floats(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.readFloat32(this.bb.__vector(this.bb_pos+e)+4*t):0}floatsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}floatsArray(){let t=this.bb.__offset(this.bb_pos,20);return t?new Float32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}ints(t){let e=this.bb.__offset(this.bb_pos,22);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}intsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}strings(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringsLength(){let t=this.bb.__offset(this.bb_pos,24);return 
t?this.bb.__vector_len(this.bb_pos+t):0}tensors(t,e){let r=this.bb.__offset(this.bb_pos,26);return r?(e||new a.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}tensorsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}graphs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?(e||new a.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}graphsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startAttribute(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldInt32(2,e,a.experimental.fbs.AttributeType.UNDEFINED)}static addF(t,e){t.addFieldFloat32(3,e,0)}static addI(t,e){t.addFieldInt64(4,e,t.createLong(0,0))}static addS(t,e){t.addFieldOffset(5,e,0)}static addT(t,e){t.addFieldOffset(6,e,0)}static addG(t,e){t.addFieldOffset(7,e,0)}static addFloats(t,e){t.addFieldOffset(8,e,0)}static createFloatsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addFloat32(e[r]);return t.endVector()}static startFloatsVector(t,e){t.startVector(4,e,4)}static addInts(t,e){t.addFieldOffset(9,e,0)}static createIntsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startIntsVector(t,e){t.startVector(8,e,8)}static addStrings(t,e){t.addFieldOffset(10,e,0)}static createStringsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringsVector(t,e){t.startVector(4,e,4)}static addTensors(t,e){t.addFieldOffset(11,e,0)}static createTensorsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startTensorsVector(t,e){t.startVector(4,e,4)}static addGraphs(t,e){t.addFieldOffset(12,e,0)}static 
createGraphsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startGraphsVector(t,e){t.startVector(4,e,4)}static endAttribute(t){return t.endObject()}static createAttribute(t,e,r,i,d,g,m,b,y,w,v,S,A,O){return u.startAttribute(t),u.addName(t,e),u.addDocString(t,r),u.addType(t,i),u.addF(t,d),u.addI(t,g),u.addS(t,m),u.addT(t,b),u.addG(t,y),u.addFloats(t,w),u.addInts(t,v),u.addStrings(t,S),u.addTensors(t,A),u.addGraphs(t,O),u.endAttribute(t)}}p.Attribute=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsGraph(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsGraph(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}initializers(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new a.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}initializersLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeArgs(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new a.experimental.fbs.ValueInfo).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeArgsLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}nodes(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new a.experimental.fbs.Node).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}maxNodeIndex(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readUint32(this.bb_pos+t):0}nodeEdges(t,e){let 
r=this.bb.__offset(this.bb_pos,12);return r?(e||new a.experimental.fbs.NodeEdge).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeEdgesLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}inputs(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,16);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}sparseInitializers(t,e){let r=this.bb.__offset(this.bb_pos,18);return r?(e||new a.experimental.fbs.SparseTensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}sparseInitializersLength(){let t=this.bb.__offset(this.bb_pos,18);return t?this.bb.__vector_len(this.bb_pos+t):0}static startGraph(t){t.startObject(8)}static addInitializers(t,e){t.addFieldOffset(0,e,0)}static createInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInitializersVector(t,e){t.startVector(4,e,4)}static addNodeArgs(t,e){t.addFieldOffset(1,e,0)}static createNodeArgsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeArgsVector(t,e){t.startVector(4,e,4)}static addNodes(t,e){t.addFieldOffset(2,e,0)}static createNodesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodesVector(t,e){t.startVector(4,e,4)}static addMaxNodeIndex(t,e){t.addFieldInt32(3,e,0)}static addNodeEdges(t,e){t.addFieldOffset(4,e,0)}static createNodeEdgesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static 
startNodeEdgesVector(t,e){t.startVector(4,e,4)}static addInputs(t,e){t.addFieldOffset(5,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(6,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addSparseInitializers(t,e){t.addFieldOffset(7,e,0)}static createSparseInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSparseInitializersVector(t,e){t.startVector(4,e,4)}static endGraph(t){return t.endObject()}static createGraph(t,e,r,i,d,g,m,b,y){return u.startGraph(t),u.addInitializers(t,e),u.addNodeArgs(t,r),u.addNodes(t,i),u.addMaxNodeIndex(t,d),u.addNodeEdges(t,g),u.addInputs(t,m),u.addOutputs(t,b),u.addSparseInitializers(t,y),u.endGraph(t)}}p.Graph=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsModel(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsModel(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}irVersion(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}opsetImport(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new a.experimental.fbs.OperatorSetId).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}opsetImportLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}producerName(t){let 
e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}producerVersion(t){let e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.__string(this.bb_pos+e,t):null}modelVersion(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}docString(t){let e=this.bb.__offset(this.bb_pos,16);return e?this.bb.__string(this.bb_pos+e,t):null}graph(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new a.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}graphDocString(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.__string(this.bb_pos+e,t):null}static startModel(t){t.startObject(9)}static addIrVersion(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}static addOpsetImport(t,e){t.addFieldOffset(1,e,0)}static createOpsetImportVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOpsetImportVector(t,e){t.startVector(4,e,4)}static addProducerName(t,e){t.addFieldOffset(2,e,0)}static addProducerVersion(t,e){t.addFieldOffset(3,e,0)}static addDomain(t,e){t.addFieldOffset(4,e,0)}static addModelVersion(t,e){t.addFieldInt64(5,e,t.createLong(0,0))}static addDocString(t,e){t.addFieldOffset(6,e,0)}static addGraph(t,e){t.addFieldOffset(7,e,0)}static addGraphDocString(t,e){t.addFieldOffset(8,e,0)}static endModel(t){return t.endObject()}static createModel(t,e,r,i,d,g,m,b,y,w){return u.startModel(t),u.addIrVersion(t,e),u.addOpsetImport(t,r),u.addProducerName(t,i),u.addProducerVersion(t,d),u.addDomain(t,g),u.addModelVersion(t,m),u.addDocString(t,b),u.addGraph(t,y),u.addGraphDocString(t,w),u.endModel(t)}}p.Model=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return 
this.bb_pos=t,this.bb=e,this}static getRootAsKernelCreateInfos(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsKernelCreateInfos(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}nodeIndices(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readUint32(this.bb.__vector(this.bb_pos+e)+4*t):0}nodeIndicesLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeIndicesArray(){let t=this.bb.__offset(this.bb_pos,4);return t?new Uint32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}kernelDefHashes(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}kernelDefHashesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startKernelCreateInfos(t){t.startObject(2)}static addNodeIndices(t,e){t.addFieldOffset(0,e,0)}static createNodeIndicesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startNodeIndicesVector(t,e){t.startVector(4,e,4)}static addKernelDefHashes(t,e){t.addFieldOffset(1,e,0)}static createKernelDefHashesVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startKernelDefHashesVector(t,e){t.startVector(8,e,8)}static endKernelCreateInfos(t){return t.endObject()}static createKernelCreateInfos(t,e,r){return u.startKernelCreateInfos(t),u.addNodeIndices(t,e),u.addKernelDefHashes(t,r),u.endKernelCreateInfos(t)}}p.KernelCreateInfos=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsSubGraphSessionState(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSubGraphSessionState(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}graphId(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new a.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSubGraphSessionState(t){t.startObject(2)}static addGraphId(t,e){t.addFieldOffset(0,e,0)}static addSessionState(t,e){t.addFieldOffset(1,e,0)}static endSubGraphSessionState(t){let e=t.endObject();return t.requiredField(e,4),e}static createSubGraphSessionState(t,e,r){return u.startSubGraphSessionState(t),u.addGraphId(t,e),u.addSessionState(t,r),u.endSubGraphSessionState(t)}}p.SubGraphSessionState=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSessionState(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSessionState(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}kernels(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new a.experimental.fbs.KernelCreateInfos).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}subGraphSessionStates(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new a.experimental.fbs.SubGraphSessionState).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}subGraphSessionStatesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSessionState(t){t.startObject(2)}static 
addKernels(t,e){t.addFieldOffset(0,e,0)}static addSubGraphSessionStates(t,e){t.addFieldOffset(1,e,0)}static createSubGraphSessionStatesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSubGraphSessionStatesVector(t,e){t.startVector(4,e,4)}static endSessionState(t){return t.endObject()}static createSessionState(t,e,r){return u.startSessionState(t),u.addKernels(t,e),u.addSubGraphSessionStates(t,r),u.endSessionState(t)}}p.SessionState=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(a){(function(h){(function(p){class u{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsInferenceSession(t,e){return(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsInferenceSession(t,e){return t.setPosition(t.position()+c.flatbuffers.SIZE_PREFIX_LENGTH),(e||new u).__init(t.readInt32(t.position())+t.position(),t)}static bufferHasIdentifier(t){return t.__has_identifier("ORTM")}ortVersion(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}model(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new a.experimental.fbs.Model).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new a.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startInferenceSession(t){t.startObject(3)}static addOrtVersion(t,e){t.addFieldOffset(0,e,0)}static addModel(t,e){t.addFieldOffset(1,e,0)}static addSessionState(t,e){t.addFieldOffset(2,e,0)}static endInferenceSession(t){return t.endObject()}static finishInferenceSessionBuffer(t,e){t.finish(e,"ORTM")}static finishSizePrefixedInferenceSessionBuffer(t,e){t.finish(e,"ORTM",!0)}static createInferenceSession(t,e,r,i){return 
u.startInferenceSession(t),u.addOrtVersion(t,e),u.addModel(t,r),u.addSessionState(t,i),u.endInferenceSession(t)}}p.InferenceSession=u})(h.fbs||(h.fbs={}))})(a.experimental||(a.experimental={}))}(n.onnxruntime||(n.onnxruntime={}))},7448:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxjsSessionHandler=void 0;const c=s(1670),l=s(9162);n.OnnxjsSessionHandler=class{constructor(f){this.session=f,this.inputNames=this.session.inputNames,this.outputNames=this.session.outputNames}async dispose(){}async run(f,a,h){const p=new Map;for(const t in f)if(Object.hasOwnProperty.call(f,t)){const e=f[t];p.set(t,new l.Tensor(e.dims,e.type,void 0,void 0,e.data))}const u=await this.session.run(p),o={};return u.forEach((t,e)=>{o[e]=new c.Tensor(t.type,t.data,t.dims)}),o}startProfiling(){this.session.startProfiling()}endProfiling(){this.session.endProfiling()}}},6919:(_,n,s)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Session=void 0;const c=s(7067),l=s(1296),f=s(7091),a=s(1036),h=s(6231),p=s(2644);n.Session=class{constructor(u={}){this._initialized=!1,this.backendHint=u.backendHint,this.profiler=h.Profiler.create(u.profiler),this.context={profiler:this.profiler,graphInputTypes:[],graphInputDims:[]}}get inputNames(){return this._model.graph.getInputNames()}get outputNames(){return this._model.graph.getOutputNames()}startProfiling(){this.profiler.start()}endProfiling(){this.profiler.stop()}async loadModel(u,o,t){await this.profiler.event("session","Session.loadModel",async()=>{const e=await(0,f.resolveBackend)(this.backendHint);if(this.sessionHandler=e.createSessionHandler(this.context),this._model=new p.Model,typeof u=="string"){const r=u.endsWith(".ort");if(typeof fetch>"u"){const i=await(0,l.promisify)(c.readFile)(u);this.initialize(i,r)}else{const i=await fetch(u),d=await i.arrayBuffer();this.initialize(new Uint8Array(d),r)}}else if(ArrayBuffer.isView(u))this.initialize(u);else{const r=new 
Uint8Array(u,o||0,t||u.byteLength);this.initialize(r)}})}initialize(u,o){if(this._initialized)throw new Error("already initialized");this.profiler.event("session","Session.initialize",()=>{const t=this.sessionHandler.transformGraph?this.sessionHandler:void 0;this._model.load(u,t,o),this.sessionHandler.onGraphInitialized&&this.sessionHandler.onGraphInitialized(this._model.graph),this.initializeOps(this._model.graph),this._executionPlan=new a.ExecutionPlan(this._model.graph,this._ops,this.profiler)}),this._initialized=!0}async run(u){if(!this._initialized)throw new Error("session not initialized yet");return this.profiler.event("session","Session.run",async()=>{const o=this.normalizeAndValidateInputs(u),t=await this._executionPlan.execute(this.sessionHandler,o);return this.createOutput(t)})}normalizeAndValidateInputs(u){const o=this._model.graph.getInputNames();if(Array.isArray(u)){if(u.length!==o.length)throw new Error(`incorrect input array length: expected ${o.length} but got ${u.length}`)}else{if(u.size!==o.length)throw new Error(`incorrect input map size: expected ${o.length} but got ${u.size}`);const t=new Array(u.size);let e=0;for(let r=0;rtypeof O=="string")))throw new TypeError("cache should be a string array");A&&(this.cache=new Array(S))}else{if(w!==void 0){const O=e(m);if(!(w instanceof O))throw new TypeError(`cache should be type ${O.name}`)}if(A){const O=new ArrayBuffer(S*function(x){switch(x){case"bool":case"int8":case"uint8":return 1;case"int16":case"uint16":return 2;case"int32":case"uint32":case"float32":return 4;case"float64":return 8;default:throw new Error(`cannot calculate sizeof() on type ${x}`)}}(m));this.cache=function(x,I){return new(e(I))(x)}(O,m)}}}static fromProto(g){if(!g)throw new Error("cannot construct Value from an empty tensor");const m=p.ProtoUtil.tensorDataTypeFromProto(g.dataType),b=p.ProtoUtil.tensorDimsFromProto(g.dims),y=new o(b,m);if(m==="string")g.stringData.forEach((w,v)=>{y.data[v]=(0,p.decodeUtf8String)(w)});else 