After you have finished editing your video, you can export it in your desired format and quality.
- In this article, we have explained what xforce keygen adobe premiere pro cc torrentinstmank is and how to use it to get Adobe Premiere Pro CC for free. We have also shown you how to use Adobe Premiere Pro CC for video editing and exporting. However, we do not recommend using this method of obtaining Adobe Premiere Pro CC, as it is illegal and unethical. It may also expose your computer to viruses, malware, or other security risks. If you want to use Adobe Premiere Pro CC legally and safely, you should purchase a subscription from the official Adobe website or use other free or low-cost alternatives.
- Would you like to know what other people think and feel just by watching their movements and expressions? Would you like to improve your ability to communicate with others and avoid misunderstandings and conflict? If the answer is yes, then this article is for you.
-In this article I am going to talk about a book that will teach you to master the art of nonverbal communication: El cuerpo habla (published in English as What Every Body Is Saying), by Joe Navarro, a former FBI agent who specialized in analyzing human behavior.
- El cuerpo habla is a book that reveals the secrets of nonverbal language, the silent but powerful language we all send and receive unconsciously. You will learn to read the signals other people give off with their bodies, which will let you see their real intentions and feelings and avoid deception and traps. You will also learn to use nonverbal language to convey what you really want to communicate, whether to family, friends, or bosses.
- If you want to know more about this book, keep reading. I will tell you who Joe Navarro is, what nonverbal language is, what El cuerpo habla teaches, and where you can download it in PDF format.
- Joe Navarro is a recognized expert in the field of nonverbal communication. Born in Cuba, he emigrated to the United States at the age of eight and became a U.S. citizen. He studied criminal justice at Brigham Young University and joined the FBI as a special agent.
- For 25 years, Joe Navarro worked as an FBI agent in areas such as counterintelligence, counterterrorism, organized crime, and criminal behavior. His job was to interrogate and investigate suspects, witnesses, and victims, using his skill at reading nonverbal language and detecting deception.
- Joe Navarro was one of the founders of the National Security Division's Behavioral Analysis Program, which analyzes the behavior of individuals and groups that pose a threat to national security. He was also a consultant for other government and private organizations, such as the CIA, the Department of Defense, and NASA.
- After retiring from the FBI in 2003, Joe Navarro devoted himself to writing books and articles on nonverbal communication. His best-known work is El cuerpo habla, published in 2008 and translated into more than 30 languages. His other books include La biblia del lenguaje corporal, Mensajes peligrosos, and Louder Than Words.
-In addition to writing, Joe Navarro gives talks and courses on nonverbal language around the world. His audience ranges from students and teachers to business people and political leaders. His goal is to help people improve their social and professional skills through knowledge of nonverbal language.
- Nonverbal language is the set of signals we send and receive with our bodies without using words. These signals include gestures, postures, facial expressions, eye contact, tone of voice, interpersonal distance, and so on.
- Nonverbal communication is a universal form of communication that we share with other animals. It is instinctive and unconscious, and it originates in our primitive, or limbic, brain, which regulates our emotions, our impulses, and our survival.
- Nonverbal communication has a great influence on human interaction. According to some studies, 93% of communication between people is based on nonverbal language, while only 7% is based on words. In other words, our bodies say much more than our words do.
- Nonverbal communication lets us convey information about ourselves, such as our personality, mood, attitude, or intentions. It also lets us pick up information about others, such as their emotions, thoughts, or motivations.
- Nonverbal language is made up of different kinds of signals, which can be classified by the part of the body that produces them or by their function. Some examples are:
-
-Gestures: the movements we make with our hands or arms to accompany or replace words. For example, nodding to say yes or shaking the head to say no.
-Postures: the positions we adopt with our bodies when sitting or standing. For example, crossing the arms or legs can signal defensiveness or rejection.
-Facial expressions: the configurations we make with our facial muscles to show emotions or reactions. For example, smiling to express joy or frowning to express anger.
-Eye contact: the degree and duration of the gaze we hold with another person. For example, staring can signal interest or challenge.
-Tone of voice: the variation in volume, speed, and intonation of our voice when we speak. For example, speaking loudly can signal confidence or aggressiveness.
-Interpersonal distance: the physical space we keep between ourselves and others when we interact. For example, getting very close can signal intimacy or intrusion.
-The benefits of learning to read nonverbal language
- Learning to read nonverbal language has many advantages, both personally and professionally. Some of the benefits it can bring you are:
-
-Better interpersonal relationships: nonverbal language helps you build trust, empathy, and connection with others. By better understanding what other people feel and think, you can adapt your communication to their needs and avoid conflict.
-Better verbal communication: nonverbal language helps you reinforce and clarify your spoken message. By using the right gestures, expressions, and tone of voice, you can make your message more convincing, memorable, and persuasive.
-Greater persuasion and influence: nonverbal language helps you convey authority, credibility, and confidence. By projecting a positive, self-assured image, you can earn the respect and admiration of others.
-Better deception detection: nonverbal language helps you spot the signals that someone is lying or hiding something. By noticing inconsistencies between what a person says and what they do, you can protect yourself from fraud or manipulation.
-Greater self-knowledge and self-control: nonverbal language helps you understand your own emotions and reactions. By being aware of how you express yourself with your body, you can control your impulses better and improve your emotional intelligence.
-
- What does the book El cuerpo habla teach us?
- El cuerpo habla is a practical, readable guide that teaches you to master the secrets of nonverbal communication. Through examples, anecdotes, and advice, Joe Navarro shows you how to read and use nonverbal language in everyday situations.
- The book is divided into nine chapters covering the following topics:
- How to master the secrets of nonverbal communication
- In this chapter, Joe Navarro introduces you to the world of nonverbal language and explains why learning to read and use it is so important. He offers keys to sharpening your observation and your attention to the nonverbal signals that other people, and you yourself, give off.
- How to understand our limbic legacy and what it means for our behavior
- In this chapter, Joe Navarro explains how our limbic brain works, the part of the brain that regulates our emotions and our instinctive behavior. He shows how the limbic brain expresses itself through the body and how we can recognize its signals.
- How to use nonverbal language to convey trust, authority, and sincerity
- In this chapter, Joe Navarro teaches you how to use nonverbal language to project a positive image of yourself and make a good impression on others. He gives advice on improving your posture, gestures, eye contact, and tone of voice depending on the context and the goal you want to achieve.
- How to detect deception through nonverbal signals
- Where can I download El cuerpo habla in PDF format?
- If the book has caught your interest and you want to read it in digital format, here are some options for downloading it to your computer, tablet, or phone.
- The advantages of reading the book in digital format
- Reading the book in digital format has some advantages over the printed edition. Some of them are:
-
-Saving space and money: by downloading the book as a PDF you don't take up shelf space or spend money buying it, and you can access it from any device, anywhere.
-Easier reading and searching: a PDF lets you adjust the font size, brightness, and contrast to your liking. You can also search for specific words or phrases in the text and bookmark the pages that interest you.
-Respect for the environment: reading the book as a PDF helps reduce paper and ink consumption, which benefits the planet.
-
- Websites where the book can be downloaded for free or for a fee
- There are several websites where El cuerpo habla can be downloaded for free or for a fee. Some of them are:
-
-| Website | Description | Price |
-| --- | --- | --- |
-| Zoboko.com | An e-book platform offering downloads in different formats and categories, with a wide range of free and paid titles. | Free, or €9.99 |
-| Scribd.com | An online reading and publishing platform giving access to millions of books, audiobooks, magazines, and other content. | Free with registration, or €9.99 per month with a subscription |
-| Idoc.pub | A document hosting and download platform for various formats that lets you share and download documents for free. | Free |
-
- These are just a few examples of websites where El cuerpo habla can be downloaded. Keep in mind, however, that some of these sites may not hold the book's copyright or may contain viruses or malware, so check that a site is reliable and legal before downloading anything.
- Conclusion
- In this article I have talked about the book El cuerpo habla by Joe Navarro, a former FBI agent and expert in nonverbal communication. I have covered who Joe Navarro is, what nonverbal language is, what the book teaches, and where you can download it in PDF format.
- El cuerpo habla is a book that will help you improve your social and professional skills through knowledge of nonverbal language. You will learn to read the signals other people give off with their bodies and to use your own body to communicate better with them.
- If you enjoyed this article and want to learn more about the subject, I encourage you to read El cuerpo habla. I am sure you will find it useful and interesting.
- Frequently asked questions
-
-What is nonverbal language?
-Nonverbal language is the set of signals we send and receive with our bodies without using words. These signals include gestures, postures, facial expressions, eye contact, tone of voice, interpersonal distance, and so on.
-Why is it important to learn to read nonverbal language?
-Learning to read nonverbal language matters because it lets us convey and pick up information about ourselves and others that is not expressed in words. It helps us improve our interpersonal relationships, our verbal communication, our ability to persuade and influence, our ability to detect deception, and our self-knowledge and self-control.
-Who is Joe Navarro?
-Joe Navarro is a recognized expert in the field of nonverbal communication. He is a former FBI agent who specialized in analyzing human behavior, the author of several books on the subject, including El cuerpo habla, and a speaker and teacher on nonverbal language.
-What does the book El cuerpo habla teach us?
-El cuerpo habla teaches us to master the secrets of nonverbal communication. It shows how to read and use nonverbal language in everyday situations, explains how our limbic brain works, how to convey trust, authority, and sincerity with our bodies, and how to detect deception through nonverbal signals.
-Where can I download El cuerpo habla in PDF format?
-You can download El cuerpo habla in PDF format from several websites, such as Zoboko.com, Scribd.com, or Idoc.pub. However, check the reliability and legality of a site before downloading anything.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Brsobstetricsandgynecologypdffree11 UPDATED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Brsobstetricsandgynecologypdffree11 UPDATED.md
deleted file mode 100644
index 240096facd2510f357aa1c7c97ac9dcd3de48430..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Brsobstetricsandgynecologypdffree11 UPDATED.md
+++ /dev/null
@@ -1,6 +0,0 @@
-brsobstetricsandgynecologypdffree11 DOWNLOAD · https://imgfil.com/2uxXQ1
-
- d5da3c52bf
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download [WORK].md b/spaces/1gistliPinn/ChatGPT4/Examples/Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download [WORK].md
deleted file mode 100644
index b0f97bf5c882af0dcf1b81babfeff0783534cd1d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download [WORK].md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download
-Daum PotPlayer is a powerful and versatile media player that supports various formats, codecs, and subtitles. It also offers advanced features such as 3D video support, screen capture, live streaming, and audio enhancement. If you are looking for a reliable and easy-to-use media player for your Windows PC, Daum PotPlayer is a great choice.
-Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download Download File ✅ https://imgfil.com/2uy0tk
-However, if you want to enjoy the full benefits of Daum PotPlayer, you need to activate it with a serial key. A serial key is a unique code that unlocks the premium features of the software. Without a serial key, you will be limited to the basic functions of Daum PotPlayer and miss out on some of the best features.
-Fortunately, you can get Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download from our website. This is a cracked version of Daum PotPlayer that comes with a valid serial key that you can use to activate the software. By downloading and installing this cracked version, you will be able to enjoy Daum PotPlayer without any restrictions or limitations.
-Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download is safe and secure to use. It does not contain any viruses, malware, or spyware that could harm your PC or compromise your privacy. It also does not require any registration or payment to use. All you need to do is follow these simple steps:
-
-Download Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download from the link below.
-Extract the zip file and run the setup file.
-Follow the installation instructions and agree to the terms and conditions.
-Copy the serial key from the crack folder and paste it into the activation window.
-Click on activate and enjoy Daum PotPlayer with all its features.
-
-That's it! You have successfully installed and activated Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download on your PC. Now you can play any media file with high quality and performance. You can also customize your preferences and settings according to your needs and preferences.
-Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download is the best way to experience Daum PotPlayer without spending any money or risking any legal issues. It is compatible with Windows XP, Vista, 7, 8, 8.1, and 10 (32-bit and 64-bit). It also supports multiple languages and has a user-friendly interface.
-So what are you waiting for? Download Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download today and enjoy the ultimate media player for your PC.
-
-
-Daum PotPlayer is not just a media player, it is also a media manager. You can organize your media files into playlists, folders, and categories. You can also sort them by name, date, size, type, and more. You can also search for your media files using keywords and filters. Daum PotPlayer makes it easy to find and access your media files anytime and anywhere.
-Daum PotPlayer also supports online streaming and downloading. You can watch live TV channels, radio stations, podcasts, and webcams from around the world. You can also download online videos and audio files from various websites and platforms. Daum PotPlayer lets you enjoy online media content without any hassle or interruption.
-Daum PotPlayer also has a built-in screen recorder and editor. You can capture your screen activity and save it as a video file. You can also edit your recorded videos using various tools and effects. You can crop, trim, rotate, resize, add text, watermark, and more. Daum PotPlayer allows you to create your own videos and share them with others.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Ed Sheeran Plus Album Zip Mega.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Ed Sheeran Plus Album Zip Mega.md
deleted file mode 100644
index 775593002126f54e1cf5e1001d5699c7378e281a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Ed Sheeran Plus Album Zip Mega.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Download Ed Sheeran Plus Album Zip Mega Download File ⇒⇒⇒ https://imgfil.com/2uxZmz
-
-CHAOS PDF.pdf. Has he ever met Ed Sheeran? Find out what music master of everything he is able to do!Download Ed Sheeran album ‘ + at the album page. The “Ed Sheeran” released on August 26, 2015, with the album number number.Download CHAOS + Album Zip. CHAOS (crack) + Full Album Zip / PDF / MPG / M4A + MP3. We have provided below links from where you can download CHAOS (crack) + Full Album Zip / PDF / MPG / M4A + MP3. You can get the song, music, information of Ed Sheeran- plus album song track/track list.Chaos, by Ed Sheeran has been downloaded by a lot of people. Chances are that you are one of them who has downloaded this album from the internet. Download Ed Sheeran, Live at Abbey Road, Limited Edition Download · Free Download - Ed Sheeran, + Album Download · Ed Sheeran (2015), Chaos [Plus Album Zip and Tracklist], [Full Tracklist] Download. (PDF) Download · Ed Sheeran (2015), Chaos (2015), PLUS ALBUM ZIP Download · Ed Sheeran - Chaos + Album (2015), plus Album Zip (3.2MB) · Ed Sheeran - Ed (2015) - PLUS.The albums containing song "Chaos" download is available at. Listen to this artist's songs and watch the videos. Listen to songs by Ed Sheeran - Chaos (2015) -plus album zip. Plus, download Ed Sheeran. ITunes:. 19.03.20. Chaos, Sheeran album tracklist, download. Chaos, a song by Ed Sheeran from the album Ch. Plus, download Ed Sheeran. 19.03.20.. 29.11.15.
-
-Download
-
-Ed Sheeran Chaos Plus Album Zip Mega. DOWNLOAD: . PrintMusic 2014 (crack CHAOS) [ChingLiu] Free!!BETTER!! Download . CHAOS PDF.pdf. Has he ever met Ed Sheeran? Find out what music master of everything he is able to do!DOWNLOAD ED SHEERAN PLUS ALBUM ZIP MEGA. DOWNLOAD: . PrintMusic 2014 (crack CHAOS) [ChingLiu] 4fefd39f24
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo ver y descargar pelculas y series en Cuevana 3 para PC y Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo ver y descargar pelculas y series en Cuevana 3 para PC y Android.md
deleted file mode 100644
index 21c57a6b26e1bf0db8b87c65004195281fc9733c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo ver y descargar pelculas y series en Cuevana 3 para PC y Android.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-Cuevana 3 Peliculas y Series APK: the best app for watching content in Spanish
- If you are a movie and series fan, you would probably like an app that lets you watch all the content you want on your mobile device, without paying anything and without putting up with annoying ads. Does such an app exist? Yes: it is called Cuevana 3, and it is one of the best options out there for enjoying film and television in Spanish.
-cuevana 3 peliculas y series apk Download ✺✺✺ https://urlin.us/2uSVVf
- What is Cuevana 3?
- Cuevana 3 is an Android application that lets you watch movies and series online for free, in HD quality and without interruptions. It is the most recent version of Cuevana, a streaming platform that has been offering Spanish-language and subtitled content to millions of users for more than a decade.
- Cuevana 3 has a varied, regularly updated catalog, with the latest releases and the most popular series of the moment. It also has a simple, fast interface that makes it easy to find and play content, and it lets you download movies and series to watch offline or share them with friends through social networks and messaging apps.
- How do I download and install Cuevana 3 APK?
- Prerequisites
- To download and install Cuevana 3 APK on your Android device, you need to meet a few prerequisites:
-
-A device running Android 4.1 or later.
-Enough free space in the device's internal or external storage.
-A stable internet connection (preferably Wi-Fi).
-The "Orígenes desconocidos" or "Fuentes desconocidas" (Unknown sources) option enabled in the device's security settings; this lets you install apps that do not come from the official Google Play store.
-
- Steps to follow
- Once you have met the prerequisites, follow these steps to download and install Cuevana 3 APK:
-
-Download the Cuevana 3 APK file from our website by tapping the "Descargar" (Download) button. The file is about 34.7 MB.
-Find the downloaded file in your device's "Descargas" or "Download" folder, or wherever you chose to save it.
-Tap the file to start the installation, accept the permissions the app requests, and wait for the installation to finish. It may take a few seconds or minutes, depending on the speed of your device and your internet connection.
-Once it is installed, open the app and enjoy Cuevana 3.
-
- How do I use Cuevana 3 APK?
- Using Cuevana 3 APK is easy and fun. Just follow these steps:
- Finding movies and series
- Cuevana 3 has a built-in search engine that lets you find the content you want by title, genre, year, language, or quality. Just type what you are looking for in the top bar and tap the magnifying-glass icon.
- You can also browse the Cuevana 3 catalog by category, such as "Estrenos" (new releases), "Más vistas" (most watched), "Mejor valoradas" (top rated), "Series", and "Películas" (movies). Just swipe across the screen to see the different options and tap the one that interests you.
- Streaming content
- When you find a movie or series you want to watch, tap it to see its details, such as the title, synopsis, cast, genre, year, running time, rating, and comments from other users.
- To stream it, tap the "Ver ahora" (Watch now) button and choose the server and quality you prefer. Cuevana 3 offers several servers and quality levels so you can pick whichever best suits your connection and your device.
- Once playback starts, you can enjoy the content in full screen, with audio and Spanish subtitles. You can also pause, fast-forward, rewind, or adjust the volume and brightness with the touch controls.
- Downloading content to watch offline
- If you want to download a movie or series to watch offline, without an internet connection, tap the "Descargar" (Download) button and choose the server and quality you prefer. Cuevana 3 shows you the file size and the estimated download time.
- Once the download starts, you can follow its progress in the bar at the bottom of the screen, and you can pause or cancel it at any time.
- When the download finishes, the content appears in the "Descargas" (Downloads) section of Cuevana 3, where you can play it without an internet connection, delete it, or share it with your friends.
- Pros and cons of Cuevana 3 APK
- Cuevana 3 APK is a very complete, appealing app for film and TV lovers. Like any app, however, it has its pros and cons. Here are some of them:
- Pros
-
-It is free and has no ads. You don't have to pay anything or register to use Cuevana 3, and there are no annoying ads or pop-ups interrupting your experience.
-It has Spanish-language and subtitled content. You can watch movies and series in Latin American or Castilian Spanish, or in their original language with Spanish subtitles, so you can enjoy the content in your preferred language or pick up new ones.
-It works with several devices. You can use Cuevana 3 on your Android smartphone or tablet, or on your Smart TV or Chromecast if you connect them to your mobile device, so you can watch on a bigger, more comfortable screen.
-
- Cons
-
-It has no official license. Cuevana 3 does not hold the copyrights or legal licenses for the content it offers, so it may infringe intellectual property rules and be subject to shutdowns or blocks by the authorities or internet providers.
-It can have errors or glitches. Cuevana 3 may run into technical problems such as server outages, broken links, poor image or sound quality, or subtitle sync issues, which can affect the quality and continuity of your experience.
-It can use a lot of mobile data. Cuevana 3 needs an internet connection to work, and if you use mobile data instead of Wi-Fi it can eat up a large part of your data plan, which may mean extra charges or a slower connection.
-
- Alternatives to Cuevana 3 APK
- If for some reason you cannot or do not want to use Cuevana 3 APK, there are other alternatives that also let you watch movies and series online, either free or with a monthly subscription. Some of them are:
- Netflix
- Netflix is the world's most popular and best-known streaming platform. It has a very broad, varied catalog, with original and exclusive movies and series as well as content from other studios and producers. Its intuitive, personalized interface recommends the content you are most likely to enjoy based on your tastes and preferences, and it lets you download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. Netflix has a monthly fee that depends on the plan you choose, and you can try it free for a month.
- HBO Max
- HBO Max is HBO's streaming platform, which offers all of the channel's content plus movies and series from Warner Bros, DC, Cartoon Network, Adult Swim, and more. Its catalog is attractive and up to date, with releases that arrive at the same time as cinemas and series acclaimed by critics and audiences. Its simple, functional interface lets you browse content by category, genre, or collection, download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. HBO Max has a monthly fee that depends on the plan you choose, and you can try it free for a week.
- Disney+
- Disney+ is Disney's streaming platform, which offers all of the studio's content plus movies and series from Pixar, Marvel, Star Wars, National Geographic, and more. Its catalog is very complete and diverse, with classic and modern movies and series as well as original and exclusive content. Its attractive, dynamic interface lets you browse content by franchise, genre, or theme, download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. Disney+ has a fixed monthly fee, and you can try it free for a week.
- Conclusion
- Cuevana 3 Peliculas y Series APK is an app that lets you watch movies and series online for free, in HD quality and without interruptions. It is an excellent option for lovers of film and television in Spanish, with a varied, up-to-date catalog that includes the latest releases and the most popular series of the moment.
- To use Cuevana 3 APK you only need an Android device with a stable internet connection. You can also download content to watch offline or share it with friends, and the simple, fast interface makes finding and playing content easy.
- However, Cuevana 3 also has some drawbacks, such as having no official license, suffering technical errors or glitches, and using a lot of mobile data. Use it at your own risk and judgment.
- If you want to try alternatives to Cuevana 3 APK, you can go with streaming platforms such as Netflix, HBO Max, or Disney+, which also offer a large catalog of movies and series online, but for a monthly fee.
- We hope this article has been useful and informative. If you have any questions or comments about Cuevana 3 APK or similar apps, feel free to leave your thoughts in the section below.
- Frequently asked questions
-
-Is Cuevana 3 APK legal?
-Cuevana 3 APK is not a legal application, since it does not hold the copyrights or licenses needed to offer the content it shows. It may therefore violate intellectual property rules and be subject to shutdowns or blocks by the authorities or internet providers. Using Cuevana 3 APK is at your own risk and judgment.
-Is Cuevana 3 APK safe?
-Cuevana 3 APK is safe in the sense that it does not contain viruses, malware, or malicious software that could damage your device or compromise your privacy. However, since it is not an official app, it comes with no guarantees or technical support, so it may have errors or failures that affect how it works. Using it may also break the laws of your country or region, so take precautions and use a VPN to protect your privacy and security.
-Does Cuevana 3 APK show ads?
-No, Cuevana 3 APK has no ads or pop-ups interrupting your experience, which is one of its advantages. On the other hand, because it carries no advertising it also has no revenue to maintain and update itself, so it depends on voluntary donations from users to keep running.
-Does Cuevana 3 APK have content in Spanish?
-Yes, Cuevana 3 APK has content in Latin American and Castilian Spanish, as well as in the original language with Spanish subtitles. You can choose the language you prefer when you play the content. It is one of the best apps for watching movies and series in Spanish, with a broad, up-to-date catalog that includes the latest releases and the most popular series of the moment.
-Does Cuevana 3 APK work on a Smart TV or Chromecast?
-Yes, Cuevana 3 APK works on a Smart TV or Chromecast if you connect them to your Android device, so you can watch the content on a bigger, more comfortable screen. To do so, follow these steps:
-
-Connect your Smart TV or Chromecast to the same Wi-Fi network as your Android device.
-Open the Cuevana 3 app on your Android device and find the content you want to watch.
-Tap the "Cast" or "Enviar" (Send) icon at the top right of the screen and select your Smart TV or Chromecast as the destination.
-Wait for the connection to be established and enjoy the content on your Smart TV or Chromecast.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download AR Emoji Stickers and Customize Them with Your Favorite Accessories and Backgrounds.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download AR Emoji Stickers and Customize Them with Your Favorite Accessories and Backgrounds.md
deleted file mode 100644
index db683c5acfeaf9e2c055d62f4058b176f955aebc..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download AR Emoji Stickers and Customize Them with Your Favorite Accessories and Backgrounds.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-How to Download AR Emoji Stickers
-Emoji are everywhere and there are plenty to choose from. But what if you want to make your own personalized and animated emoji that look like you or your favorite characters? That's where AR emoji stickers come in. In this article, we'll show you what are AR emoji stickers, how to create them, and how to share and use them in your messages and social media.
- What are AR Emoji Stickers?
-AR emoji stickers are built on augmented reality (AR) technology that lets you create and animate your own avatars using your smartphone's camera. You can use these avatars to make custom emoji, stickers, GIFs, and videos that reflect your personality, mood, and style. They're fun and more personal than the standard emoji and stickers you might use.
-download ar emoji stickers DOWNLOAD ⭐ https://urlin.us/2uSXTD
- Definition and examples of AR emoji stickers
-AR stands for augmented reality, which means adding digital elements to the real world through your device's screen. AR emoji stickers are one example of this, as they let you overlay your virtual avatar on top of your surroundings or any background you choose. You can also animate them with your facial expressions and movements in real-time.
-Some examples of AR emoji stickers are:
-
-Samsung's AR Emoji, which lets you create an animated version of yourself or wear masks of other characters on your Galaxy device.
-iPhone's Memoji, which lets you create a custom avatar that looks like you or anyone else on your iOS device.
-Other apps that let you create and use AR emoji stickers on any device, such as Filmora for Mobile, Mirror, or Yoji.
-
- Benefits and uses of AR emoji stickers
-AR emoji stickers have many benefits and uses, such as:
-
-They allow you to express yourself in a more creative and fun way than regular emoji or text.
-They help you communicate your emotions, reactions, and opinions more clearly and effectively.
-They make your messages and social media posts more engaging and interactive.
-They let you customize your avatar with different looks, styles, accessories, and backgrounds.
-They let you have fun with your friends and family by creating and sharing funny and cute AR emoji stickers.
-
- How to Create Your Own AR Emoji Stickers
-Creating your own AR emoji stickers is easy and fun. You just need a smartphone with a camera and an app that supports AR emoji stickers. Here are some of the most popular apps that let you create your own AR emoji stickers:
- Using Samsung AR Emoji Stickers app
-If you have a Samsung Galaxy device that supports AR Emoji, such as the Galaxy S9 or later, you can use the pre-installed app called "AR Zone" to create your own AR emoji stickers. Here's how:
-
-Open the "AR Zone" app on your Galaxy device.
-Tap "AR Emoji Studio" Select "Create My Emoji" and follow the instructions to scan your face and customize your avatar.
-Tap "Sticker" and choose from the different categories of stickers, such as "Basic", "Emotion", or "Pose".
-Tap the sticker you want to use and then tap the download icon to save it to your device.
-You can also tap the share icon to send it directly to your contacts or social media apps.
-
- Using iPhone Memoji AR Stickers app
-If you have an iPhone X or later, you can use the built-in app called "Messages" to create your own Memoji AR stickers. Here's how:
-
-Open the "Messages" app on your iPhone and start a new conversation or open an existing one.
-Tap the "Animoji" icon (the monkey face) and then swipe left to find the "+" button.
-Tap the "+" button and follow the instructions to create your Memoji avatar. You can customize its appearance, hairstyle, accessories, and more.
-Tap "Done" when you're satisfied with your Memoji.
-To use your Memoji as a sticker, tap the sticker icon (the square with a peeling corner) and then tap your Memoji. You can also swipe up and down to see different expressions and poses.
-Tap the sticker you want to use and then drag it to the message bubble or the photo you want to attach it to.
-You can also tap the send button to send it as a separate message.
-
- Using other apps for AR emoji stickers
-If you don't have a Samsung Galaxy or an iPhone device, or if you want to try other apps for creating and using AR emoji stickers, there are many options available for both Android and iOS devices. Some of them are:
-
-Filmora for Mobile, which lets you create and edit videos with AR emoji stickers, filters, effects, music, and more.
-Mirror, which lets you create personalized emoji that look like you or anyone else, and use them as stickers, GIFs, or avatars.
-Yoji, which lets you create 3D animated emoji that mimic your facial expressions and voice, and share them as videos or GIFs.
-
-To use these apps, you need to download them from the Google Play Store or the App Store, depending on your device. Then, follow the instructions on each app to create your AR emoji stickers and share them with others.
- How to Share and Use Your AR Emoji Stickers
-Once you have created your AR emoji stickers, you can share and use them in various ways. Here are some of the most common ways to do so:
- Saving and downloading your AR emoji stickers
-If you want to save your AR emoji stickers for later use or download them to your device, you can do so by following these steps:
-
-Open the app that you used to create your AR emoji stickers.
-Find the AR emoji sticker that you want to save or download.
-Tap the download icon (usually a downward arrow) to save it to your device's gallery or file manager. You can also tap the menu icon (usually three dots) and select "Save" or "Export".
-You can also tap the share icon (usually a paper plane) and select "Save Image" or "Save Video" if you want to save it as an image or a video file.
-
- Adding your AR emoji stickers to messages and social media
-If you want to add your AR emoji stickers to your messages and social media posts, you can do so by following these steps:
-
-Open the app that you want to use, such as WhatsApp, Facebook Messenger, Instagram, Snapchat, etc.
-Start a new conversation or open an existing one, or create a new post or story.
-Tap the attachment icon (usually a paper clip) and select "Gallery" or "Photos".
-Find the AR emoji sticker that you want to use from your device's gallery or file manager.
-Select it and then tap the send button or the post button.
-
- Tips and tricks for making your AR emoji stickers more fun and expressive
-To make your AR emoji stickers more fun and expressive, you can try these tips and tricks:
-
-Use different facial expressions and gestures when creating your AR emoji stickers.
-Use different backgrounds and filters to change the mood and atmosphere of your AR emoji stickers.
-Use different accessories and outfits to customize your AR emoji stickers and make them more unique and stylish.
-Use different poses and movements to make your AR emoji stickers more dynamic and lively.
-Use different text and captions to add more context and humor to your AR emoji stickers.
-
- Conclusion
-AR emoji stickers are a great way to spice up your messages and social media posts with your own personalized and animated avatars. They're easy to create, share, and use, and they can help you express yourself in a more fun and creative way. Whether you use Samsung's AR Emoji, iPhone's Memoji, or any other app, you can enjoy making and using AR emoji stickers with your friends and family.
-We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. And don't forget to share your AR emoji stickers with us too!
- FAQs
-What is the difference between AR emoji stickers and regular emoji?
-Regular emoji are standard symbols that represent various emotions, objects, animals, etc. They are usually static and have a fixed appearance. AR emoji stickers are customized avatars that you can create and animate using your smartphone's camera. They are usually dynamic and have a variable appearance.
- How can I make my AR emoji stickers look more like me?
-You can make your AR emoji stickers look more like you by adjusting the facial features, skin tone, hair color, eye color, etc. of your avatar. You can also add accessories, such as glasses, hats, earrings, etc. that match your style. You can also use your facial expressions and movements to make your AR emoji stickers more realistic.
- Can I use AR emoji stickers on any device?
-You can use AR emoji stickers on any device that supports AR technology and has a camera. However, some apps may be exclusive to certain devices or operating systems. For example, Samsung's AR Emoji is only available on Galaxy devices, while iPhone's Memoji is only available on iOS devices.
- How can I delete or edit my AR emoji stickers?
-You can delete or edit your AR emoji stickers by following these steps:
-
-Open the app that you used to create your AR emoji stickers.
-Find the AR emoji sticker that you want to delete or edit.
-Tap the menu icon (usually three dots) and select "Delete" or "Edit".
-Confirm your action or make the changes you want.
-
- Where can I find more AR emoji stickers to download?
-You can find more AR emoji stickers to download by browsing the app store of your device or searching online for "AR emoji stickers". You can also check out the websites or social media pages of the apps that you use for creating AR emoji stickers, as they may offer more options or updates.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bricks King APK The Best Brick Breaker Game for Android.md b/spaces/1phancelerku/anime-remove-background/Bricks King APK The Best Brick Breaker Game for Android.md
deleted file mode 100644
index fcb692f5ca91ec85a56f14fd402f1b40a575c1df..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bricks King APK The Best Brick Breaker Game for Android.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-Bricks King APK Download: A Fun and Relaxing Brick Breaker Game
-If you are looking for a new, exciting, and addictive casual game to play on your Android device, you might want to check out Bricks King. This is a brick breaker game that offers smooth and fluid gameplay, amazing powerups, beautiful graphics, and hundreds of challenging levels. In this article, we will tell you what Bricks King is, how to download and install it on your device, and what the pros and cons of doing so are.
-bricks king apk download Download File ►►► https://jinyurl.com/2uNKxr
-What is Bricks King?
-Bricks King is a casual brick breaker game developed by Prota Games. It was released in January 2023 and has been downloaded over 1 million times from Google Play Store. The game is rated 4.4 out of 5 stars by more than 1,000 users.
-The goal of the game is to break all the bricks on the screen by using a ball and a paddle. You can move the paddle left and right by swiping your finger on the screen. The ball will bounce off the paddle and the bricks, creating satisfying chain reactions. You can also use various powerups to enhance your gameplay, such as extra balls, fireballs, magnets, lasers, and more.
-Features of Bricks King
-Bricks King has many features that make it a fun and relaxing game to play. Here are some of them:
-Smooth and fluid gameplay
-The game has smooth and fluid gameplay that makes it easy to control the paddle and the ball. It also has a clear user interface that shows your score, level, lives, and powerups, and it runs smoothly on most Android devices without any lag or glitches.
-Amazing powerups and chain reactions
-The game has many powerups that you can collect by breaking certain bricks or hitting them with the ball. Some of the powerups are:
-
-Extra balls: This powerup gives you more balls to play with, increasing your chances of breaking more bricks.
-Fireball: This powerup makes your ball burn through any brick it touches, creating a trail of fire.
-Magnet: This powerup makes your paddle attract the ball, making it easier to catch it.
-Laser: This powerup makes your paddle shoot lasers that can break bricks in a straight line.
-And more!
-
-The game also has amazing chain reactions that happen when you break multiple bricks at once or use powerups. You can see sparks, explosions, flames, and other effects that make the game more enjoyable.
-Beautiful graphics and sounds
-The game has beautiful graphics that are colorful and vibrant. The game also has relaxing sounds that match the gameplay. You can hear the sound of the ball bouncing off the bricks, the sound of the powerups activating, and the sound of the background music. The game also has different themes for each level, such as forest, desert, ocean, space, and more.
-Hundreds of challenging levels
-The game has hundreds of challenging levels for you to conquer. Each level has a different layout of bricks, different powerups, and different obstacles. Some levels have moving bricks, rotating bricks, invisible bricks, or unbreakable bricks. You have to use your skills and strategy to break all the bricks and complete the level. The game also has a star rating system that rewards you for completing the level with fewer balls or using fewer powerups. You can also replay the levels to improve your score and challenge yourself.
-How to download and install Bricks King APK on your Android device
-If you want to play Bricks King on your Android device, you can download and install it from Google Play Store. However, if you want to get the latest version of the game or access some features that are not available on the official app, you can download and install the Bricks King APK file from a trusted source. Here are the steps to do so:
-Step 1: Enable unknown sources
-Before you can install any APK file on your device, you need to enable unknown sources. This is a security setting that allows you to install apps from sources other than Google Play Store. To enable unknown sources, follow these steps:
-
-Go to your device's settings and tap on security or privacy.
-Find the option that says unknown sources or install unknown apps and toggle it on.
-A warning message will pop up. Read it carefully and tap on OK or allow.
-
-Step 2: Download the APK file from a trusted source
-Next, you need to download the APK file of Bricks King from a trusted source. There are many websites that offer APK files for free, but not all of them are safe and reliable. Some of them may contain malware, viruses, or unwanted ads that can harm your device or compromise your privacy. To avoid this, you should only download APK files from reputable sources that have positive reviews and ratings from other users. One such source is [APKPure], which is a popular and trusted website that provides safe and updated APK files for various apps and games. To download the APK file of Bricks King from APKPure, follow these steps:
-
-Go to [APKPure] using your device's browser.
-Type Bricks King in the search bar and tap on the search icon.
-Find the app that matches the name and icon of Bricks King and tap on it.
-Tap on the download button and wait for the download to finish.
-
-Step 3: Locate and install the APK file
-After you have downloaded the APK file of Bricks King, you need to locate and install it on your device. To do this, follow these steps:
-
-Go to your device's file manager and find the folder where you saved the APK file. It is usually in the downloads folder.
-Tap on the APK file and a prompt will appear. Tap on install and wait for the installation to complete.
-If another prompt appears asking for permissions, tap on allow or accept.
-
-Step 4: Launch and enjoy the game
-Once you have installed the APK file of Bricks King, you can launch and enjoy the game. To do this, follow these steps:
-
-Go to your device's app drawer and find the icon of Bricks King. Tap on it to open the game.
-You may see a splash screen or an intro video. Wait for it to finish or skip it if possible.
-You will see the main menu of the game. Tap on play or start to begin playing.
-You can also adjust the settings, view your achievements, or access other features of the game from the main menu.
-
-Pros and cons of Bricks King APK download
-Downloading and installing Bricks King APK on your device has some pros and cons that you should be aware of. Here are some of them:
-Pros
-
-You can get the latest version of the game before it is available on Google Play Store.
-You can access some features that are not available on the official app, such as unlimited coins, unlocked levels, or ad-free gameplay.
-You can play the game even if it is not compatible with your device or region.
-You can save some storage space by deleting the original app after installing the APK file.
-
-Cons
-
-You may encounter bugs or errors that the developers have not fixed yet.
-You may not receive updates or support from the developers if you run into problems with the game.
-You may violate the terms and conditions of Google Play Store or of the developers by installing an unofficial app.
-You may expose your device or data to security risks by installing an app from an unknown source.
-Conclusion
-Bricks King is a fun and relaxing brick breaker game that you can play on your Android device. It has smooth and fluid gameplay, amazing powerups, beautiful graphics, and hundreds of challenging levels. You can download and install it from Google Play Store or from a trusted source like APKPure. However, you should also be aware of the pros and cons of doing so, and make sure you have enabled unknown sources on your device. If you are looking for a new, exciting, and addictive casual game to play, you should give Bricks King a try.
-FAQs
-Here are some frequently asked questions about Bricks King APK download:
-
-Is Bricks King APK download safe?
-Bricks King APK download is safe if you download it from a trusted source like APKPure. However, you should always scan the APK file with an antivirus or malware scanner before installing it on your device. You should also avoid downloading APK files from unknown or suspicious sources that may contain harmful or unwanted content.
-How can I update Bricks King APK?
-If you have downloaded Bricks King APK from a trusted source like APKPure, you can update it by visiting the same website and downloading the latest version of the APK file. You can then install it over the existing app without losing your progress or data. However, you may not receive any notifications or alerts about the updates, so you have to check the website regularly for any new versions.
-Can I play Bricks King offline?
-Yes, you can play Bricks King offline without any internet connection. However, some features of the game may not work properly or at all, such as the leaderboard, achievements, or ads. You may also miss out on some updates or bug fixes that require an internet connection.
-Can I play Bricks King on PC?
-Yes, you can play Bricks King on PC by using an Android emulator. An Android emulator is software that lets you run Android apps and games on your PC. Popular Android emulators include BlueStacks, NoxPlayer, and LDPlayer. You can download and install any of these emulators on your PC, then download and install the Bricks King APK from a trusted source like APKPure, and launch the game from there.
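-As a rough illustration only: many emulators also expose an adb endpoint, so once the emulator is running you can install the downloaded APK from a terminal instead of dragging it into the emulator window. The port and file name below are assumptions that vary by emulator, so check your emulator's settings first.
-
-adb connect 127.0.0.1:5555
-adb install bricks-king.apk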
-How can I contact the developers of Bricks King?
-If you have any questions, feedback, suggestions, or issues with Bricks King, you can contact the developers of the game by sending them an email at protagames@gmail.com. You can also visit their website at https://protagames.com/ or follow them on Facebook at https://www.facebook.com/protagames/.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/cross_attention.py b/spaces/1toTree/lora_test/ppdiffusers/models/cross_attention.py
deleted file mode 100644
index 3bda145120f9f8837eda3919d8862a19d132750b..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/models/cross_attention.py
+++ /dev/null
@@ -1,435 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import Optional, Union
-
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-
-from ..initializer import normal_, zeros_
-
-
-class CrossAttention(nn.Layer):
- r"""
- A cross attention layer.
-
- Parameters:
- query_dim (`int`): The number of channels in the query.
- cross_attention_dim (`int`, *optional*):
- The number of channels in the encoder_hidden_states. If not given, defaults to `query_dim`.
- heads (`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention.
- dim_head (`int`, *optional*, defaults to 64): The number of channels in each head.
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
- bias (`bool`, *optional*, defaults to False):
- Set to `True` for the query, key, and value linear layers to contain a bias parameter.
- """
-
- def __init__(
- self,
- query_dim: int,
- cross_attention_dim: Optional[int] = None,
- heads: int = 8,
- dim_head: int = 64,
- dropout: float = 0.0,
- bias=False,
- upcast_attention: bool = False,
- upcast_softmax: bool = False,
- added_kv_proj_dim: Optional[int] = None,
- norm_num_groups: Optional[int] = None,
- processor: Optional["AttnProcessor"] = None,
- ):
- super().__init__()
- inner_dim = dim_head * heads
- cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim
- self.upcast_attention = upcast_attention
- self.upcast_softmax = upcast_softmax
-
- self.scale = dim_head**-0.5
- self.num_heads = heads
- self.head_dim = inner_dim // heads
- # for slice_size > 0 the attention score computation
- # is split across the batch axis to save memory
- # You can set slice_size with `set_attention_slice`
- self.sliceable_head_dim = heads
-
- self.added_kv_proj_dim = added_kv_proj_dim
-
- if norm_num_groups is not None:
- self.group_norm = nn.GroupNorm(num_channels=inner_dim, num_groups=norm_num_groups, epsilon=1e-5)
- else:
- self.group_norm = None
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias_attr=bias)
- self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias_attr=bias)
- self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias_attr=bias)
-
- if self.added_kv_proj_dim is not None:
- self.add_k_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim)
- self.add_v_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim)
-
- self.to_out = nn.LayerList([])
- self.to_out.append(nn.Linear(inner_dim, query_dim))
- self.to_out.append(nn.Dropout(dropout))
-
- # set attention processor
- processor = processor if processor is not None else CrossAttnProcessor()
- self.set_processor(processor)
-
- def set_attention_slice(self, slice_size):
- if slice_size is not None and slice_size > self.sliceable_head_dim:
- raise ValueError(f"slice_size {slice_size} has to be smaller or equal to {self.sliceable_head_dim}.")
-
- if slice_size is not None and self.added_kv_proj_dim is not None:
- processor = SlicedAttnAddedKVProcessor(slice_size)
- elif slice_size is not None:
- processor = SlicedAttnProcessor(slice_size)
- elif self.added_kv_proj_dim is not None:
- processor = CrossAttnAddedKVProcessor()
- else:
- processor = CrossAttnProcessor()
-
- self.set_processor(processor)
-
- def set_processor(self, processor: "AttnProcessor"):
- self.processor = processor
-
- def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, **cross_attention_kwargs):
- # The `CrossAttention` class can call different attention processors / attention functions
- # here we simply pass along all tensors to the selected processor class
- # For standard processors that are defined here, `**cross_attention_kwargs` is empty
- return self.processor(
- self,
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- attention_mask=attention_mask,
- **cross_attention_kwargs,
- )
-
- def batch_to_head_dim(self, tensor):
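- # merge the heads back into the channel dim: [batch, heads, seq_len, head_dim] -> [batch, seq_len, heads * head_dim]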
- tensor = tensor.transpose([0, 2, 1, 3])
- tensor = tensor.reshape([0, 0, tensor.shape[2] * tensor.shape[3]])
- return tensor
-
- def head_to_batch_dim(self, tensor):
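- # split the channel dim into heads: [batch, seq_len, heads * head_dim] -> [batch, heads, seq_len, head_dim] (0 keeps that dim unchanged in Paddle's reshape)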
- tensor = tensor.reshape([0, 0, self.num_heads, self.head_dim])
- tensor = tensor.transpose([0, 2, 1, 3])
- return tensor
-
- def get_attention_scores(self, query, key, attention_mask=None):
- if self.upcast_attention:
- query = query.cast("float32")
- key = key.cast("float32")
-
- attention_scores = paddle.matmul(query, key, transpose_y=True) * self.scale
-
- if attention_mask is not None:
- attention_scores = attention_scores + attention_mask
-
- if self.upcast_softmax:
- attention_scores = attention_scores.cast("float32")
-
- attention_probs = F.softmax(attention_scores, axis=-1)
- if self.upcast_softmax:
- attention_probs = attention_probs.cast(query.dtype)
-
- return attention_probs
-
- def prepare_attention_mask(self, attention_mask, target_length):
- if attention_mask is None:
- return attention_mask
-
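- # pad the mask along the last (key) axis and repeat it num_heads times along the batch axis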
- if attention_mask.shape[-1] != target_length:
- attention_mask = F.pad(attention_mask, (0, target_length), value=0.0, data_format="NCL")
- attention_mask = attention_mask.repeat_interleave(self.num_heads, axis=0)
- return attention_mask
-
-
-class CrossAttnProcessor:
- def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
- batch_size, sequence_length, _ = hidden_states.shape
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
- attention_mask = (
- attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
- if attention_mask is not None
- else None
- )
-
- query = attn.to_q(hidden_states)
- query = attn.head_to_batch_dim(query)
-
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
- key = attn.to_k(encoder_hidden_states)
- value = attn.to_v(encoder_hidden_states)
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
- hidden_states = paddle.matmul(attention_probs, value)
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- return hidden_states
-
-
-class LoRALinearLayer(nn.Layer):
- def __init__(self, in_features, out_features, rank=4):
- super().__init__()
-
- if rank > min(in_features, out_features):
- raise ValueError(f"LoRA rank {rank} must be less or equal than {min(in_features, out_features)}")
-
- self.down = nn.Linear(in_features, rank, bias_attr=False)
- self.up = nn.Linear(rank, out_features, bias_attr=False)
- self.scale = 1.0
-
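- # down gets a small normal init; up starts at zero so the LoRA branch initially adds nothing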
- normal_(self.down.weight, std=1 / rank)
- zeros_(self.up.weight)
-
- def forward(self, hidden_states):
- orig_dtype = hidden_states.dtype
- dtype = self.down.weight.dtype
-
- down_hidden_states = self.down(hidden_states.cast(dtype))
- up_hidden_states = self.up(down_hidden_states)
-
- return up_hidden_states.cast(orig_dtype)
-
-
-class LoRACrossAttnProcessor(nn.Layer):
- def __init__(self, hidden_size, cross_attention_dim=None, rank=4):
- super().__init__()
-
- self.hidden_size = hidden_size
- self.cross_attention_dim = cross_attention_dim
- self.rank = rank
-
- self.to_q_lora = LoRALinearLayer(hidden_size, hidden_size, rank)
- self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
- self.to_v_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
- self.to_out_lora = LoRALinearLayer(hidden_size, hidden_size, rank)
-
- def __call__(
- self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0
- ):
- batch_size, sequence_length, _ = hidden_states.shape
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
- attention_mask = (
- attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
- if attention_mask is not None
- else None
- )
-
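- # each projection = frozen base linear layer + scale * low-rank LoRA correction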
- query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states)
- query = attn.head_to_batch_dim(query)
-
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
-
- key = attn.to_k(encoder_hidden_states) + scale * self.to_k_lora(encoder_hidden_states)
- value = attn.to_v(encoder_hidden_states) + scale * self.to_v_lora(encoder_hidden_states)
-
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
- hidden_states = paddle.matmul(attention_probs, value)
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- return hidden_states
-
-
-class CrossAttnAddedKVProcessor:
- def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
- residual = hidden_states
- hidden_states = hidden_states.reshape([hidden_states.shape[0], hidden_states.shape[1], -1]).transpose(
- [0, 2, 1]
- )
- batch_size, sequence_length, _ = hidden_states.shape
- encoder_hidden_states = encoder_hidden_states.transpose([0, 2, 1])
-
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
- attention_mask = (
- attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
- if attention_mask is not None
- else None
- )
-
- hidden_states = attn.group_norm(hidden_states.transpose([0, 2, 1])).transpose([0, 2, 1])
-
- query = attn.to_q(hidden_states)
- query = attn.head_to_batch_dim(query)
-
- key = attn.to_k(hidden_states)
- value = attn.to_v(hidden_states)
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states)
- encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states)
- encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj)
- encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj)
-
- key = paddle.concat([encoder_hidden_states_key_proj, key], axis=2)
- value = paddle.concat([encoder_hidden_states_value_proj, value], axis=2)
-
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
- hidden_states = paddle.matmul(attention_probs, value)
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- hidden_states = hidden_states.transpose([0, 2, 1]).reshape(residual.shape)
- hidden_states = hidden_states + residual
-
- return hidden_states
-
-
-class SlicedAttnProcessor:
- def __init__(self, slice_size):
- self.slice_size = slice_size
-
- def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
- batch_size, sequence_length, _ = hidden_states.shape
-
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
-
- query = attn.to_q(hidden_states)
- query = attn.head_to_batch_dim(query)
-
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
- key = attn.to_k(encoder_hidden_states)
- value = attn.to_v(encoder_hidden_states)
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- query = query.flatten(0, 1)
- key = key.flatten(0, 1)
- value = value.flatten(0, 1)
-
- batch_size_attention = query.shape[0]
- hidden_states = paddle.zeros((batch_size_attention, sequence_length, attn.head_dim), dtype=query.dtype)
-
- for i in range(hidden_states.shape[0] // self.slice_size):
- start_idx = i * self.slice_size
- end_idx = (i + 1) * self.slice_size
-
- query_slice = query[start_idx:end_idx]
- key_slice = key[start_idx:end_idx]
- attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None
-
- attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice)
-
- attn_slice = paddle.matmul(attn_slice, value[start_idx:end_idx])
-
- hidden_states[start_idx:end_idx] = attn_slice
-
- # reshape back to [bs, num_heads, seqlen, head_dim]
- hidden_states = hidden_states.reshape([-1, attn.num_heads, sequence_length, attn.head_dim])
- # reshape hidden_states
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- return hidden_states
-
-
-class SlicedAttnAddedKVProcessor:
- def __init__(self, slice_size):
- self.slice_size = slice_size
-
- def __call__(self, attn: "CrossAttention", hidden_states, encoder_hidden_states=None, attention_mask=None):
- residual = hidden_states
- hidden_states = hidden_states.reshape([hidden_states.shape[0], hidden_states.shape[1], -1]).transpose(
- [0, 2, 1]
- )
- encoder_hidden_states = encoder_hidden_states.transpose([0, 2, 1])
-
- batch_size, sequence_length, _ = hidden_states.shape
-
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
-
- hidden_states = attn.group_norm(hidden_states.transpose([0, 2, 1])).transpose([0, 2, 1])
-
- query = attn.to_q(hidden_states)
- query = attn.head_to_batch_dim(query)
-
- key = attn.to_k(hidden_states)
- value = attn.to_v(hidden_states)
- encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states)
- encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states)
-
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
- encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj)
- encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj)
-
- key = paddle.concat([encoder_hidden_states_key_proj, key], axis=2)
- value = paddle.concat([encoder_hidden_states_value_proj, value], axis=2)
-
- query = query.flatten(0, 1)
- key = key.flatten(0, 1)
- value = value.flatten(0, 1)
-
- batch_size_attention = query.shape[0]
- hidden_states = paddle.zeros((batch_size_attention, sequence_length, attn.head_dim), dtype=query.dtype)
- for i in range(hidden_states.shape[0] // self.slice_size):
- start_idx = i * self.slice_size
- end_idx = (i + 1) * self.slice_size
-
- query_slice = query[start_idx:end_idx]
- key_slice = key[start_idx:end_idx]
- attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None
-
- attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice)
-
- attn_slice = paddle.matmul(attn_slice, value[start_idx:end_idx])
-
- hidden_states[start_idx:end_idx] = attn_slice
-
- # reshape back to [bs, num_heads, seqlen, head_dim]
- hidden_states = hidden_states.reshape([-1, attn.num_heads, sequence_length, attn.head_dim])
- # reshape hidden_states
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- hidden_states = hidden_states.transpose([0, 2, 1]).reshape(residual.shape)
- hidden_states = hidden_states + residual
-
- return hidden_states
-
-
-AttnProcessor = Union[
- CrossAttnProcessor,
- SlicedAttnProcessor,
- CrossAttnAddedKVProcessor,
- SlicedAttnAddedKVProcessor,
-]
diff --git a/spaces/34we12er/newbing/Dockerfile b/spaces/34we12er/newbing/Dockerfile
deleted file mode 100644
index 54a04ddfd8c57faf6e91d6f2148994dc628ea549..0000000000000000000000000000000000000000
--- a/spaces/34we12er/newbing/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" strips symbols to reduce the size of the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable (the value here is a random token string)
-ENV Go_Proxy_BingAI_USER_TOKEN_1="adhdadtbjxiuaj2562715zshyw38bjxy012hdy37bdola9"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/facerender/modules/keypoint_detector.py b/spaces/4Taps/SadTalker/src/facerender/modules/keypoint_detector.py
deleted file mode 100644
index 62a38a962b2f1a4326aac771aced353ec5e22a96..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/facerender/modules/keypoint_detector.py
+++ /dev/null
@@ -1,179 +0,0 @@
-from torch import nn
-import torch
-import torch.nn.functional as F
-
-from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d
-from src.facerender.modules.util import KPHourglass, make_coordinate_grid, AntiAliasInterpolation2d, ResBottleneck
-
-
-class KPDetector(nn.Module):
- """
- Detecting canonical keypoints. Return keypoint position and jacobian near each keypoint.
- """
-
- def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, reshape_channel, reshape_depth,
- num_blocks, temperature, estimate_jacobian=False, scale_factor=1, single_jacobian_map=False):
- super(KPDetector, self).__init__()
-
- self.predictor = KPHourglass(block_expansion, in_features=image_channel,
- max_features=max_features, reshape_features=reshape_channel, reshape_depth=reshape_depth, num_blocks=num_blocks)
-
- # self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=7, padding=3)
- self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=3, padding=1)
-
- if estimate_jacobian:
- self.num_jacobian_maps = 1 if single_jacobian_map else num_kp
- # self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=7, padding=3)
- self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=3, padding=1)
- '''
- initial as:
- [[1 0 0]
- [0 1 0]
- [0 0 1]]
- '''
- self.jacobian.weight.data.zero_()
- self.jacobian.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0, 0, 0, 1] * self.num_jacobian_maps, dtype=torch.float))
- else:
- self.jacobian = None
-
- self.temperature = temperature
- self.scale_factor = scale_factor
- if self.scale_factor != 1:
- self.down = AntiAliasInterpolation2d(image_channel, self.scale_factor)
-
- def gaussian2kp(self, heatmap):
- """
- Extract the mean from a heatmap
- """
- shape = heatmap.shape
- heatmap = heatmap.unsqueeze(-1)
- grid = make_coordinate_grid(shape[2:], heatmap.type()).unsqueeze_(0).unsqueeze_(0)
- value = (heatmap * grid).sum(dim=(2, 3, 4))
- kp = {'value': value}
-
- return kp
-
- def forward(self, x):
- if self.scale_factor != 1:
- x = self.down(x)
-
- feature_map = self.predictor(x)
- prediction = self.kp(feature_map)
-
- final_shape = prediction.shape
- heatmap = prediction.view(final_shape[0], final_shape[1], -1)
- heatmap = F.softmax(heatmap / self.temperature, dim=2)
- heatmap = heatmap.view(*final_shape)
-
- out = self.gaussian2kp(heatmap)
-
- if self.jacobian is not None:
- jacobian_map = self.jacobian(feature_map)
- jacobian_map = jacobian_map.reshape(final_shape[0], self.num_jacobian_maps, 9, final_shape[2],
- final_shape[3], final_shape[4])
- heatmap = heatmap.unsqueeze(2)
-
- jacobian = heatmap * jacobian_map
- jacobian = jacobian.view(final_shape[0], final_shape[1], 9, -1)
- jacobian = jacobian.sum(dim=-1)
- jacobian = jacobian.view(jacobian.shape[0], jacobian.shape[1], 3, 3)
- out['jacobian'] = jacobian
-
- return out
-
-
-class HEEstimator(nn.Module):
- """
- Estimating head pose and expression.
- """
-
- def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, num_bins=66, estimate_jacobian=True):
- super(HEEstimator, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=image_channel, out_channels=block_expansion, kernel_size=7, padding=3, stride=2)
- self.norm1 = BatchNorm2d(block_expansion, affine=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.conv2 = nn.Conv2d(in_channels=block_expansion, out_channels=256, kernel_size=1)
- self.norm2 = BatchNorm2d(256, affine=True)
-
- self.block1 = nn.Sequential()
- for i in range(3):
- self.block1.add_module('b1_'+ str(i), ResBottleneck(in_features=256, stride=1))
-
- self.conv3 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=1)
- self.norm3 = BatchNorm2d(512, affine=True)
- self.block2 = ResBottleneck(in_features=512, stride=2)
-
- self.block3 = nn.Sequential()
- for i in range(3):
- self.block3.add_module('b3_'+ str(i), ResBottleneck(in_features=512, stride=1))
-
- self.conv4 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=1)
- self.norm4 = BatchNorm2d(1024, affine=True)
- self.block4 = ResBottleneck(in_features=1024, stride=2)
-
- self.block5 = nn.Sequential()
- for i in range(5):
- self.block5.add_module('b5_'+ str(i), ResBottleneck(in_features=1024, stride=1))
-
- self.conv5 = nn.Conv2d(in_channels=1024, out_channels=2048, kernel_size=1)
- self.norm5 = BatchNorm2d(2048, affine=True)
- self.block6 = ResBottleneck(in_features=2048, stride=2)
-
- self.block7 = nn.Sequential()
- for i in range(2):
- self.block7.add_module('b7_'+ str(i), ResBottleneck(in_features=2048, stride=1))
-
- self.fc_roll = nn.Linear(2048, num_bins)
- self.fc_pitch = nn.Linear(2048, num_bins)
- self.fc_yaw = nn.Linear(2048, num_bins)
-
- self.fc_t = nn.Linear(2048, 3)
-
- self.fc_exp = nn.Linear(2048, 3*num_kp)
-
- def forward(self, x):
- out = self.conv1(x)
- out = self.norm1(out)
- out = F.relu(out)
- out = self.maxpool(out)
-
- out = self.conv2(out)
- out = self.norm2(out)
- out = F.relu(out)
-
- out = self.block1(out)
-
- out = self.conv3(out)
- out = self.norm3(out)
- out = F.relu(out)
- out = self.block2(out)
-
- out = self.block3(out)
-
- out = self.conv4(out)
- out = self.norm4(out)
- out = F.relu(out)
- out = self.block4(out)
-
- out = self.block5(out)
-
- out = self.conv5(out)
- out = self.norm5(out)
- out = F.relu(out)
- out = self.block6(out)
-
- out = self.block7(out)
-
- out = F.adaptive_avg_pool2d(out, 1)
- out = out.view(out.shape[0], -1)
-
- yaw = self.fc_roll(out)
- pitch = self.fc_pitch(out)
- roll = self.fc_yaw(out)
- t = self.fc_t(out)
- exp = self.fc_exp(out)
-
- return {'yaw': yaw, 'pitch': pitch, 'roll': roll, 't': t, 'exp': exp}
-
diff --git a/spaces/52Hz/SRMNet_thesis/app.py b/spaces/52Hz/SRMNet_thesis/app.py
deleted file mode 100644
index 97ccaf8b139bd20f995b698a42f48014f918f416..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SRMNet_thesis/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-import gradio as gr
-from PIL import Image
-
-
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deblurring_motionblur.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Dehaze_realworld.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Denoise_gaussian.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Denoise_realworld.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deraining_raindrop.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deraining_rainstreak.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/LLEnhancement.pth -P experiments/pretrained_models')
-os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Retouching.pth -P experiments/pretrained_models')
-
-def inference(img, model):
- os.system('mkdir test')
- img.save("test/1.png", "PNG")
-
- if model == 'Denoising (gaussian)':
- os.system('python main_test_SRMNet.py --input_dir test --task Denoise_gaussian')
- elif model == 'Denoising (real-world)':
- os.system('python main_test_SRMNet.py --input_dir test --task Denoise_realworld')
- elif model == 'Deblurring (motion-blur)':
- os.system('python main_test_SRMNet.py --input_dir test --task Deblurring_motionblur')
- elif model == 'Dehazing (dense haze)':
- os.system('python main_test_SRMNet.py --input_dir test --task Dehaze_realworld')
- elif model == 'Deraining (rainstreak)':
- os.system('python main_test_SRMNet.py --input_dir test --task Deraining_rainstreak')
- elif model == 'Deraining (raindrop)':
- os.system('python main_test_SRMNet.py --input_dir test --task Deraining_raindrop')
- elif model == 'Low-light Enhancement':
- os.system('python main_test_SRMNet.py --input_dir test --task LLEnhancement')
- elif model == 'Retouching':
- os.system('python main_test_SRMNet.py --input_dir test --task Retouching')
-
- return 'result/1.png'
-
-
-title = "[NCHU thesis] Image Restoration by Selective Residual Block on Improved Hierarchical Encoder-Decoder Networks"
-description = ""
-article = "Image Restoration by Selective Residual Block on Improved Hierarchical Encoder-Decoder Networks | Github Repo"
-
-examples = [
-['figures/noise_1.png', 'Denoising (gaussian)'],
-['figures/noise_2.png', 'Denoising (real-world)'],
-['figures/blur.png', 'Deblurring (motion-blur)'],
-['figures/haze.png', 'Dehazing (dense haze)'],
-['figures/rainstreak.png', 'Deraining (rainstreak)'],
-['figures/raindrop.png', 'Deraining (raindrop)'],
-['figures/LL.png', 'Low-light Enhancement'],
-['figures/nchu.png', 'Retouching'],
-]
-gr.Interface(
- inference,
- [gr.inputs.Image(type="pil", label="Input"), gr.inputs.Dropdown(choices=[
- 'Denoising (gaussian)',
- 'Denoising (real-world)',
- 'Deblurring (motion-blur)',
- 'Dehazing (dense haze)',
- 'Deraining (rainstreak)',
- 'Deraining (raindrop)',
- 'Low-light Enhancement',
- 'Retouching',
- ], type="value", default='Denoising (gaussian)', label="model")],
- gr.outputs.Image(type="file", label="Output"),
- title=title,
- description=description,
- article=article,
- allow_flagging=False,
- allow_screenshot=False,
- examples=examples
-).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/A666sxr/Genshin_TTS/losses.py b/spaces/A666sxr/Genshin_TTS/losses.py
deleted file mode 100644
index f54cbaad0c849d3bbc83ae4dc2f5c4ea02a76b67..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/losses.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import torch
-from torch.nn import functional as F
-from stft_loss import MultiResolutionSTFTLoss
-
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
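- # least-squares GAN discriminator loss: real outputs are pushed toward 1, generated outputs toward 0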
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
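- # least-squares GAN generator loss: generated outputs are pushed toward 1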
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
-
-def subband_stft_loss(h, y_mb, y_hat_mb):
- sub_stft_loss = MultiResolutionSTFTLoss(h.train.fft_sizes, h.train.hop_sizes, h.train.win_lengths)
- y_mb = y_mb.view(-1, y_mb.size(2))
- y_hat_mb = y_hat_mb.view(-1, y_hat_mb.size(2))
- sub_sc_loss, sub_mag_loss = sub_stft_loss(y_hat_mb[:, :y_mb.size(-1)], y_mb)
- return sub_sc_loss+sub_mag_loss
-
diff --git a/spaces/AB-TW/team-ai/agents/tools/smart_domain/api_layer_code_tool.py b/spaces/AB-TW/team-ai/agents/tools/smart_domain/api_layer_code_tool.py
deleted file mode 100644
index 298c16964a5c4dcbfb6f244e72595e0422783ee6..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/agents/tools/smart_domain/api_layer_code_tool.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from langchain import LLMChain, PromptTemplate
-from langchain.agents import tool
-
-from models import llm
-
-
-API_LAYER = """You are a software developer. Your task is to generate the api layer tests and product code.
-
-===TechStack
-Java17、reactor、lombok、Junit5、reactor test、Mockito、 Spring WebFlux、Spring Boot Test
-===END OF TechStack
-
-===Architecture
-the api layer includes 2 components:
-* DTO: This component is used to define the data structures for api requests and responses.
-* Controller: This component is used to define the interface to access the api.
- ---example code:
- @RestController
- @RequiredArgsConstructor
- @RequestMapping("/features")
- public class FeatureController {{
- private final Features features;
-
- @GetMapping()
- public Flux findAll() {{
- return features.getAll();
- }}
-
- @PostMapping()
- public Mono add(@RequestBody Feature feature) {{
- return features.add(feature);
- }}
- }}
- ---end of example code
-===END OF Architecture
-
-===TestStrategy
-For the Controller and DTO, we can write component tests that exercise the actual implementation of the api operations; the test class relies on the Association interface and uses the WebFluxTest and WebTestClient abilities.
- ---example code:
- @ExtendWith(SpringExtension.class)
- @WebFluxTest(value = FeatureFlagApi.class, properties = "spring.main.lazy-initialization=true")
- @ContextConfiguration(classes = TestConfiguration.class)
- class FeatureControllerTest extends ControllerTestBase {{
- @Autowired
- WebTestClient webClient;
-
- @MockBean
- Features features;
-
- @Test
- void should_getAll_success_when_no_records() {{
- when(features.getAll(Mockito.any())).thenReturn(Flux.empty());
-
- webClient.get()
- .uri("/features")
- .exchange()
- .expectStatus()
- .isOk()
- .expectBodyList(FeatureFlagResponse.class)
- .hasSize(0);
- }}
- }}
- ---end of example code
-===END OF TestStrategy
-
-Use the following format:
-request: the request that you need to fulfill include Entity and Association of domain layer
-
-DTO:
-```
-the DTO code that you write to fulfill the request, follow TechStack and Architecture
-```
-
-Controller:
-```
-the Controller code that you write to fulfill the request, follow TechStack and Architecture
-```
-
-Test:
-```
-the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy
-```
-
-request: {input}"""
-
-API_LAYER_PROMPT = PromptTemplate(input_variables=["input"], template=API_LAYER,)
-
-
-apiChain = LLMChain(llm = llm(temperature=0.1), prompt=API_LAYER_PROMPT)
-
-
-@tool("Generate API Layer Code", return_direct=True)
-def apiLayerCodeGenerator(input: str) -> str:
- '''useful for when you need to generate API layer code'''
- response = apiChain.run(input)
- return response
\ No newline at end of file
diff --git a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Getting Started 6bc871dcdd4a4554b5b22c0c40740841.md b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Getting Started 6bc871dcdd4a4554b5b22c0c40740841.md
deleted file mode 100644
index 0dc28d1f5f82270eec008b269abe7fb10dcfb43b..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Getting Started 6bc871dcdd4a4554b5b22c0c40740841.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Getting Started
-
-Last edited time: March 31, 2023 1:49 PM
-Owner: Anonymous
-Tags: Guides and Processes
-
-
-💡 Notion Tip: When creating a page, it's important to give it a clear title and provide some content. This could include verifying the information, summarizing the topic, or sharing your thoughts and opinions on something that matters to you.
-
-
-
-# The Basics
-
-## Create a Page
-
-In your sidebar, click the `+` that appears next to the word **Workspace** on hover. A new page will appear. Give it a title and start typing like you would in any other document.
-
-## Headings
-
-You can add headings and subheadings in one of two ways:
-
-- Type `/heading` or `/h1`, `/h2`, or `/h3` to choose the heading size you want.
-- Use Markdown shortcuts, like `#`, `##`, and `###`.
- - Create inline code by wrapping text with ``` (or with the shortcut `cmd/ctrl + e`).
-
-## Toggle Lists
-
-- Toggle lists streamline your content. Click the arrow to open.
- - Click the arrow again to hide this content.
- - Create a toggle by typing `/toggle` and pressing `enter`.
- - You can add anything to toggles, including images and embeds.
-
-## Callout Blocks
-
-
-💡 Create a callout block like this by typing `/call` and pressing `enter`.
-Helpful for adding inline instructions, warnings, disclaimers, and tips.
-Change the emoji icon by clicking on it.
-
-
-
-## Code Blocks
-
-You can add code notation to any Notion page:
-
-- Type `/code` and press `enter`.
-- Choose the language from the dropdown in the bottom right corner.
-- Here's an example:
-
-```html
-Hover over this block to see the Copy to Clipboard option!
-```
-
-- Your teammates can select any code to comment on it.
-
-## Organizing Pages
-
-Instead of using folders, Notion lets you nest pages inside pages.
-
-- Type `/page` and press `enter` to create a sub-page inside a page. Like this:
-
-[Example sub-page](Getting%20Started%206bc871dcdd4a4554b5b22c0c40740841/Example%20sub-page%2048f64d6186ec4428b2e4180475245a9c.md)
-
-# Advanced Techniques
-
-Check out this [Notion Editor 101](https://www.notion.so/68c7c67047494fdb87d50185429df93e) guide for more advanced tips and how-to's.
\ No newline at end of file
diff --git a/spaces/AIConsultant/MusicGen/tests/models/test_musicgen.py b/spaces/AIConsultant/MusicGen/tests/models/test_musicgen.py
deleted file mode 100644
index 65618a9e2ef5bb382694b50b23dd50958d590d4e..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/models/test_musicgen.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import MusicGen
-
-
-class TestMusicGenModel:
- def get_musicgen(self):
- mg = MusicGen.get_pretrained(name='debug', device='cpu')
- mg.set_generation_params(duration=2.0, extend_stride=2.)
- return mg
-
- def test_base(self):
- mg = self.get_musicgen()
- assert mg.frame_rate == 25
- assert mg.sample_rate == 32000
- assert mg.audio_channels == 1
-
- def test_generate_unconditional(self):
- mg = self.get_musicgen()
- wav = mg.generate_unconditional(3)
- assert list(wav.shape) == [3, 1, 64000]
-
- def test_generate_continuation(self):
- mg = self.get_musicgen()
- prompt = torch.randn(3, 1, 32000)
- wav = mg.generate_continuation(prompt, 32000)
- assert list(wav.shape) == [3, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- with pytest.raises(AssertionError):
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort', 'one too many'])
-
- def test_generate(self):
- mg = self.get_musicgen()
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- def test_generate_long(self):
- mg = self.get_musicgen()
- mg.max_duration = 3.
- mg.set_generation_params(duration=4., extend_stride=2.)
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 32000 * 4]
diff --git a/spaces/AIWaves/Debate/src/agents/Environment/base_environment.py b/spaces/AIWaves/Debate/src/agents/Environment/base_environment.py
deleted file mode 100644
index 2cf4f08bcd83f4f8c0437e0789db1456e13998e1..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Debate/src/agents/Environment/base_environment.py
+++ /dev/null
@@ -1,167 +0,0 @@
-from utils import get_relevant_history, get_embedding
-import torch
-from LLM.base_LLM import *
-from Memory import Memory
-from Prompt import *
-import json
-class Environment:
- """
- The place where the agent activities, responsible for storing some shared memories
- """
- def __init__(self, config) -> None:
- self.shared_memory = {"long_term_memory": [], "short_term_memory": None}
- self.agents = None
-
- self.summary_system_prompt = {}
- self.summary_last_prompt = {}
- self.environment_prompt = {}
- self.environment_type = config["environment_type"] if "environment_type" in config else "cooperative"
- self.current_chat_history_idx = 0
- self.LLMs = {}
-
- # Initialize the summary method for each state
- for state_name, state_dict in config["states"].items():
- if state_name != "end_state":
- self.summary_system_prompt[state_name] = (
- state_dict["summary_system_prompt"]
- if "summary_system_prompt" in state_dict
- else eval(Default_environment_summary_system_prompt)
- )
-
- self.summary_last_prompt[state_name] = (
- state_dict["summary_last_prompt"]
- if "summary_last_prompt" in state_dict
- else eval(Default_environment_summary_last_prompt)
- )
-
- self.environment_prompt[state_name] = (
- state_dict["environment_prompt"]
- if "environment_prompt" in state_dict
- else " "
- )
- self.LLMs[state_name] = init_LLM(f"logs/{state_name}",**state_dict)
- self.roles_to_names = None
- self.names_to_roles = None
-
- @classmethod
- def from_config(cls, config_path):
- with open(config_path) as f:
- config = json.load(f)
- return cls(config)
-
- def summary(self, current_state):
- """
- Summarize the situation in the current environment every once in a while
- """
- MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
- current_state_name = current_state.name
-
- query = self.shared_memory["long_term_memory"][-1].content
- relevant_history = get_relevant_history(
- query,
- self.shared_memory["long_term_memory"][:-1],
- self.shared_memory["chat_embeddings"][:-1],
- )
-
- relevant_history = Memory.get_chat_history(relevant_history)
- chat_history = Memory.get_chat_history(
- self.shared_memory["long_term_memory"][-MAX_CHAT_HISTORY + 1 :]
- )
- summary = self.shared_memory["short_term_memory"]
-
-
- # system prompt = environment prompt + current memory + system prompt
- # current_memory = summary + chat history + relevant history
- current_memory = eval(Environment_summary_memory)
- environment_prompt = self.environment_prompt[current_state_name]
- summary_system_prompt = self.summary_system_prompt[current_state_name]
-
- environment_summary_system_prompt = eval(Environment_summary_system_prompt)
- response = self.LLMs[current_state_name].get_response(None, environment_summary_system_prompt, stream=False)
- return response
-
- def update_memory(self, memory, current_state):
- """
- update chat embbedings and long term memory,short term memory,agents long term memory
- """
- MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
- self.shared_memory["long_term_memory"].append(memory)
- current_embedding = get_embedding(memory.content)
- if "chat_embeddings" not in self.shared_memory:
- self.shared_memory["chat_embeddings"] = current_embedding
- else:
- self.shared_memory["chat_embeddings"] = torch.cat(
- [self.shared_memory["chat_embeddings"], current_embedding], dim=0
- )
- if len(self.shared_memory["long_term_memory"]) % MAX_CHAT_HISTORY == 0:
- summary = self.summary(current_state)
- self.shared_memory["short_term_memory"] = summary
-
- self.agents[memory.send_name].update_memory(memory)
-
-
- def _get_agent_last_conversation_idx(self,agent,current_long_term_memory):
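- # return the index of the most recent message sent by this agent, or -1 if it has not spoken yet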
- last_conversation_idx = -1
- for i, history in enumerate(current_long_term_memory):
- if history.send_name == agent.name:
- last_conversation_idx = i
- return last_conversation_idx
-
-
- def _get_agent_new_memory(self,agent,current_long_term_memory):
- # get new conversation
- last_conversation_idx = self._get_agent_last_conversation_idx(agent,current_long_term_memory)
-
- if last_conversation_idx == -1:
- new_conversation =current_long_term_memory
- elif (
- last_conversation_idx
- == len(current_long_term_memory) - 1
- ):
- new_conversation = []
- else:
- new_conversation = current_long_term_memory[
- last_conversation_idx + 1 :
- ]
-
- # get chat history from new conversation
- return Memory.get_chat_history(new_conversation)
-
-
- def _observe(self,agent):
- MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
- current_state = agent.current_state
- current_role = agent.state_roles[current_state.name]
- current_component_dict = current_state.components[current_role]
-
- # cooperative:Sharing information between different states ; competive: No information is shared between different states
- current_chat_history_idx = self.current_chat_history_idx if self.environment_type == "competive" else 0
- current_long_term_memory = self.shared_memory["long_term_memory"][current_chat_history_idx:]
- current_chat_embbedings = self.shared_memory["chat_embeddings"][current_chat_history_idx:]
-
-
- # relevant_memory
- query = current_long_term_memory[-1].content
-
- relevant_memory = get_relevant_history(
- query,
- current_long_term_memory[:-1],
- current_chat_embbedings[:-1],
- )
- relevant_memory = Memory.get_chat_history(relevant_memory,agent.name)
-
- relevant_memory = eval(Agent_observe_relevant_memory)
- agent.relevant_memory = relevant_memory
-
-
- # get chat history from new conversation
- conversations = self._get_agent_new_memory(agent,current_long_term_memory)
-
- # memory = relevant_memory + summary + history + query
- query = current_long_term_memory[-1]
- current_memory = eval(Agent_observe_memory)
-
- return {"role": "user", "content": current_memory}
-
-
diff --git a/spaces/Aadarsh4all/ChatWithBear/app.py b/spaces/Aadarsh4all/ChatWithBear/app.py
deleted file mode 100644
index fbfe723a1c435df9740770883625bb6b813beb55..0000000000000000000000000000000000000000
--- a/spaces/Aadarsh4all/ChatWithBear/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import os
-import openai
-import gradio as gr
-
-#if you have OpenAI API key as an environment variable, enable the below
-#openai.api_key = os.getenv("OPENAI_API_KEY")
-
-#if you have OpenAI API key as a string, enable the below
-openai.api_key = "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"
-
-start_sequence = "\nAI:"
-restart_sequence = "\nHuman: "
-
-prompt = "Send A Message "
-
-def openai_create(prompt):
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt,
- temperature=0.9,
- max_tokens=150,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0.6,
- stop=[" Human:", " AI:"]
- )
-
- return response.choices[0].text
-
-
-
-def chatgpt_clone(input, history):
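- # flatten the accumulated (user, bot) pairs into one prompt string, append the new input, and generate a reply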
- history = history or []
- s = list(sum(history, ()))
- s.append(input)
- inp = ' '.join(s)
- output = openai_create(inp)
- history.append((input, output))
- return history, history
-
-
-block = gr.Blocks()
-
-
-with block:
- gr.Markdown("""ChatWithBear
-
- """)
- chatbot = gr.Chatbot()
- message = gr.Textbox(placeholder=prompt)
- state = gr.State()
- submit = gr.Button("SEND")
- submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state])
- gr.Markdown("""Made by Aadarsh with 💕
-
- """)
-
-block.launch()
\ No newline at end of file
diff --git a/spaces/Abdllh/poetry/app.py b/spaces/Abdllh/poetry/app.py
deleted file mode 100644
index 743e179975a957641a72c9206563bc53ca407c7b..0000000000000000000000000000000000000000
--- a/spaces/Abdllh/poetry/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import gc
-import gradio as gr
-from transformers import pipeline, set_seed
-
-pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023')
-#gc.collect()
-samples = [['أنت'
- ,1.0, 50, 1.0, 1.0, 114],['هل غادر'
- ,1.0, 50, 1.0, 1.0, 114 ],['ألا ليت'
- ,1.0, 50, 1.0, 1.0, 114 ],['يا قدس'
- ,1.0, 50, 1.0, 1.0, 114],['عيد بأية حال'
- ,1.0, 50, 1.0, 1.0, 114],['لكل شيء إذا ما'
- ,1.0, 50, 1.0, 1.0, 114 ],['.'
- ,1.0, 50, 1.0, 1.0, 114]]
-
-notes = """
-- Enter a short prompt or select (click) one of the examples and click SEND
-- Adjust parameters (temperture, top k, top p and penalty) through the slider (keep close to default values).
-- For the same seed (randomness), the same output is regenerated if other parameters are fixed. Seed should be 0 or more (not empty)
-- Clear and enter new prompt or select another example and SEND to regenerate
-- The '.' means start a new line from no prompt (your prompt need not be long)
-- Be patient: this runs on CPU (free tier)
-- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859)
-- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk.
-"""
-def sayPoetry(prompt, temp=1.0, topk = 50, topp = 1.0, penalty=1.0, seed=114):
- if not int(seed) >= 0: seed=114
- set_seed(seed)
- gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty,
- min_length = 64, no_repeat_ngram_size = 3, return_full_text=True,
- num_beams=5, num_return_sequences=1)[0]["generated_text"]
- poetry =""
- for line in gen.split('.')[:-1]:
- poetry += line #+ "\n"
- return poetry
-poetry = gr.Interface(fn=sayPoetry,
- inputs=[
- gr.Textbox(label="Enter short prompt or select from examples:"),
- gr.Slider(0.70, 1.2, step=0.01,value=1.0, label='control temperature'),
- gr.Slider(25, 100, step=1,value=50, label='control top k'),
- gr.Slider(0.80, 1.0, step=0.01,value=1.0, label='control top p'),
- gr.Slider(0.90, 1.50, step=0.01,value=1.0, label='control penalty'),
- gr.Number(value=139750, precision=0, label='Seed'),
- ],
- outputs=[gr.Textbox(label="Generated Poetry:")],
-
- allow_flagging='never',
- title='Arabic Poetry Generation Demo (updated Jan. 2023)',
- description = "A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)",
- examples=samples,
- cache_examples=False,
- article = notes)
-poetry.launch()
\ No newline at end of file
diff --git a/spaces/Abdulkader/HumanMotionsDetector/app.py b/spaces/Abdulkader/HumanMotionsDetector/app.py
deleted file mode 100644
index 219a477d82aab9164df9822972aab4e3db7d620c..0000000000000000000000000000000000000000
--- a/spaces/Abdulkader/HumanMotionsDetector/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import math
-import numpy as np
-import tensorflow as tf
-from tensorflow import keras
-import tensorflow_addons as tfa
-import matplotlib.pyplot as plt
-from tensorflow.keras import layers
-from tensorflow.keras.models import load_model
-
-import requests
-import gradio as gr
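-# NOTE: keras load_model expects a local SavedModel path, and requests.get on a
-# github.com "blob" URL returns an HTML page rather than the raw file, so these
-# assets need to be downloaded first (e.g. via the raw.githubusercontent.com URLs).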
-model = keras.models.load_model('https://github.com/abdulkader902017/CervixNet/blob/6217a51b73ff30724d50712545b2b62bec8a754e/my_model/saved_model.pb')
-response = requests.get("https://github.com/abdulkader902017/CervixNet/blob/main/labels.txt")
-labels = response.text.split("\n")
-
-def classify_image(inp):
- inp = inp.reshape((-1, 32, 32, 3))
- inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
- prediction = model.predict(inp).flatten()
- confidences = {labels[i]: float(prediction[i]) for i in range(3)}
- return confidences
-
-gr.Interface(fn=classify_image,
- inputs=gr.Image(shape=(32, 32)),
- outputs=gr.Label(num_top_classes=3)).launch()
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/2.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/2.js
deleted file mode 100644
index 1cb4f85527f86d91cd9752ec526ddeb7272289ae..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/2.js
+++ /dev/null
@@ -1 +0,0 @@
-export { default as component } from "../../../../src/routes/+page.svelte";
\ No newline at end of file
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/ema.py b/spaces/Adapter/CoAdapter/ldm/modules/ema.py
deleted file mode 100644
index bded25019b9bcbcd0260f0b8185f8c7859ca58c4..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/ema.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_upates
- else torch.tensor(-1, dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- # remove as '.'-character is not allowed in buffers
- s_name = name.replace('.', '')
- self.m_name2s_name.update({name: s_name})
- self.register_buffer(s_name, p.clone().detach().data)
-
- self.collected_params = []
-
- def reset_num_updates(self):
- del self.num_updates
- self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int))
-
- def forward(self, model):
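- # EMA update: decay is warmed up via num_updates, then shadow <- shadow - (1 - decay) * (shadow - param)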
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetExpandedChildHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetExpandedChildHeight.js
deleted file mode 100644
index b8f25d3f05bb8f75fdc9eaeb5d89908ffbb27a26..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetExpandedChildHeight.js
+++ /dev/null
@@ -1,22 +0,0 @@
-var GetExpandedChildHeight = function (child, parentHeight) {
- if (parentHeight === undefined) {
- parentHeight = this.height;
- }
-
- var childHeight;
- var childConfig = child.rexSizer;
- var padding = childConfig.padding;
- if (this.orientation === 0) { // x
- if (childConfig.expand) {
- var innerHeight = parentHeight - this.space.top - this.space.bottom;
- childHeight = innerHeight - padding.top - padding.bottom;
- }
- } else { // y
- if ((childConfig.proportion > 0) && (this.proportionLength > 0)) {
- childHeight = (childConfig.proportion * this.proportionLength);
- }
- }
- return childHeight;
-}
-
-export default GetExpandedChildHeight;
\ No newline at end of file
diff --git a/spaces/Aishwini/myfirstaigen/app.py b/spaces/Aishwini/myfirstaigen/app.py
deleted file mode 100644
index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000
--- a/spaces/Aishwini/myfirstaigen/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are a helpful assistant to answer all user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/AkitoP/umamusume_bert_vits2/commons.py b/spaces/AkitoP/umamusume_bert_vits2/commons.py
deleted file mode 100644
index 53b2a742371b4145fd1aff6c170668daee6f911c..0000000000000000000000000000000000000000
--- a/spaces/AkitoP/umamusume_bert_vits2/commons.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- layer = pad_shape[::-1]
- pad_shape = [item for sublist in layer for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
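A small sketch of how the masking and slicing helpers above are typically combined during training (batch size, channel count and lengths are illustrative assumptions):

import torch

x = torch.randn(2, 192, 50)                        # (batch, channels, frames)
lengths = torch.tensor([50, 40])                   # valid frames per item

mask = sequence_mask(lengths, max_length=50)       # (2, 50) boolean validity mask
segments, ids = rand_slice_segments(x, lengths, segment_size=4)   # (2, 192, 4) random training windows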
diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/latex/attention/background.tex b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/latex/attention/background.tex
deleted file mode 100644
index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/latex/attention/background.tex
+++ /dev/null
@@ -1,58 +0,0 @@
-The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
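A compact restatement of the path-length comparison above (nothing new is claimed; $k$ denotes the convolution kernel width): relating positions $i$ and $j$ costs
\[
O(|i-j|) \text{ operations (ConvS2S)}, \qquad O(\log_k |i-j|) \text{ (ByteNet)}, \qquad O(1) \text{ (self-attention)}.
\]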
-
-Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
-
-End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
-
-To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
-In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
-
-
-%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
-
-%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
-
-%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
-
-%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
-
-%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
-
-%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-
-
-%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?
-
-%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
-
-%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
-
-%\begin{table}[h!]
-%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
-%\label{tab:op_complexities}
-%\begin{center}
-%\vspace{-5pt}
-%\scalebox{0.75}{
-
-%\begin{tabular}{l|c|c|c}
-%\hline \hline
-%Layer Type & Receptive & Complexity & Sequential \\
-% & Field & & Operations \\
-%\hline
-%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
-%\hline
-%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\
-%\hline
-%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
-%\hline \hline
-%\end{tabular}
-%}
-%\end{center}
-%\end{table}
\ No newline at end of file
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/ray_utils.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/ray_utils.py
deleted file mode 100644
index 348035707c8f8fcfd7f6dd8bac5dc0f90bae0691..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/ray_utils.py
+++ /dev/null
@@ -1,289 +0,0 @@
-import torch, re
-import numpy as np
-from torch import searchsorted
-from kornia import create_meshgrid
-
-
-# from utils import index_point_feature
-
-def depth2dist(z_vals, cos_angle):
- # z_vals: [N_ray N_sample]
- device = z_vals.device
- dists = z_vals[..., 1:] - z_vals[..., :-1]
- dists = torch.cat([dists, torch.Tensor([1e10]).to(device).expand(dists[..., :1].shape)], -1) # [N_rays, N_samples]
- dists = dists * cos_angle.unsqueeze(-1)
- return dists
-
-
-def ndc2dist(ndc_pts, cos_angle):
- dists = torch.norm(ndc_pts[:, 1:] - ndc_pts[:, :-1], dim=-1)
- dists = torch.cat([dists, 1e10 * cos_angle.unsqueeze(-1)], -1) # [N_rays, N_samples]
- return dists
-
-
-def get_ray_directions(H, W, focal, center=None):
- """
- Get ray directions for all pixels in camera coordinate.
- Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
- ray-tracing-generating-camera-rays/standard-coordinate-systems
- Inputs:
- H, W, focal: image height, width and focal length
- Outputs:
- directions: (H, W, 3), the direction of the rays in camera coordinate
- """
- grid = create_meshgrid(H, W, normalized_coordinates=False)[0] + 0.5
-
- i, j = grid.unbind(-1)
- # the +0.5 added to the grid above centers each ray on its pixel
- # see https://github.com/bmild/nerf/issues/24
- cent = center if center is not None else [W / 2, H / 2]
- directions = torch.stack([(i - cent[0]) / focal[0], (j - cent[1]) / focal[1], torch.ones_like(i)], -1) # (H, W, 3)
-
- return directions
-
-
-def get_ray_directions_blender(H, W, focal, center=None):
- """
- Get ray directions for all pixels in camera coordinate.
- Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
- ray-tracing-generating-camera-rays/standard-coordinate-systems
- Inputs:
- H, W, focal: image height, width and focal length
- Outputs:
- directions: (H, W, 3), the direction of the rays in camera coordinate
- """
- grid = create_meshgrid(H, W, normalized_coordinates=False)[0]+0.5
- i, j = grid.unbind(-1)
- # the +0.5 added to the grid above centers each ray on its pixel
- # see https://github.com/bmild/nerf/issues/24
- cent = center if center is not None else [W / 2, H / 2]
- directions = torch.stack([(i - cent[0]) / focal[0], -(j - cent[1]) / focal[1], -torch.ones_like(i)],
- -1) # (H, W, 3)
-
- return directions
-
-
-def get_rays(directions, c2w):
- """
- Get ray origin and normalized directions in world coordinate for all pixels in one image.
- Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
- ray-tracing-generating-camera-rays/standard-coordinate-systems
- Inputs:
- directions: (H, W, 3) precomputed ray directions in camera coordinate
- c2w: (3, 4) transformation matrix from camera coordinate to world coordinate
- Outputs:
- rays_o: (H*W, 3), the origin of the rays in world coordinate
- rays_d: (H*W, 3), the normalized direction of the rays in world coordinate
- """
- # Rotate ray directions from camera coordinate to the world coordinate
- rays_d = directions @ c2w[:3, :3].T # (H, W, 3)
- # rays_d = rays_d / torch.norm(rays_d, dim=-1, keepdim=True)
- # The origin of all rays is the camera origin in world coordinate
- rays_o = c2w[:3, 3].expand(rays_d.shape) # (H, W, 3)
-
- rays_d = rays_d.view(-1, 3)
- rays_o = rays_o.view(-1, 3)
-
- return rays_o, rays_d
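A minimal sketch of generating rays for a toy camera with the two helpers above (the focal length and the identity pose are illustrative assumptions):

import torch

H, W = 4, 6
directions = get_ray_directions(H, W, focal=(100.0, 100.0))   # (H, W, 3) in camera frame
c2w = torch.eye(4)[:3]                                         # (3, 4) camera-to-world pose
rays_o, rays_d = get_rays(directions, c2w)                     # each (H*W, 3)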
-
-
-def ndc_rays_blender(H, W, focal, near, rays_o, rays_d):
- # Shift ray origins to near plane
- t = -(near + rays_o[..., 2]) / rays_d[..., 2]
- rays_o = rays_o + t[..., None] * rays_d
-
- # Projection
- o0 = -1. / (W / (2. * focal)) * rays_o[..., 0] / rays_o[..., 2]
- o1 = -1. / (H / (2. * focal)) * rays_o[..., 1] / rays_o[..., 2]
- o2 = 1. + 2. * near / rays_o[..., 2]
-
- d0 = -1. / (W / (2. * focal)) * (rays_d[..., 0] / rays_d[..., 2] - rays_o[..., 0] / rays_o[..., 2])
- d1 = -1. / (H / (2. * focal)) * (rays_d[..., 1] / rays_d[..., 2] - rays_o[..., 1] / rays_o[..., 2])
- d2 = -2. * near / rays_o[..., 2]
-
- rays_o = torch.stack([o0, o1, o2], -1)
- rays_d = torch.stack([d0, d1, d2], -1)
-
- return rays_o, rays_d
-
-def ndc_rays(H, W, focal, near, rays_o, rays_d):
- # Shift ray origins to near plane
- t = (near - rays_o[..., 2]) / rays_d[..., 2]
- rays_o = rays_o + t[..., None] * rays_d
-
- # Projection
- o0 = 1. / (W / (2. * focal)) * rays_o[..., 0] / rays_o[..., 2]
- o1 = 1. / (H / (2. * focal)) * rays_o[..., 1] / rays_o[..., 2]
- o2 = 1. - 2. * near / rays_o[..., 2]
-
- d0 = 1. / (W / (2. * focal)) * (rays_d[..., 0] / rays_d[..., 2] - rays_o[..., 0] / rays_o[..., 2])
- d1 = 1. / (H / (2. * focal)) * (rays_d[..., 1] / rays_d[..., 2] - rays_o[..., 1] / rays_o[..., 2])
- d2 = 2. * near / rays_o[..., 2]
-
- rays_o = torch.stack([o0, o1, o2], -1)
- rays_d = torch.stack([d0, d1, d2], -1)
-
- return rays_o, rays_d
-
-# Hierarchical sampling (section 5.2)
-def sample_pdf(bins, weights, N_samples, det=False, pytest=False):
- device = weights.device
- # Get pdf
- weights = weights + 1e-5 # prevent nans
- pdf = weights / torch.sum(weights, -1, keepdim=True)
- cdf = torch.cumsum(pdf, -1)
- cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], -1) # (batch, len(bins))
-
- # Take uniform samples
- if det:
- u = torch.linspace(0., 1., steps=N_samples, device=device)
- u = u.expand(list(cdf.shape[:-1]) + [N_samples])
- else:
- u = torch.rand(list(cdf.shape[:-1]) + [N_samples], device=device)
-
- # Pytest, overwrite u with numpy's fixed random numbers
- if pytest:
- np.random.seed(0)
- new_shape = list(cdf.shape[:-1]) + [N_samples]
- if det:
- u = np.linspace(0., 1., N_samples)
- u = np.broadcast_to(u, new_shape)
- else:
- u = np.random.rand(*new_shape)
- u = torch.Tensor(u)
-
- # Invert CDF
- u = u.contiguous()
- inds = searchsorted(cdf.detach(), u, right=True)
- below = torch.max(torch.zeros_like(inds - 1), inds - 1)
- above = torch.min((cdf.shape[-1] - 1) * torch.ones_like(inds), inds)
- inds_g = torch.stack([below, above], -1) # (batch, N_samples, 2)
-
- matched_shape = [inds_g.shape[0], inds_g.shape[1], cdf.shape[-1]]
- cdf_g = torch.gather(cdf.unsqueeze(1).expand(matched_shape), 2, inds_g)
- bins_g = torch.gather(bins.unsqueeze(1).expand(matched_shape), 2, inds_g)
-
- denom = (cdf_g[..., 1] - cdf_g[..., 0])
- denom = torch.where(denom < 1e-5, torch.ones_like(denom), denom)
- t = (u - cdf_g[..., 0]) / denom
- samples = bins_g[..., 0] + t * (bins_g[..., 1] - bins_g[..., 0])
-
- return samples
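A quick sketch of the hierarchical-sampling call above (bin edges and coarse weights are made-up values for illustration):

import torch

bins = torch.linspace(2.0, 6.0, 65).expand(4, 65)   # (N_rays, 65) depth bin edges
weights = torch.rand(4, 64)                          # (N_rays, 64) coarse weights per bin
z_fine = sample_pdf(bins, weights, N_samples=128)    # (4, 128) depths drawn from the weight PDF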
-
-
-def dda(rays_o, rays_d, bbox_3D):
- inv_ray_d = 1.0 / (rays_d + 1e-6)
- t_min = (bbox_3D[:1] - rays_o) * inv_ray_d # N_rays 3
- t_max = (bbox_3D[1:] - rays_o) * inv_ray_d
- t = torch.stack((t_min, t_max)) # 2 N_rays 3
- t_min = torch.max(torch.min(t, dim=0)[0], dim=-1, keepdim=True)[0]
- t_max = torch.min(torch.max(t, dim=0)[0], dim=-1, keepdim=True)[0]
- return t_min, t_max
-
-
-def ray_marcher(rays,
- N_samples=64,
- lindisp=False,
- perturb=0,
- bbox_3D=None):
- """
- sample points along the rays
- Inputs:
- rays: ()
-
- Returns:
-
- """
-
- # Decompose the inputs
- N_rays = rays.shape[0]
- rays_o, rays_d = rays[:, 0:3], rays[:, 3:6] # both (N_rays, 3)
- near, far = rays[:, 6:7], rays[:, 7:8] # both (N_rays, 1)
-
- if bbox_3D is not None:
- # cal aabb boundles
- near, far = dda(rays_o, rays_d, bbox_3D)
-
- # Sample depth points
- z_steps = torch.linspace(0, 1, N_samples, device=rays.device) # (N_samples)
- if not lindisp: # use linear sampling in depth space
- z_vals = near * (1 - z_steps) + far * z_steps
- else: # use linear sampling in disparity space
- z_vals = 1 / (1 / near * (1 - z_steps) + 1 / far * z_steps)
-
- z_vals = z_vals.expand(N_rays, N_samples)
-
- if perturb > 0: # perturb sampling depths (z_vals)
- z_vals_mid = 0.5 * (z_vals[:, :-1] + z_vals[:, 1:]) # (N_rays, N_samples-1) interval mid points
- # get intervals between samples
- upper = torch.cat([z_vals_mid, z_vals[:, -1:]], -1)
- lower = torch.cat([z_vals[:, :1], z_vals_mid], -1)
-
- perturb_rand = perturb * torch.rand(z_vals.shape, device=rays.device)
- z_vals = lower + (upper - lower) * perturb_rand
-
- xyz_coarse_sampled = rays_o.unsqueeze(1) + \
- rays_d.unsqueeze(1) * z_vals.unsqueeze(2) # (N_rays, N_samples, 3)
-
- return xyz_coarse_sampled, rays_o, rays_d, z_vals
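Continuing the ray sketch from above, ray_marcher expects each ray packed as origin, direction, near and far; a hedged example (the near/far values are arbitrary):

import torch

near = torch.full((rays_o.shape[0], 1), 2.0)
far = torch.full((rays_o.shape[0], 1), 6.0)
rays = torch.cat([rays_o, rays_d, near, far], dim=-1)          # (N_rays, 8)
xyz, rays_o, rays_d, z_vals = ray_marcher(rays, N_samples=64)  # xyz: (N_rays, 64, 3)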
-
-
-def read_pfm(filename):
- file = open(filename, 'rb')
- color = None
- width = None
- height = None
- scale = None
- endian = None
-
- header = file.readline().decode('utf-8').rstrip()
- if header == 'PF':
- color = True
- elif header == 'Pf':
- color = False
- else:
- raise Exception('Not a PFM file.')
-
- dim_match = re.match(r'^(\d+)\s(\d+)\s$', file.readline().decode('utf-8'))
- if dim_match:
- width, height = map(int, dim_match.groups())
- else:
- raise Exception('Malformed PFM header.')
-
- scale = float(file.readline().rstrip())
- if scale < 0: # little-endian
- endian = '<'
- scale = -scale
- else:
- endian = '>' # big-endian
-
- data = np.fromfile(file, endian + 'f')
- shape = (height, width, 3) if color else (height, width)
-
- data = np.reshape(data, shape)
- data = np.flipud(data)
- file.close()
- return data, scale
-
-
-def ndc_bbox(all_rays):
- near_min = torch.min(all_rays[...,:3].view(-1,3),dim=0)[0]
- near_max = torch.max(all_rays[..., :3].view(-1, 3), dim=0)[0]
- far_min = torch.min((all_rays[...,:3]+all_rays[...,3:6]).view(-1,3),dim=0)[0]
- far_max = torch.max((all_rays[...,:3]+all_rays[...,3:6]).view(-1, 3), dim=0)[0]
- print(f'===> ndc bbox near_min:{near_min} near_max:{near_max} far_min:{far_min} far_max:{far_max}')
- return torch.stack((torch.minimum(near_min,far_min),torch.maximum(near_max,far_max)))
-
-import torchvision
-normalize_vgg = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
- std=[0.229, 0.224, 0.225])
-
-def denormalize_vgg(img):
- im = img.clone()
- im[:, 0, :, :] *= 0.229
- im[:, 1, :, :] *= 0.224
- im[:, 2, :, :] *= 0.225
- im[:, 0, :, :] += 0.485
- im[:, 1, :, :] += 0.456
- im[:, 2, :, :] += 0.406
- return im
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py
deleted file mode 100644
index 24d2093b8b537a365c3e07261921b120b422918c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)),
- bbox_head=dict(dcn_on_last_conv=True))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/group_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/group_sampler.py
deleted file mode 100644
index f88cf3439446a2eb7d8656388ddbe93196315f5b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/group_sampler.py
+++ /dev/null
@@ -1,148 +0,0 @@
-from __future__ import division
-import math
-
-import numpy as np
-import torch
-from mmcv.runner import get_dist_info
-from torch.utils.data import Sampler
-
-
-class GroupSampler(Sampler):
-
- def __init__(self, dataset, samples_per_gpu=1):
- assert hasattr(dataset, 'flag')
- self.dataset = dataset
- self.samples_per_gpu = samples_per_gpu
- self.flag = dataset.flag.astype(np.int64)
- self.group_sizes = np.bincount(self.flag)
- self.num_samples = 0
- for i, size in enumerate(self.group_sizes):
- self.num_samples += int(np.ceil(
- size / self.samples_per_gpu)) * self.samples_per_gpu
-
- def __iter__(self):
- indices = []
- for i, size in enumerate(self.group_sizes):
- if size == 0:
- continue
- indice = np.where(self.flag == i)[0]
- assert len(indice) == size
- np.random.shuffle(indice)
- num_extra = int(np.ceil(size / self.samples_per_gpu)
- ) * self.samples_per_gpu - len(indice)
- indice = np.concatenate(
- [indice, np.random.choice(indice, num_extra)])
- indices.append(indice)
- indices = np.concatenate(indices)
- indices = [
- indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu]
- for i in np.random.permutation(
- range(len(indices) // self.samples_per_gpu))
- ]
- indices = np.concatenate(indices)
- indices = indices.astype(np.int64).tolist()
- assert len(indices) == self.num_samples
- return iter(indices)
-
- def __len__(self):
- return self.num_samples
-
-
-class DistributedGroupSampler(Sampler):
- """Sampler that restricts data loading to a subset of the dataset.
-
- It is especially useful in conjunction with
- :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
- process can pass a DistributedSampler instance as a DataLoader sampler,
- and load a subset of the original dataset that is exclusive to it.
-
- .. note::
- Dataset is assumed to be of constant size.
-
- Arguments:
- dataset: Dataset used for sampling.
- num_replicas (optional): Number of processes participating in
- distributed training.
- rank (optional): Rank of the current process within num_replicas.
- seed (int, optional): random seed used to shuffle the sampler if
- ``shuffle=True``. This number should be identical across all
- processes in the distributed group. Default: 0.
- """
-
- def __init__(self,
- dataset,
- samples_per_gpu=1,
- num_replicas=None,
- rank=None,
- seed=0):
- _rank, _num_replicas = get_dist_info()
- if num_replicas is None:
- num_replicas = _num_replicas
- if rank is None:
- rank = _rank
- self.dataset = dataset
- self.samples_per_gpu = samples_per_gpu
- self.num_replicas = num_replicas
- self.rank = rank
- self.epoch = 0
- self.seed = seed if seed is not None else 0
-
- assert hasattr(self.dataset, 'flag')
- self.flag = self.dataset.flag
- self.group_sizes = np.bincount(self.flag)
-
- self.num_samples = 0
- for i, j in enumerate(self.group_sizes):
- self.num_samples += int(
- math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu /
- self.num_replicas)) * self.samples_per_gpu
- self.total_size = self.num_samples * self.num_replicas
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch + self.seed)
-
- indices = []
- for i, size in enumerate(self.group_sizes):
- if size > 0:
- indice = np.where(self.flag == i)[0]
- assert len(indice) == size
- # add .numpy() to avoid bug when selecting indice in parrots.
- # TODO: check whether torch.randperm() can be replaced by
- # numpy.random.permutation().
- indice = indice[list(
- torch.randperm(int(size), generator=g).numpy())].tolist()
- extra = int(
- math.ceil(
- size * 1.0 / self.samples_per_gpu / self.num_replicas)
- ) * self.samples_per_gpu * self.num_replicas - len(indice)
- # pad indice
- tmp = indice.copy()
- for _ in range(extra // size):
- indice.extend(tmp)
- indice.extend(tmp[:extra % size])
- indices.extend(indice)
-
- assert len(indices) == self.total_size
-
- indices = [
- indices[j] for i in list(
- torch.randperm(
- len(indices) // self.samples_per_gpu, generator=g))
- for j in range(i * self.samples_per_gpu, (i + 1) *
- self.samples_per_gpu)
- ]
-
- # subsample
- offset = self.num_samples * self.rank
- indices = indices[offset:offset + self.num_samples]
- assert len(indices) == self.num_samples
-
- return iter(indices)
-
- def __len__(self):
- return self.num_samples
-
- def set_epoch(self, epoch):
- self.epoch = epoch
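A minimal sketch of plugging the sampler above into a DataLoader (the dataset object is an assumption; it must expose the per-sample `flag` array the sampler asserts on):

from torch.utils.data import DataLoader

sampler = GroupSampler(dataset, samples_per_gpu=2)           # `dataset` assumed to provide .flag
loader = DataLoader(dataset, batch_size=2, sampler=sampler)  # each batch is drawn from a single group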
diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/ingredient_vision.html b/spaces/AnimalEquality/chatbot/_proc/_docs/ingredient_vision.html
deleted file mode 100644
index f4c31b96a2381c56e86545afbf459b03f435ba6e..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/_proc/_docs/ingredient_vision.html
+++ /dev/null
@@ -1,802 +0,0 @@
-
-lv-recipe-chatbot - ingredient_vision
-
-Inspiration drawn from TaskMatrix aka Visual ChatGPT
-
-source
-
-
-BlipImageCaptioning
-
- BlipImageCaptioning (device:str)
-
-Useful when you want to know what is inside the photo.
-
-source
-
-
-BlipImageCaptioning.inference
-
- BlipImageCaptioning.inference
- BlipImageCaptioning.inference (image: PIL.Image)
-
-  image    PIL.Image
-  Returns  str          Caption for the image
-
-source
-
-
-BlipVQA
-
- BlipVQA (device:str)
-
-BLIP Visual Question Answering. Useful when you need an answer for a question based on an image. Examples: what is the background color of this image, how many cats are in this figure, what is in this figure?
-
-source
-
-
-BlipVQA.inference
-
- BlipVQA.inference (image: PIL.Image, question: str)
-
-  image     PIL.Image
-  question  str
-  Returns   str          Answer to the query on the image
-
-
sample_images = os.listdir(SAMPLE_IMG_DIR)
- sample_images
-
-
['veggie-fridge.jpeg',
- 'veg-groceries-table.jpg',
- 'fridge-splendid.jpg',
- 'neat-veg-groceries.jpg',
- 'veg-groceries-table.jpeg',
- 'Fruits-and-vegetables-one-a-table.jpg']
-
-
-
-
for img in sample_images:
- display(format_image(SAMPLE_IMG_DIR / img))
-
-The process:
-
-Format image
-Get description (caption)
-Pass caption and ingredient queries to VQA
-
-
-
vqa = BlipVQA("cpu")
- img_cap = BlipImageCaptioning("cpu")
-
-
for img in sample_images:
-     img = format_image(SAMPLE_IMG_DIR / img)
-     desc = img_cap.inference(img)
-     display(desc, img.resize((int(img.size[0] * 0.5), int(img.size[1] * 0.5))))
-
-
CPU times: user 11.4 s, sys: 7.42 ms, total: 11.4 s
-Wall time: 1.19 s
-CPU times: user 13.5 s, sys: 7.5 ms, total: 13.5 s
-Wall time: 1.36 s
-CPU times: user 12 s, sys: 0 ns, total: 12 s
-Wall time: 1.21 s
-CPU times: user 12.5 s, sys: 0 ns, total: 12.5 s
-Wall time: 1.27 s
-CPU times: user 9.25 s, sys: 7.71 ms, total: 9.25 s
-Wall time: 936 ms
-CPU times: user 15.7 s, sys: 7.66 ms, total: 15.7 s
-Wall time: 1.58 s
-
-
-
'a refrigerator with food inside'
-
-
-
-
-
-
'a table with a variety of fruits and vegetables'
-
-
-
-
-
-
'a refrigerator filled with food and drinks'
-
-
-
-
-
-
'a counter with various foods on it'
-
-
-
-
-
-
-
-
-
-
'a table with a variety of fruits and vegetables'
-
-
-
-
-
-
-
for img in sample_images:
-     img = format_image(SAMPLE_IMG_DIR / img)
-     desc = img_cap.inference(img)
-     # first query assumed (vegetables), reconstructed from the printed answers below
-     answer = vqa.inference(
-         img, f"What are three of the vegetables seen in the image if any?"
-     )
-     answer += " \n " + vqa.inference(
-         img, f"What are three of the fruits seen in the image if any?"
-     )
-     answer += " \n " + vqa.inference(
-         img, f"What grains and starches are in the image if any?"
-     )
-     answer += " \n " + vqa.inference(img, f"Is there plant-based milk in the image?")
-     print(
-         f"""{desc}
-{answer}"""
-     )
-     display(img.resize((int(img.size[0] * 0.75), int(img.size[1] * 0.75))))
-
-
CPU times: user 7.67 s, sys: 12.1 ms, total: 7.68 s
-Wall time: 779 ms
-a refrigerator with food inside
-cabbage lettuce onion
-apples
-rice
-yes
-CPU times: user 10.5 s, sys: 8.13 ms, total: 10.5 s
-Wall time: 1.06 s
-a table with a variety of fruits and vegetables
-broccoli and tomatoes
-bananas apples oranges
-potatoes
-yes
-CPU times: user 11.7 s, sys: 0 ns, total: 11.7 s
-Wall time: 1.18 s
-a refrigerator filled with food and drinks
-broccoli and zucchini
-bananas
-rice
-yes
-CPU times: user 11.5 s, sys: 12.2 ms, total: 11.5 s
-Wall time: 1.16 s
-a counter with various foods on it
-carrots and broccoli
-apples bananas and tomatoes
-rice
-yes
-CPU times: user 9.62 s, sys: 4.22 ms, total: 9.63 s
-Wall time: 973 ms
-a wooden table
-potatoes and carrots
-apples
-potatoes
-yes
-CPU times: user 11.1 s, sys: 8.23 ms, total: 11.1 s
-Wall time: 1.12 s
-a table with a variety of fruits and vegetables
-peppers broccoli and squash
-watermelon limes and pineapple
-rice
-no
-
-
-source
-
-
-VeganIngredientFinder
-
- VeganIngredientFinder ()
-
-Initialize self. See help(type(self)) for accurate signature.
-
-source
-
-
-VeganIngredientFinder.list_ingredients
-
- VeganIngredientFinder.list_ingredients (img:str)
-
-  img      str    Image file path
-  Returns  str
-
-
vegan_ingred_finder = VeganIngredientFinder()
- vegan_ingred_finder.list_ingredients(SAMPLE_IMG_DIR / sample_images[0 ])
-
-
'cabbage lettuce onion\napples\nrice\nplant-based milk'
\ No newline at end of file
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/drop.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/drop.py
deleted file mode 100644
index b7b4fccd457a0d51fb10c789df3c8537fe7b67c1..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/drop.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-
-from annotator.uniformer.mmcv import build_from_cfg
-from .registry import DROPOUT_LAYERS
-
-
-def drop_path(x, drop_prob=0., training=False):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of
- residual blocks).
-
- We follow the implementation
- https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501
- """
- if drop_prob == 0. or not training:
- return x
- keep_prob = 1 - drop_prob
- # handle tensors with different dimensions, not just 4D tensors.
- shape = (x.shape[0], ) + (1, ) * (x.ndim - 1)
- random_tensor = keep_prob + torch.rand(
- shape, dtype=x.dtype, device=x.device)
- output = x.div(keep_prob) * random_tensor.floor()
- return output
-
-
-@DROPOUT_LAYERS.register_module()
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of
- residual blocks).
-
- We follow the implementation
- https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501
-
- Args:
- drop_prob (float): Probability of the path to be zeroed. Default: 0.1
- """
-
- def __init__(self, drop_prob=0.1):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
-
-@DROPOUT_LAYERS.register_module()
-class Dropout(nn.Dropout):
- """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of
- ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with
- ``DropPath``
-
- Args:
- drop_prob (float): Probability of the elements to be
- zeroed. Default: 0.5.
- inplace (bool): Do the operation inplace or not. Default: False.
- """
-
- def __init__(self, drop_prob=0.5, inplace=False):
- super().__init__(p=drop_prob, inplace=inplace)
-
-
-def build_dropout(cfg, default_args=None):
- """Builder for drop out layers."""
- return build_from_cfg(cfg, DROPOUT_LAYERS, default_args)
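A short sketch of building a drop layer through the registry defined in this file (shapes are illustrative):

import torch

layer = build_dropout(dict(type='DropPath', drop_prob=0.1))
layer.train()                       # stochastic depth only acts in training mode
x = torch.randn(8, 16, 32, 32)
out = layer(x)                      # same shape as x; whole samples are zeroed, the rest rescaled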
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/knn.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/knn.py
deleted file mode 100644
index f335785036669fc19239825b0aae6dde3f73bf92..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/knn.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['knn_forward'])
-
-
-class KNN(Function):
- r"""KNN (CUDA) based on heap data structure.
- Modified from `PAConv `_.
-
- Find k-nearest points.
- """
-
- @staticmethod
- def forward(ctx,
- k: int,
- xyz: torch.Tensor,
- center_xyz: torch.Tensor = None,
- transposed: bool = False) -> torch.Tensor:
- """
- Args:
- k (int): number of nearest neighbors.
- xyz (Tensor): (B, N, 3) if transposed == False, else (B, 3, N).
- xyz coordinates of the features.
- center_xyz (Tensor, optional): (B, npoint, 3) if transposed ==
- False, else (B, 3, npoint). centers of the knn query.
- Default: None.
- transposed (bool, optional): whether the input tensors are
- transposed. Should not explicitly use this keyword when
- calling knn (=KNN.apply), just add the fourth param.
- Default: False.
-
- Returns:
- Tensor: (B, k, npoint) tensor with the indices of
- the features that form k-nearest neighbours.
- """
- assert (k > 0) & (k < 100), 'k should be in range(0, 100)'
-
- if center_xyz is None:
- center_xyz = xyz
-
- if transposed:
- xyz = xyz.transpose(2, 1).contiguous()
- center_xyz = center_xyz.transpose(2, 1).contiguous()
-
- assert xyz.is_contiguous() # [B, N, 3]
- assert center_xyz.is_contiguous() # [B, npoint, 3]
-
- center_xyz_device = center_xyz.get_device()
- assert center_xyz_device == xyz.get_device(), \
- 'center_xyz and xyz should be put on the same device'
- if torch.cuda.current_device() != center_xyz_device:
- torch.cuda.set_device(center_xyz_device)
-
- B, npoint, _ = center_xyz.shape
- N = xyz.shape[1]
-
- idx = center_xyz.new_zeros((B, npoint, k)).int()
- dist2 = center_xyz.new_zeros((B, npoint, k)).float()
-
- ext_module.knn_forward(
- xyz, center_xyz, idx, dist2, b=B, n=N, m=npoint, nsample=k)
- # idx shape to [B, k, npoint]
- idx = idx.transpose(2, 1).contiguous()
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(idx)
- return idx
-
- @staticmethod
- def backward(ctx, a=None):
- return None, None, None
-
-
-knn = KNN.apply
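An illustrative call of the wrapper above (this needs a CUDA build of the `_ext` extension; the point counts are assumptions):

import torch

xyz = torch.rand(2, 1024, 3, device='cuda')      # source point clouds
centers = torch.rand(2, 128, 3, device='cuda')   # query centers
idx = knn(8, xyz, centers)                       # (2, 8, 128) indices of the 8 nearest neighbours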
diff --git a/spaces/ArtGAN/Diffusion-API/diffusion_webui/__init__.py b/spaces/ArtGAN/Diffusion-API/diffusion_webui/__init__.py
deleted file mode 100644
index 6e49af236dab7f041fb4fe27d50b728eaaf552d9..0000000000000000000000000000000000000000
--- a/spaces/ArtGAN/Diffusion-API/diffusion_webui/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from diffusion_webui.diffusion_models.controlnet_inpaint_pipeline import (
- StableDiffusionControlNetInpaintGenerator,
-)
-from diffusion_webui.diffusion_models.controlnet_pipeline import (
- StableDiffusionControlNetGenerator,
-)
-from diffusion_webui.diffusion_models.img2img_app import (
- StableDiffusionImage2ImageGenerator,
-)
-from diffusion_webui.diffusion_models.inpaint_app import (
- StableDiffusionInpaintGenerator,
-)
-from diffusion_webui.diffusion_models.text2img_app import (
- StableDiffusionText2ImageGenerator,
-)
-
-__version__ = "2.5.0"
diff --git a/spaces/Ash58947/Jan/Dockerfile b/spaces/Ash58947/Jan/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Ash58947/Jan/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Aspik101/Polish_Llama2/app.py b/spaces/Aspik101/Polish_Llama2/app.py
deleted file mode 100644
index bb1e30be66b43c097358e55901412efa9baaf834..0000000000000000000000000000000000000000
--- a/spaces/Aspik101/Polish_Llama2/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import gradio as gr
-import random
-import time
-from ctransformers import AutoModelForCausalLM
-import datetime
-import os
-
-
-params = {
- "max_new_tokens":512,
- "stop":["" ,"<|endoftext|>"],
- "temperature":0.7,
- "top_p":0.8,
- "stream":True,
- "batch_size": 8}
-
-
-def save_log(task, to_save):
- with open("logs.txt", "a") as log_file:
- current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- log_file.write(f"[{current_time}] - {task}: {to_save}\n")
- print(to_save)
-
-
-llm = AutoModelForCausalLM.from_pretrained("Aspik101/Llama-2-7b-chat-hf-pl-lora_GGML", model_type="llama")
-
-with gr.Blocks() as demo:
- chatbot = gr.Chatbot()
- msg = gr.Textbox()
- clear = gr.Button("Clear")
-
- def user(user_message, history):
- return "", history + [[user_message, None]]
-
- def parse_history(hist):
- history_ = ""
- for q, a in hist:
- history_ += f": {q } \n"
- if a:
- history_ += f": {a} \n"
- return history_
-
- def bot(history):
- print("history: ",history)
- prompt = f"Jesteś AI assystentem. Odpowiadaj po polsku. {parse_history(history)}. :"
- print("prompt: ",prompt)
- stream = llm(prompt, **params)
- history[-1][1] = ""
- answer_save = ""
- for character in stream:
- history[-1][1] += character
- answer_save += character
- time.sleep(0.005)
- yield history
-
- print("answer_save: ",answer_save)
- msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
- bot, chatbot, chatbot
- )
- clear.click(lambda: None, None, chatbot, queue=False)
-
-demo.queue()
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py
deleted file mode 100644
index 3c50c5dcfeeda2efed282200a5c5cc8c5f7542f7..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from .__about__ import (
- __author__,
- __copyright__,
- __email__,
- __license__,
- __summary__,
- __title__,
- __uri__,
- __version__,
-)
-
-__all__ = [
- "__title__",
- "__summary__",
- "__uri__",
- "__version__",
- "__author__",
- "__email__",
- "__license__",
- "__copyright__",
-]
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/extra_validations.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/extra_validations.py
deleted file mode 100644
index 4130a421cfd7260d323b13cbd9d75ab8146e6030..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/extra_validations.py
+++ /dev/null
@@ -1,36 +0,0 @@
-"""The purpose of this module is implement PEP 621 validations that are
-difficult to express as a JSON Schema (or that are not supported by the current
-JSON Schema library).
-"""
-
-from typing import Mapping, TypeVar
-
-from .error_reporting import ValidationError
-
-T = TypeVar("T", bound=Mapping)
-
-
-class RedefiningStaticFieldAsDynamic(ValidationError):
- """According to PEP 621:
-
- Build back-ends MUST raise an error if the metadata specifies a field
- statically as well as being listed in dynamic.
- """
-
-
-def validate_project_dynamic(pyproject: T) -> T:
- project_table = pyproject.get("project", {})
- dynamic = project_table.get("dynamic", [])
-
- for field in dynamic:
- if field in project_table:
- msg = f"You cannot provide a value for `project.{field}` and "
- msg += "list it under `project.dynamic` at the same time"
- name = f"data.project.{field}"
- value = {field: project_table[field], "...": " # ...", "dynamic": dynamic}
- raise RedefiningStaticFieldAsDynamic(msg, value, name, rule="PEP 621")
-
- return pyproject
-
-
-EXTRA_VALIDATIONS = (validate_project_dynamic,)
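A tiny sketch of the error this validation raises (the table contents are made up for illustration):

pyproject = {
    "project": {
        "name": "demo",
        "version": "1.0",
        "dynamic": ["version"],   # "version" is given statically and also listed as dynamic
    }
}
validate_project_dynamic(pyproject)   # raises RedefiningStaticFieldAsDynamic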
diff --git a/spaces/Awiny/Image2Paragraph/models/image_text_transformation.py b/spaces/Awiny/Image2Paragraph/models/image_text_transformation.py
deleted file mode 100644
index db311d9c3d9e78e86322b26bd694fad3d848d22c..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/image_text_transformation.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from models.blip2_model import ImageCaptioning
-from models.grit_model import DenseCaptioning
-from models.gpt_model import ImageToText
-from models.controlnet_model import TextToImage
-from models.region_semantic import RegionSemantic
-from utils.util import read_image_width_height, display_images_and_text, resize_long_edge
-import argparse
-from PIL import Image
-import base64
-from io import BytesIO
-import os
-
-def pil_image_to_base64(image):
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_str = base64.b64encode(buffered.getvalue()).decode()
- return img_str
-
-
-class ImageTextTransformation:
- def __init__(self, args):
- # Load your big model here
- self.args = args
- self.init_models()
- self.ref_image = None
-
- def init_models(self):
- openai_key = os.environ['OPENAI_KEY']
- print(self.args)
- print('\033[1;34m' + "Welcome to the Image2Paragraph toolbox...".center(50, '-') + '\033[0m')
- print('\033[1;33m' + "Initializing models...".center(50, '-') + '\033[0m')
- print('\033[1;31m' + "This is time-consuming, please wait...".center(50, '-') + '\033[0m')
- self.image_caption_model = ImageCaptioning(device=self.args.image_caption_device, captioner_base_model=self.args.captioner_base_model)
- self.dense_caption_model = DenseCaptioning(device=self.args.dense_caption_device)
- self.gpt_model = ImageToText(openai_key)
- self.controlnet_model = TextToImage(device=self.args.contolnet_device)
- self.region_semantic_model = RegionSemantic(device=self.args.semantic_segment_device, image_caption_model=self.image_caption_model, region_classify_model=self.args.region_classify_model, sam_arch=self.args.sam_arch)
- print('\033[1;32m' + "Model initialization finished!".center(50, '-') + '\033[0m')
-
-
- def image_to_text(self, img_src):
- # the information to generate paragraph based on the context
- self.ref_image = Image.open(img_src)
- # resize image to long edge 384
- self.ref_image = resize_long_edge(self.ref_image, 384)
- width, height = read_image_width_height(img_src)
- print(self.args)
- if self.args.image_caption:
- image_caption = self.image_caption_model.image_caption(img_src)
- else:
- image_caption = " "
- if self.args.dense_caption:
- dense_caption = self.dense_caption_model.image_dense_caption(img_src)
- else:
- dense_caption = " "
- if self.args.semantic_segment:
- region_semantic = self.region_semantic_model.region_semantic(img_src)
- else:
- region_semantic = " "
- generated_text = self.gpt_model.paragraph_summary_with_gpt(image_caption, dense_caption, region_semantic, width, height)
- return image_caption, dense_caption, region_semantic, generated_text
-
- def text_to_image(self, text):
- generated_image = self.controlnet_model.text_to_image(text, self.ref_image)
- return generated_image
-
- def text_to_image_retrieval(self, text):
- pass
-
- def image_to_text_retrieval(self, image):
- pass
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py
deleted file mode 100644
index 9b127bc6427f5c60c8cf85603a3d8a093c3501c4..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_spinners.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_spinners.py
deleted file mode 100644
index d0bb1fe751677f0ee83fc6bb876ed72443fdcde7..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_spinners.py
+++ /dev/null
@@ -1,482 +0,0 @@
-"""
-Spinners are from:
-* cli-spinners:
- MIT License
- Copyright (c) Sindre Sorhus (sindresorhus.com)
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights to
- use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
- the Software, and to permit persons to whom the Software is furnished to do so,
- subject to the following conditions:
- The above copyright notice and this permission notice shall be included
- in all copies or substantial portions of the Software.
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
- INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
- PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
- FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- IN THE SOFTWARE.
-"""
-
-SPINNERS = {
- "dots": {
- "interval": 80,
- "frames": "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏",
- },
- "dots2": {"interval": 80, "frames": "⣾⣽⣻⢿⡿⣟⣯⣷"},
- "dots3": {
- "interval": 80,
- "frames": "⠋⠙⠚⠞⠖⠦⠴⠲⠳⠓",
- },
- "dots4": {
- "interval": 80,
- "frames": "⠄⠆⠇⠋⠙⠸⠰⠠⠰⠸⠙⠋⠇⠆",
- },
- "dots5": {
- "interval": 80,
- "frames": "⠋⠙⠚⠒⠂⠂⠒⠲⠴⠦⠖⠒⠐⠐⠒⠓⠋",
- },
- "dots6": {
- "interval": 80,
- "frames": "⠁⠉⠙⠚⠒⠂⠂⠒⠲⠴⠤⠄⠄⠤⠴⠲⠒⠂⠂⠒⠚⠙⠉⠁",
- },
- "dots7": {
- "interval": 80,
- "frames": "⠈⠉⠋⠓⠒⠐⠐⠒⠖⠦⠤⠠⠠⠤⠦⠖⠒⠐⠐⠒⠓⠋⠉⠈",
- },
- "dots8": {
- "interval": 80,
- "frames": "⠁⠁⠉⠙⠚⠒⠂⠂⠒⠲⠴⠤⠄⠄⠤⠠⠠⠤⠦⠖⠒⠐⠐⠒⠓⠋⠉⠈⠈",
- },
- "dots9": {"interval": 80, "frames": "⢹⢺⢼⣸⣇⡧⡗⡏"},
- "dots10": {"interval": 80, "frames": "⢄⢂⢁⡁⡈⡐⡠"},
- "dots11": {"interval": 100, "frames": "⠁⠂⠄⡀⢀⠠⠐⠈"},
- "dots12": {
- "interval": 80,
- "frames": [
- "⢀⠀",
- "⡀⠀",
- "⠄⠀",
- "⢂⠀",
- "⡂⠀",
- "⠅⠀",
- "⢃⠀",
- "⡃⠀",
- "⠍⠀",
- "⢋⠀",
- "⡋⠀",
- "⠍⠁",
- "⢋⠁",
- "⡋⠁",
- "⠍⠉",
- "⠋⠉",
- "⠋⠉",
- "⠉⠙",
- "⠉⠙",
- "⠉⠩",
- "⠈⢙",
- "⠈⡙",
- "⢈⠩",
- "⡀⢙",
- "⠄⡙",
- "⢂⠩",
- "⡂⢘",
- "⠅⡘",
- "⢃⠨",
- "⡃⢐",
- "⠍⡐",
- "⢋⠠",
- "⡋⢀",
- "⠍⡁",
- "⢋⠁",
- "⡋⠁",
- "⠍⠉",
- "⠋⠉",
- "⠋⠉",
- "⠉⠙",
- "⠉⠙",
- "⠉⠩",
- "⠈⢙",
- "⠈⡙",
- "⠈⠩",
- "⠀⢙",
- "⠀⡙",
- "⠀⠩",
- "⠀⢘",
- "⠀⡘",
- "⠀⠨",
- "⠀⢐",
- "⠀⡐",
- "⠀⠠",
- "⠀⢀",
- "⠀⡀",
- ],
- },
- "dots8Bit": {
- "interval": 80,
- "frames": "⠀⠁⠂⠃⠄⠅⠆⠇⡀⡁⡂⡃⡄⡅⡆⡇⠈⠉⠊⠋⠌⠍⠎⠏⡈⡉⡊⡋⡌⡍⡎⡏⠐⠑⠒⠓⠔⠕⠖⠗⡐⡑⡒⡓⡔⡕⡖⡗⠘⠙⠚⠛⠜⠝⠞⠟⡘⡙"
- "⡚⡛⡜⡝⡞⡟⠠⠡⠢⠣⠤⠥⠦⠧⡠⡡⡢⡣⡤⡥⡦⡧⠨⠩⠪⠫⠬⠭⠮⠯⡨⡩⡪⡫⡬⡭⡮⡯⠰⠱⠲⠳⠴⠵⠶⠷⡰⡱⡲⡳⡴⡵⡶⡷⠸⠹⠺⠻"
- "⠼⠽⠾⠿⡸⡹⡺⡻⡼⡽⡾⡿⢀⢁⢂⢃⢄⢅⢆⢇⣀⣁⣂⣃⣄⣅⣆⣇⢈⢉⢊⢋⢌⢍⢎⢏⣈⣉⣊⣋⣌⣍⣎⣏⢐⢑⢒⢓⢔⢕⢖⢗⣐⣑⣒⣓⣔⣕"
- "⣖⣗⢘⢙⢚⢛⢜⢝⢞⢟⣘⣙⣚⣛⣜⣝⣞⣟⢠⢡⢢⢣⢤⢥⢦⢧⣠⣡⣢⣣⣤⣥⣦⣧⢨⢩⢪⢫⢬⢭⢮⢯⣨⣩⣪⣫⣬⣭⣮⣯⢰⢱⢲⢳⢴⢵⢶⢷"
- "⣰⣱⣲⣳⣴⣵⣶⣷⢸⢹⢺⢻⢼⢽⢾⢿⣸⣹⣺⣻⣼⣽⣾⣿",
- },
- "line": {"interval": 130, "frames": ["-", "\\", "|", "/"]},
- "line2": {"interval": 100, "frames": "⠂-–—–-"},
- "pipe": {"interval": 100, "frames": "┤┘┴└├┌┬┐"},
- "simpleDots": {"interval": 400, "frames": [". ", ".. ", "...", " "]},
- "simpleDotsScrolling": {
- "interval": 200,
- "frames": [". ", ".. ", "...", " ..", " .", " "],
- },
- "star": {"interval": 70, "frames": "✶✸✹✺✹✷"},
- "star2": {"interval": 80, "frames": "+x*"},
- "flip": {
- "interval": 70,
- "frames": "___-``'´-___",
- },
- "hamburger": {"interval": 100, "frames": "☱☲☴"},
- "growVertical": {
- "interval": 120,
- "frames": "▁▃▄▅▆▇▆▅▄▃",
- },
- "growHorizontal": {
- "interval": 120,
- "frames": "▏▎▍▌▋▊▉▊▋▌▍▎",
- },
- "balloon": {"interval": 140, "frames": " .oO@* "},
- "balloon2": {"interval": 120, "frames": ".oO°Oo."},
- "noise": {"interval": 100, "frames": "▓▒░"},
- "bounce": {"interval": 120, "frames": "⠁⠂⠄⠂"},
- "boxBounce": {"interval": 120, "frames": "▖▘▝▗"},
- "boxBounce2": {"interval": 100, "frames": "▌▀▐▄"},
- "triangle": {"interval": 50, "frames": "◢◣◤◥"},
- "arc": {"interval": 100, "frames": "◜◠◝◞◡◟"},
- "circle": {"interval": 120, "frames": "◡⊙◠"},
- "squareCorners": {"interval": 180, "frames": "◰◳◲◱"},
- "circleQuarters": {"interval": 120, "frames": "◴◷◶◵"},
- "circleHalves": {"interval": 50, "frames": "◐◓◑◒"},
- "squish": {"interval": 100, "frames": "╫╪"},
- "toggle": {"interval": 250, "frames": "⊶⊷"},
- "toggle2": {"interval": 80, "frames": "▫▪"},
- "toggle3": {"interval": 120, "frames": "□■"},
- "toggle4": {"interval": 100, "frames": "■□▪▫"},
- "toggle5": {"interval": 100, "frames": "▮▯"},
- "toggle6": {"interval": 300, "frames": "ဝ၀"},
- "toggle7": {"interval": 80, "frames": "⦾⦿"},
- "toggle8": {"interval": 100, "frames": "◍◌"},
- "toggle9": {"interval": 100, "frames": "◉◎"},
- "toggle10": {"interval": 100, "frames": "㊂㊀㊁"},
- "toggle11": {"interval": 50, "frames": "⧇⧆"},
- "toggle12": {"interval": 120, "frames": "☗☖"},
- "toggle13": {"interval": 80, "frames": "=*-"},
- "arrow": {"interval": 100, "frames": "←↖↑↗→↘↓↙"},
- "arrow2": {
- "interval": 80,
- "frames": ["⬆️ ", "↗️ ", "➡️ ", "↘️ ", "⬇️ ", "↙️ ", "⬅️ ", "↖️ "],
- },
- "arrow3": {
- "interval": 120,
- "frames": ["▹▹▹▹▹", "▸▹▹▹▹", "▹▸▹▹▹", "▹▹▸▹▹", "▹▹▹▸▹", "▹▹▹▹▸"],
- },
- "bouncingBar": {
- "interval": 80,
- "frames": [
- "[ ]",
- "[= ]",
- "[== ]",
- "[=== ]",
- "[ ===]",
- "[ ==]",
- "[ =]",
- "[ ]",
- "[ =]",
- "[ ==]",
- "[ ===]",
- "[====]",
- "[=== ]",
- "[== ]",
- "[= ]",
- ],
- },
- "bouncingBall": {
- "interval": 80,
- "frames": [
- "( ● )",
- "( ● )",
- "( ● )",
- "( ● )",
- "( ●)",
- "( ● )",
- "( ● )",
- "( ● )",
- "( ● )",
- "(● )",
- ],
- },
- "smiley": {"interval": 200, "frames": ["😄 ", "😝 "]},
- "monkey": {"interval": 300, "frames": ["🙈 ", "🙈 ", "🙉 ", "🙊 "]},
- "hearts": {"interval": 100, "frames": ["💛 ", "💙 ", "💜 ", "💚 ", "❤️ "]},
- "clock": {
- "interval": 100,
- "frames": [
- "🕛 ",
- "🕐 ",
- "🕑 ",
- "🕒 ",
- "🕓 ",
- "🕔 ",
- "🕕 ",
- "🕖 ",
- "🕗 ",
- "🕘 ",
- "🕙 ",
- "🕚 ",
- ],
- },
- "earth": {"interval": 180, "frames": ["🌍 ", "🌎 ", "🌏 "]},
- "material": {
- "interval": 17,
- "frames": [
- "█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "███▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "████▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "██████▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "██████▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "███████▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "████████▁▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "██████████▁▁▁▁▁▁▁▁▁▁",
- "███████████▁▁▁▁▁▁▁▁▁",
- "█████████████▁▁▁▁▁▁▁",
- "██████████████▁▁▁▁▁▁",
- "██████████████▁▁▁▁▁▁",
- "▁██████████████▁▁▁▁▁",
- "▁██████████████▁▁▁▁▁",
- "▁██████████████▁▁▁▁▁",
- "▁▁██████████████▁▁▁▁",
- "▁▁▁██████████████▁▁▁",
- "▁▁▁▁█████████████▁▁▁",
- "▁▁▁▁██████████████▁▁",
- "▁▁▁▁██████████████▁▁",
- "▁▁▁▁▁██████████████▁",
- "▁▁▁▁▁██████████████▁",
- "▁▁▁▁▁██████████████▁",
- "▁▁▁▁▁▁██████████████",
- "▁▁▁▁▁▁██████████████",
- "▁▁▁▁▁▁▁█████████████",
- "▁▁▁▁▁▁▁█████████████",
- "▁▁▁▁▁▁▁▁████████████",
- "▁▁▁▁▁▁▁▁████████████",
- "▁▁▁▁▁▁▁▁▁███████████",
- "▁▁▁▁▁▁▁▁▁███████████",
- "▁▁▁▁▁▁▁▁▁▁██████████",
- "▁▁▁▁▁▁▁▁▁▁██████████",
- "▁▁▁▁▁▁▁▁▁▁▁▁████████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁██████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
- "█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
- "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
- "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
- "███▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
- "████▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
- "█████▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "█████▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "██████▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "████████▁▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "█████████▁▁▁▁▁▁▁▁▁▁▁",
- "███████████▁▁▁▁▁▁▁▁▁",
- "████████████▁▁▁▁▁▁▁▁",
- "████████████▁▁▁▁▁▁▁▁",
- "██████████████▁▁▁▁▁▁",
- "██████████████▁▁▁▁▁▁",
- "▁██████████████▁▁▁▁▁",
- "▁██████████████▁▁▁▁▁",
- "▁▁▁█████████████▁▁▁▁",
- "▁▁▁▁▁████████████▁▁▁",
- "▁▁▁▁▁████████████▁▁▁",
- "▁▁▁▁▁▁███████████▁▁▁",
- "▁▁▁▁▁▁▁▁█████████▁▁▁",
- "▁▁▁▁▁▁▁▁█████████▁▁▁",
- "▁▁▁▁▁▁▁▁▁█████████▁▁",
- "▁▁▁▁▁▁▁▁▁█████████▁▁",
- "▁▁▁▁▁▁▁▁▁▁█████████▁",
- "▁▁▁▁▁▁▁▁▁▁▁████████▁",
- "▁▁▁▁▁▁▁▁▁▁▁████████▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁███████▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁███████▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
- ],
- },
- "moon": {
- "interval": 80,
- "frames": ["🌑 ", "🌒 ", "🌓 ", "🌔 ", "🌕 ", "🌖 ", "🌗 ", "🌘 "],
- },
- "runner": {"interval": 140, "frames": ["🚶 ", "🏃 "]},
- "pong": {
- "interval": 80,
- "frames": [
- "▐⠂ ▌",
- "▐⠈ ▌",
- "▐ ⠂ ▌",
- "▐ ⠠ ▌",
- "▐ ⡀ ▌",
- "▐ ⠠ ▌",
- "▐ ⠂ ▌",
- "▐ ⠈ ▌",
- "▐ ⠂ ▌",
- "▐ ⠠ ▌",
- "▐ ⡀ ▌",
- "▐ ⠠ ▌",
- "▐ ⠂ ▌",
- "▐ ⠈ ▌",
- "▐ ⠂▌",
- "▐ ⠠▌",
- "▐ ⡀▌",
- "▐ ⠠ ▌",
- "▐ ⠂ ▌",
- "▐ ⠈ ▌",
- "▐ ⠂ ▌",
- "▐ ⠠ ▌",
- "▐ ⡀ ▌",
- "▐ ⠠ ▌",
- "▐ ⠂ ▌",
- "▐ ⠈ ▌",
- "▐ ⠂ ▌",
- "▐ ⠠ ▌",
- "▐ ⡀ ▌",
- "▐⠠ ▌",
- ],
- },
- "shark": {
- "interval": 120,
- "frames": [
- "▐|\\____________▌",
- "▐_|\\___________▌",
- "▐__|\\__________▌",
- "▐___|\\_________▌",
- "▐____|\\________▌",
- "▐_____|\\_______▌",
- "▐______|\\______▌",
- "▐_______|\\_____▌",
- "▐________|\\____▌",
- "▐_________|\\___▌",
- "▐__________|\\__▌",
- "▐___________|\\_▌",
- "▐____________|\\▌",
- "▐____________/|▌",
- "▐___________/|_▌",
- "▐__________/|__▌",
- "▐_________/|___▌",
- "▐________/|____▌",
- "▐_______/|_____▌",
- "▐______/|______▌",
- "▐_____/|_______▌",
- "▐____/|________▌",
- "▐___/|_________▌",
- "▐__/|__________▌",
- "▐_/|___________▌",
- "▐/|____________▌",
- ],
- },
- "dqpb": {"interval": 100, "frames": "dqpb"},
- "weather": {
- "interval": 100,
- "frames": [
- "☀️ ",
- "☀️ ",
- "☀️ ",
- "🌤 ",
- "⛅️ ",
- "🌥 ",
- "☁️ ",
- "🌧 ",
- "🌨 ",
- "🌧 ",
- "🌨 ",
- "🌧 ",
- "🌨 ",
- "⛈ ",
- "🌨 ",
- "🌧 ",
- "🌨 ",
- "☁️ ",
- "🌥 ",
- "⛅️ ",
- "🌤 ",
- "☀️ ",
- "☀️ ",
- ],
- },
- "christmas": {"interval": 400, "frames": "🌲🎄"},
- "grenade": {
- "interval": 80,
- "frames": [
- "، ",
- "′ ",
- " ´ ",
- " ‾ ",
- " ⸌",
- " ⸊",
- " |",
- " ⁎",
- " ⁕",
- " ෴ ",
- " ⁓",
- " ",
- " ",
- " ",
- ],
- },
- "point": {"interval": 125, "frames": ["∙∙∙", "●∙∙", "∙●∙", "∙∙●", "∙∙∙"]},
- "layer": {"interval": 150, "frames": "-=≡"},
- "betaWave": {
- "interval": 80,
- "frames": [
- "ρββββββ",
- "βρβββββ",
- "ββρββββ",
- "βββρβββ",
- "ββββρββ",
- "βββββρβ",
- "ββββββρ",
- ],
- },
- "aesthetic": {
- "interval": 80,
- "frames": [
- "▰▱▱▱▱▱▱",
- "▰▰▱▱▱▱▱",
- "▰▰▰▱▱▱▱",
- "▰▰▰▰▱▱▱",
- "▰▰▰▰▰▱▱",
- "▰▰▰▰▰▰▱",
- "▰▰▰▰▰▰▰",
- "▰▱▱▱▱▱▱",
- ],
- },
-}
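Each entry in `SPINNERS` pairs an `interval` (frame duration in milliseconds) with a `frames` sequence (a string or a list of strings). A small sketch of how such an entry is meant to be consumed, outside of rich itself:

```python
# Sketch (not part of pip/rich): picking the current frame of a spinner entry
# from elapsed wall-clock time. "interval" is in milliseconds.
import time

spinner = {"interval": 80, "frames": "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏"}

def frame_at(start: float, now: float) -> str:
    elapsed_ms = (now - start) * 1000.0
    idx = int(elapsed_ms / spinner["interval"]) % len(spinner["frames"])
    return spinner["frames"][idx]

start = time.monotonic()
for _ in range(5):
    print(frame_at(start, time.monotonic()), end="\r")
    time.sleep(0.08)
```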
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/spinner.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/spinner.py
deleted file mode 100644
index 91ea630e10f893bf5d6b17fcd9a1fedcecee6f02..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/spinner.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from typing import cast, List, Optional, TYPE_CHECKING, Union
-
-from ._spinners import SPINNERS
-from .measure import Measurement
-from .table import Table
-from .text import Text
-
-if TYPE_CHECKING:
- from .console import Console, ConsoleOptions, RenderResult, RenderableType
- from .style import StyleType
-
-
-class Spinner:
- """A spinner animation.
-
- Args:
- name (str): Name of spinner (run python -m rich.spinner).
- text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "".
- style (StyleType, optional): Style for spinner animation. Defaults to None.
- speed (float, optional): Speed factor for animation. Defaults to 1.0.
-
- Raises:
- KeyError: If name isn't one of the supported spinner animations.
- """
-
- def __init__(
- self,
- name: str,
- text: "RenderableType" = "",
- *,
- style: Optional["StyleType"] = None,
- speed: float = 1.0,
- ) -> None:
- try:
- spinner = SPINNERS[name]
- except KeyError:
- raise KeyError(f"no spinner called {name!r}")
- self.text: "Union[RenderableType, Text]" = (
- Text.from_markup(text) if isinstance(text, str) else text
- )
- self.frames = cast(List[str], spinner["frames"])[:]
- self.interval = cast(float, spinner["interval"])
- self.start_time: Optional[float] = None
- self.style = style
- self.speed = speed
- self.frame_no_offset: float = 0.0
- self._update_speed = 0.0
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- yield self.render(console.get_time())
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> Measurement:
- text = self.render(0)
- return Measurement.get(console, options, text)
-
- def render(self, time: float) -> "RenderableType":
- """Render the spinner for a given time.
-
- Args:
- time (float): Time in seconds.
-
- Returns:
- RenderableType: A renderable containing animation frame.
- """
- if self.start_time is None:
- self.start_time = time
-
- frame_no = ((time - self.start_time) * self.speed) / (
- self.interval / 1000.0
- ) + self.frame_no_offset
- frame = Text(
- self.frames[int(frame_no) % len(self.frames)], style=self.style or ""
- )
-
- if self._update_speed:
- self.frame_no_offset = frame_no
- self.start_time = time
- self.speed = self._update_speed
- self._update_speed = 0.0
-
- if not self.text:
- return frame
- elif isinstance(self.text, (str, Text)):
- return Text.assemble(frame, " ", self.text)
- else:
- table = Table.grid(padding=1)
- table.add_row(frame, self.text)
- return table
-
- def update(
- self,
- *,
- text: "RenderableType" = "",
- style: Optional["StyleType"] = None,
- speed: Optional[float] = None,
- ) -> None:
- """Updates attributes of a spinner after it has been started.
-
- Args:
- text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "".
- style (StyleType, optional): Style for spinner animation. Defaults to None.
- speed (float, optional): Speed factor for animation. Defaults to None.
- """
- if text:
- self.text = Text.from_markup(text) if isinstance(text, str) else text
- if style:
- self.style = style
- if speed:
- self._update_speed = speed
-
-
-if __name__ == "__main__": # pragma: no cover
- from time import sleep
-
- from .columns import Columns
- from .panel import Panel
- from .live import Live
-
- all_spinners = Columns(
- [
- Spinner(spinner_name, text=Text(repr(spinner_name), style="green"))
- for spinner_name in sorted(SPINNERS.keys())
- ],
- column_first=True,
- expand=True,
- )
-
- with Live(
- Panel(all_spinners, title="Spinners", border_style="blue"),
- refresh_per_second=20,
- ) as live:
- while True:
- sleep(0.1)
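The `Spinner` renderable above is what backs rich's status display. A minimal usage sketch, assuming the standalone `rich` package is installed (the copy vendored inside pip is not intended to be imported directly):

```python
# Minimal usage sketch: show a named spinner while work is in progress.
import time
from rich.console import Console

console = Console()
with console.status("Fetching packages...", spinner="bouncingBar"):
    time.sleep(2)  # stand-in for real work
console.print("done")
```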
diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/__init__.py b/spaces/Boadiwaa/Recipes/openai/api_resources/__init__.py
deleted file mode 100644
index 1c08ef3b57b03c12605c861716de70d7d2fa5a9f..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/api_resources/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from openai.api_resources.answer import Answer # noqa: F401
-from openai.api_resources.classification import Classification # noqa: F401
-from openai.api_resources.completion import Completion # noqa: F401
-from openai.api_resources.customer import Customer # noqa: F401
-from openai.api_resources.edit import Edit # noqa: F401
-from openai.api_resources.deployment import Deployment # noqa: F401
-from openai.api_resources.embedding import Embedding # noqa: F401
-from openai.api_resources.engine import Engine # noqa: F401
-from openai.api_resources.error_object import ErrorObject # noqa: F401
-from openai.api_resources.file import File # noqa: F401
-from openai.api_resources.fine_tune import FineTune # noqa: F401
-from openai.api_resources.model import Model # noqa: F401
-from openai.api_resources.search import Search # noqa: F401
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/ctanhf.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/ctanhf.h
deleted file mode 100644
index f6923d1df6d723092fc7522dd197bb66fa7f3fa4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/ctanhf.h
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- * Copyright 2013 Filipe RNC Maia
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*-
- * Copyright (c) 2011 David Schultz
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice unmodified, this list of conditions, and the following
- * disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Adapted from FreeBSD by Filipe Maia, filipe.c.maia@gmail.com:
- * freebsd/lib/msun/src/s_ctanhf.c
- */
-
-/*
- * Hyperbolic tangent of a complex argument z. See ctanh.c for details.
- */
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust{
-namespace detail{
-namespace complex{
-
-using thrust::complex;
-
-__host__ __device__ inline
-complex<float> ctanhf(const complex<float>& z){
- float x, y;
- float t, beta, s, rho, denom;
- uint32_t hx, ix;
-
- x = z.real();
- y = z.imag();
-
- get_float_word(hx, x);
- ix = hx & 0x7fffffff;
-
- if (ix >= 0x7f800000) {
- if (ix & 0x7fffff)
-      return (complex<float>(x, (y == 0.0f ? y : x * y)));
- set_float_word(x, hx - 0x40000000);
-    return (complex<float>(x,
- copysignf(0, isinf(y) ? y : sinf(y) * cosf(y))));
- }
-
- if (!isfinite(y))
-    return (complex<float>(y - y, y - y));
-
- if (ix >= 0x41300000) { /* x >= 11 */
- float exp_mx = expf(-fabsf(x));
-    return (complex<float>(copysignf(1.0f, x),
- 4.0f * sinf(y) * cosf(y) * exp_mx * exp_mx));
- }
-
- t = tanf(y);
- beta = 1.0f + t * t;
- s = sinhf(x);
- rho = sqrtf(1.0f + s * s);
- denom = 1.0f + beta * s * s;
-  return (complex<float>((beta * rho * s) / denom, t / denom));
-}
-
- __host__ __device__ inline
- complex<float> ctanf(complex<float> z){
-    z = ctanhf(complex<float>(-z.imag(), z.real()));
-    return (complex<float>(z.imag(), -z.real()));
- }
-
-} // namespace complex
-
-} // namespace detail
-
-template <>
-__host__ __device__
-inline complex<float> tan(const complex<float>& z){
- return detail::complex::ctanf(z);
-}
-
-template <>
-__host__ __device__
-inline complex<float> tanh(const complex<float>& z){
- return detail::complex::ctanhf(z);
-}
-
-} // namespace thrust
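The kernel above evaluates tanh(x+iy) through t = tan(y), beta = 1 + t^2, s = sinh(x), rho = cosh(x) and denom = 1 + beta*s^2. A quick Python cross-check of that formula against the standard library, for illustration only (not part of Thrust):

```python
# Cross-check of the finite-input branch above: tanh(x+iy) equals
# (beta*rho*s + i*t) / (1 + beta*s^2) with t=tan(y), beta=1+t^2,
# s=sinh(x), rho=sqrt(1+s^2)=cosh(x).
import cmath
import math

def ctanh_ref(x: float, y: float) -> complex:
    t = math.tan(y)
    beta = 1.0 + t * t
    s = math.sinh(x)
    rho = math.sqrt(1.0 + s * s)
    denom = 1.0 + beta * s * s
    return complex(beta * rho * s / denom, t / denom)

z = complex(0.3, 0.7)
print(ctanh_ref(z.real, z.imag), cmath.tanh(z))  # the two values agree
```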
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config.h
deleted file mode 100644
index 5a5573a410e6ee8ec7b062ee4bd330390fb37e9b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/config.h
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-/*! \file config.h
- * \brief Defines platform configuration.
- */
-
-#pragma once
-
-#include
-#include
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/partition.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/partition.h
deleted file mode 100644
index 80323535c9b0492af8411ad5c23f5edee1a0c906..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/partition.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-  ForwardIterator stable_partition(execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-  ForwardIterator stable_partition(execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred);
-
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-} // end namespace detail
-} // end namespace tbb
-} // end namespace system
-} // end namespace thrust
-
-#include
-
diff --git a/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_nyu-checkpoint.py b/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_nyu-checkpoint.py
deleted file mode 100644
index e9e3b3718999248efa1b2925658465ba59801b13..0000000000000000000000000000000000000000
--- a/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_nyu-checkpoint.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# encoding: utf-8
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-from monoscene.CRP3D import CPMegaVoxels
-from monoscene.modules import (
- Process,
- Upsample,
- Downsample,
- SegmentationHead,
- ASPP,
-)
-
-
-class UNet3D(nn.Module):
- def __init__(
- self,
- class_num,
- norm_layer,
- feature,
- full_scene_size,
- n_relations=4,
- project_res=[],
- context_prior=True,
- bn_momentum=0.1,
- ):
- super(UNet3D, self).__init__()
- self.business_layer = []
- self.project_res = project_res
-
- self.feature_1_4 = feature
- self.feature_1_8 = feature * 2
- self.feature_1_16 = feature * 4
-
- self.feature_1_16_dec = self.feature_1_16
- self.feature_1_8_dec = self.feature_1_8
- self.feature_1_4_dec = self.feature_1_4
-
- self.process_1_4 = nn.Sequential(
- Process(self.feature_1_4, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature_1_4, norm_layer, bn_momentum),
- )
- self.process_1_8 = nn.Sequential(
- Process(self.feature_1_8, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature_1_8, norm_layer, bn_momentum),
- )
- self.up_1_16_1_8 = Upsample(
- self.feature_1_16_dec, self.feature_1_8_dec, norm_layer, bn_momentum
- )
- self.up_1_8_1_4 = Upsample(
- self.feature_1_8_dec, self.feature_1_4_dec, norm_layer, bn_momentum
- )
- self.ssc_head_1_4 = SegmentationHead(
- self.feature_1_4_dec, self.feature_1_4_dec, class_num, [1, 2, 3]
- )
-
- self.context_prior = context_prior
- size_1_16 = tuple(np.ceil(i / 4).astype(int) for i in full_scene_size)
-
- if context_prior:
- self.CP_mega_voxels = CPMegaVoxels(
- self.feature_1_16,
- size_1_16,
- n_relations=n_relations,
- bn_momentum=bn_momentum,
- )
-
- #
- def forward(self, input_dict):
- res = {}
-
- x3d_1_4 = input_dict["x3d"]
- x3d_1_8 = self.process_1_4(x3d_1_4)
- x3d_1_16 = self.process_1_8(x3d_1_8)
-
- if self.context_prior:
- ret = self.CP_mega_voxels(x3d_1_16)
- x3d_1_16 = ret["x"]
- for k in ret.keys():
- res[k] = ret[k]
-
- x3d_up_1_8 = self.up_1_16_1_8(x3d_1_16) + x3d_1_8
- x3d_up_1_4 = self.up_1_8_1_4(x3d_up_1_8) + x3d_1_4
-
- ssc_logit_1_4 = self.ssc_head_1_4(x3d_up_1_4)
-
- res["ssc_logit"] = ssc_logit_1_4
-
- return res
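The UNet3D above encodes the 1/4-scale volume down to 1/8 and 1/16 resolution, optionally applies the context-prior module, then decodes back up with additive skip connections before the segmentation head. A toy shape walk-through of that skeleton, with plain conv blocks standing in for the repository's Process/Downsample/Upsample/SegmentationHead modules:

```python
# Toy sketch of the encoder/decoder skeleton above; plain conv layers replace
# the project's modules, and the channel/class counts are illustrative.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    def __init__(self, c=8, n_classes=12):
        super().__init__()
        self.down1 = nn.Conv3d(c, 2 * c, 3, stride=2, padding=1)      # 1/4 -> 1/8
        self.down2 = nn.Conv3d(2 * c, 4 * c, 3, stride=2, padding=1)  # 1/8 -> 1/16
        self.up1 = nn.ConvTranspose3d(4 * c, 2 * c, 2, stride=2)      # 1/16 -> 1/8
        self.up2 = nn.ConvTranspose3d(2 * c, c, 2, stride=2)          # 1/8 -> 1/4
        self.head = nn.Conv3d(c, n_classes, 1)

    def forward(self, x_1_4):
        x_1_8 = self.down1(x_1_4)
        x_1_16 = self.down2(x_1_8)
        x_up_1_8 = self.up1(x_1_16) + x_1_8   # additive skip connection
        x_up_1_4 = self.up2(x_up_1_8) + x_1_4
        return self.head(x_up_1_4)

logits = TinyUNet3D()(torch.randn(1, 8, 16, 16, 16))
print(logits.shape)  # torch.Size([1, 12, 16, 16, 16])
```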
diff --git a/spaces/CVPR/Text2Human/Text2Human/README.md b/spaces/CVPR/Text2Human/Text2Human/README.md
deleted file mode 100644
index 8e93c456558086ba888d76351ae194973f32dd20..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/README.md
+++ /dev/null
@@ -1,255 +0,0 @@
-# Text2Human - Official PyTorch Implementation
-
-
-
-This repository provides the official PyTorch implementation for the following paper:
-
-**Text2Human: Text-Driven Controllable Human Image Generation**
-[Yuming Jiang](https://yumingj.github.io/), [Shuai Yang](https://williamyang1991.github.io/), [Haonan Qiu](http://haonanqiu.com/), [Wayne Wu](https://dblp.org/pid/50/8731.html), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/) and [Ziwei Liu](https://liuziwei7.github.io/)
-In ACM Transactions on Graphics (Proceedings of SIGGRAPH), 2022.
-
-From [MMLab@NTU](https://www.mmlab-ntu.com/index.html), affiliated with S-Lab, Nanyang Technological University, and SenseTime Research.
-
-
-
-
-
-
-
-
-
- The lady wears a short-sleeve T-shirt with pure color pattern, and a short and denim skirt.
- The man wears a long and floral shirt, and long pants with the pure color pattern.
- A lady is wearing a sleeveless pure-color shirt and long jeans
- The man wears a short-sleeve T-shirt with the pure color pattern and a short pants with the pure color pattern.
-
-
-
-[**[Project Page]**](https://yumingj.github.io/projects/Text2Human.html) | [**[Paper]**](https://arxiv.org/pdf/2205.15996.pdf) | [**[Dataset]**](https://github.com/yumingj/DeepFashion-MultiModal) | [**[Demo Video]**](https://youtu.be/yKh4VORA_E0)
-
-
-## Updates
-
-- [05/2022] Paper and demo video are released.
-- [05/2022] Code is released.
-- [05/2022] This website is created.
-
-## Installation
-**Clone this repo:**
-```bash
-git clone https://github.com/yumingj/Text2Human.git
-cd Text2Human
-```
-**Dependencies:**
-
-All dependencies for defining the environment are provided in `environment/text2human_env.yaml`.
-We recommend using [Anaconda](https://docs.anaconda.com/anaconda/install/) to manage the python environment:
-```bash
-conda env create -f ./environment/text2human_env.yaml
-conda activate text2human
-conda install -c huggingface tokenizers=0.9.4
-conda install -c huggingface transformers=4.0.0
-conda install -c conda-forge sentence-transformers=2.0.0
-```
-
-If it doesn't work, you may need to install the following packages on your own:
- - Python 3.6
- - PyTorch 1.7.1
- - CUDA 10.1
- - [sentence-transformers](https://huggingface.co/sentence-transformers) 2.0.0
- - [tokenizers](https://pypi.org/project/tokenizers/) 0.9.4
- - [transformers](https://huggingface.co/docs/transformers/installation) 4.0.0
-
-## (1) Dataset Preparation
-
-In this work, we contribute a large-scale high-quality dataset with rich multi-modal annotations named [DeepFashion-MultiModal](https://github.com/yumingj/DeepFashion-MultiModal) Dataset.
-Here we pre-processed the raw annotations of the original dataset for the task of text-driven controllable human image generation. The pre-processing pipeline consists of:
- - align the human body in the center of the images according to the human pose
- - fuse the clothing color and clothing fabric annotations into one texture annotation
- - do some annotation cleaning and image filtering
- - split the whole dataset into the training set and testing set
-
-You can download our processed dataset from this [Google Drive](https://drive.google.com/file/d/1KIoFfRZNQVn6RV_wTxG2wZmY8f2T_84B/view?usp=sharing). If you want to access the raw annotations, please refer to the [DeepFashion-MultiModal](https://github.com/yumingj/DeepFashion-MultiModal) Dataset.
-
-After downloading the dataset, unzip the file and put them under the dataset folder with the following structure:
-```
-./datasets
-├── train_images
- ├── xxx.png
- ...
- ├── xxx.png
- └── xxx.png
-├── test_images
- % the same structure as in train_images
-├── densepose
- % the same structure as in train_images
-├── segm
- % the same structure as in train_images
-├── shape_ann
- ├── test_ann_file.txt
- ├── train_ann_file.txt
- └── val_ann_file.txt
-└── texture_ann
- ├── test
- ├── lower_fused.txt
- ├── outer_fused.txt
- └── upper_fused.txt
- ├── train
- % the same files as in test
- └── val
- % the same files as in test
-```
-
-## (2) Sampling
-
-### Inference Notebook
-
-Coming soon.
-
-
-### Pretrained Models
-
-Pretrained models can be downloaded from this [Google Drive](https://drive.google.com/file/d/1VyI8_AbPwAUaZJPaPba8zxsFIWumlDen/view?usp=sharing). Unzip the file and put them under the dataset folder with the following structure:
-```
-pretrained_models
-├── index_pred_net.pth
-├── parsing_gen.pth
-├── parsing_token.pth
-├── sampler.pth
-├── vqvae_bottom.pth
-└── vqvae_top.pth
-```
-
-### Generation from Parsing Maps
-You can generate images from given parsing maps and pre-defined texture annotations:
-```python
-python sample_from_parsing.py -opt ./configs/sample_from_parsing.yml
-```
-The results are saved in the folder `./results/sampling_from_parsing`.
-
-### Generation from Poses
-You can generate images from given human poses and pre-defined clothing shape and texture annotations:
-```python
-python sample_from_pose.py -opt ./configs/sample_from_pose.yml
-```
-
-**Remarks**: The above two scripts generate images without language interactions. If you want to generate images using texts, you can use the notebook or our user interface.
-
-### User Interface
-
-```python
-python ui_demo.py
-```
-
-
-The descriptions for shapes should follow the following format:
-```
-<gender>, <upper clothing description>, <lower clothing description>, <outer clothing type>, <accessories>, ...
-
-Note: The outer clothing type and accessories can be omitted.
-
-Examples:
-man, sleeveless T-shirt, long pants
-woman, short-sleeve T-shirt, short jeans
-```
-
-The descriptions for textures should follow the following format:
-```
-<upper clothing texture>, <lower clothing texture>, <outer clothing texture>
-
-Note: Currently, we only support 5 types of textures, i.e., pure color, stripe/spline, plaid/lattice,
- floral, denim. Your inputs should be restricted to these textures.
-```
-
-## (3) Training Text2Human
-
-### Stage I: Pose to Parsing
-Train the parsing generation network. If you want to skip the training of this network, you can download our pretrained model from [here](https://drive.google.com/file/d/1MNyFLGqIQcOMg_HhgwCmKqdwfQSjeg_6/view?usp=sharing).
-```python
-python train_parsing_gen.py -opt ./configs/parsing_gen.yml
-```
-
-### Stage II: Parsing to Human
-
-**Step 1: Train the top level of the hierarchical VQVAE.**
-We provide our pretrained model [here](https://drive.google.com/file/d/1TwypUg85gPFJtMwBLUjVS66FKR3oaTz8/view?usp=sharing). This model is trained by:
-```python
-python train_vqvae.py -opt ./configs/vqvae_top.yml
-```
-
-**Step 2: Train the bottom level of the hierarchical VQVAE.**
-We provide our pretrained model [here](https://drive.google.com/file/d/15hzbY-RG-ILgzUqqGC0qMzlS4OayPdRH/view?usp=sharing). This model is trained by:
-```python
-python train_vqvae.py -opt ./configs/vqvae_bottom.yml
-```
-
-**Stage 3 & 4: Train the sampler with mixture-of-experts.** To train the sampler, we first need to train a model to tokenize the parsing maps. You can access our pretrained parsing tokenization model [here](https://drive.google.com/file/d/1GLHoOeCP6sMao1-R63ahJMJF7-J00uir/view?usp=sharing).
-```python
-python train_parsing_token.py -opt ./configs/parsing_token.yml
-```
-
-With the parsing tokenization model, the sampler is trained by:
-```python
-python train_sampler.py -opt ./configs/sampler.yml
-```
-Our pretrained sampler is provided [here](https://drive.google.com/file/d/1OQO_kG2fK7eKiG1VJH1OL782X71UQAmS/view?usp=sharing).
-
-**Stage 5: Train the index prediction network.**
-We provide our pretrained index prediction network [here](https://drive.google.com/file/d/1rqhkQD-JGd7YBeIfDvMV-vjfbNHpIhYm/view?usp=sharing). It is trained by:
-```python
-python train_index_prediction.py -opt ./configs/index_pred_net.yml
-```
-
-
-**Remarks**: In the config files, the paths to our pretrained models are used wherever a pretrained model is required. If you want to train the models from scratch, replace these paths with your own. The numbers of training epochs are set to large values so that you can pick the best epoch for each model. For reference, our pretrained parsing generation network is trained for 50 epochs, the top-level VQVAE for 135 epochs, the bottom-level VQVAE for 70 epochs, the parsing tokenization network for 20 epochs, the sampler for 95 epochs, and the index prediction network for 70 epochs.
-
-## (4) Results
-
-Please visit our [Project Page](https://yumingj.github.io/projects/Text2Human.html#results) to view more results.
-You can select the attributes to customize the desired human images.
-[Results overview](https://yumingj.github.io/projects/Text2Human.html#results)
-
-## DeepFashion-MultiModal Dataset
-
-
-
-In this work, we also propose **DeepFashion-MultiModal**, a large-scale high-quality human dataset with rich multi-modal annotations. It has the following properties:
-1. It contains 44,096 high-resolution human images, including 12,701 full-body human images.
-2. For each full-body image, we **manually annotate** the human parsing labels of 24 classes.
-3. For each full-body image, we **manually annotate** the keypoints.
-4. We extract DensePose for each human image.
-5. Each image is **manually annotated** with attributes for both clothes shapes and textures.
-6. We provide a textual description for each image.
-
-
-
-Please refer to [this repo](https://github.com/yumingj/DeepFashion-MultiModal) for more details about our proposed dataset.
-
-## TODO List
-
-- [ ] Release 1024x512 version of Text2Human.
-- [ ] Train the Text2Human using [SHHQ dataset](https://stylegan-human.github.io/).
-
-## Citation
-
-If you find this work useful for your research, please consider citing our paper:
-
-```bibtex
-@article{jiang2022text2human,
- title={Text2Human: Text-Driven Controllable Human Image Generation},
- author={Jiang, Yuming and Yang, Shuai and Qiu, Haonan and Wu, Wayne and Loy, Chen Change and Liu, Ziwei},
- journal={ACM Transactions on Graphics (TOG)},
- volume={41},
- number={4},
- articleno={162},
- pages={1--11},
- year={2022},
- publisher={ACM New York, NY, USA},
- doi={10.1145/3528223.3530104},
-}
-```
-
-## Acknowledgments
-
-Part of the code is borrowed from [unleashing-transformers](https://github.com/samb-t/unleashing-transformers), [taming-transformers](https://github.com/CompVis/taming-transformers) and [mmsegmentation](https://github.com/open-mmlab/mmsegmentation).
diff --git a/spaces/CVPR/lama-example/bin/paper_runfiles/generate_test_celeba-hq.sh b/spaces/CVPR/lama-example/bin/paper_runfiles/generate_test_celeba-hq.sh
deleted file mode 100644
index 7e04bba426f1c6c0528d88a0e28a5da0dde7ca3e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/paper_runfiles/generate_test_celeba-hq.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/CelebA-HQ_val_test"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in "val" "test"
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-celeba-hq \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/__init__.py
deleted file mode 100644
index 0655f96b4618d716f62290ce65e7ae82335ea61f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/__init__.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.layers import ShapeSpec
-
-from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY
-from .backbone import (
- BACKBONE_REGISTRY,
- FPN,
- Backbone,
- ResNet,
- ResNetBlockBase,
- build_backbone,
- build_resnet_backbone,
- make_stage,
-)
-from .meta_arch import (
- META_ARCH_REGISTRY,
- SEM_SEG_HEADS_REGISTRY,
- GeneralizedRCNN,
- PanopticFPN,
- ProposalNetwork,
- RetinaNet,
- SemanticSegmentor,
- build_model,
- build_sem_seg_head,
-)
-from .postprocessing import detector_postprocess
-from .proposal_generator import (
- PROPOSAL_GENERATOR_REGISTRY,
- build_proposal_generator,
- RPN_HEAD_REGISTRY,
- build_rpn_head,
-)
-from .roi_heads import (
- ROI_BOX_HEAD_REGISTRY,
- ROI_HEADS_REGISTRY,
- ROI_KEYPOINT_HEAD_REGISTRY,
- ROI_MASK_HEAD_REGISTRY,
- ROIHeads,
- StandardROIHeads,
- BaseMaskRCNNHead,
- BaseKeypointRCNNHead,
- FastRCNNOutputLayers,
- build_box_head,
- build_keypoint_head,
- build_mask_head,
- build_roi_heads,
-)
-from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA
-from .mmdet_wrapper import MMDetBackbone, MMDetDetector
-
-_EXCLUDE = {"ShapeSpec"}
-__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")]
-
-
-from detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/setup.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/setup.py
deleted file mode 100644
index 3696632660ac9f33b1603f8a225278b7b90cc881..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/setup.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from setuptools import find_packages, setup
-
-setup(
- name="SAM",
- version="1.0",
- install_requires=[],
- packages=find_packages(exclude="notebooks"),
- extras_require={
- "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"],
- "dev": ["flake8", "isort", "black", "mypy"],
- },
-)
diff --git a/spaces/ChandlerGIS/shortgpt/README.md b/spaces/ChandlerGIS/shortgpt/README.md
deleted file mode 100644
index 917c46e53c2686872d228fadf96177b906bb30fb..0000000000000000000000000000000000000000
--- a/spaces/ChandlerGIS/shortgpt/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Shortgpt
-emoji: 📚
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CjangCjengh/Shanghainese-TTS/commons.py b/spaces/CjangCjengh/Shanghainese-TTS/commons.py
deleted file mode 100644
index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Shanghainese-TTS/commons.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
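A standalone illustration of `sequence_mask`, one of the helpers above; the function is copied verbatim so the snippet runs on its own with only torch installed:

```python
# sequence_mask turns per-example lengths into a boolean mask of valid positions.
import torch

def sequence_mask(length, max_length=None):
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)

print(sequence_mask(torch.tensor([2, 4])))
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])
```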
diff --git a/spaces/Cyril666/ContourNet-ABI/det_demo.py b/spaces/Cyril666/ContourNet-ABI/det_demo.py
deleted file mode 100644
index 4334707d7df9fef132d39942891c7e6ba74bc14c..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/det_demo.py
+++ /dev/null
@@ -1,625 +0,0 @@
-import os
-import cv2
-import torch
-from torchvision import transforms as T
-import torch.nn as nn
-
-from maskrcnn_benchmark.modeling.detector import build_detection_model
-from maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer
-from maskrcnn_benchmark.structures.image_list import to_image_list
-from maskrcnn_benchmark.config import cfg
-from maskrcnn_benchmark.utils.chars import getstr_grid, get_tight_rect
-from maskrcnn_benchmark.data.datasets.evaluation.word.alfashape import getAlfaShapes
-from maskrcnn_benchmark.modeling.roi_heads.boundary_head.inference import Masker
-from shapely.geometry import *
-import random
-from torchvision.transforms import functional as F
-
-from PIL import Image
-import numpy as np
-import argparse
-
-class Resize(object):
- def __init__(self, min_size, max_size):
- if not isinstance(min_size, (list, tuple)):
- min_size = (min_size,)
- self.min_size = min_size
- self.max_size = max_size
-
- # modified from torchvision to add support for max size
- def get_size(self, image_size):
- w, h = image_size
- size = random.choice(self.min_size)
- max_size = self.max_size
- if max_size is not None:
- min_original_size = float(min((w, h)))
- max_original_size = float(max((w, h)))
- if max_original_size / min_original_size * size > max_size:
- size = int(round(max_size * min_original_size / max_original_size))
-
- if (w <= h and w == size) or (h <= w and h == size):
- return (h, w)
-
- if w < h:
- ow = size
- oh = int(size * h / w)
- else:
- oh = size
- ow = int(size * w / h)
-
- return (oh, ow)
-
- def __call__(self, image):
- size = self.get_size(image.size)
- image = F.resize(image, size)
- return image
-
-class DetDemo(object):
- def __init__(
- self,
- cfg,
- confidence_threshold=0.7,
- min_image_size=(1200,2000),
- output_polygon=True
- ):
- self.cfg = cfg.clone()
- self.model = build_detection_model(cfg)
- self.model.eval()
- self.device = torch.device(cfg.MODEL.DEVICE)
- self.model.to(self.device)
- self.min_image_size = min_image_size
-
- checkpointer = DetectronCheckpointer(cfg, self.model, save_dir=cfg.OUTPUT_DIR)
- _ = checkpointer.load(cfg.MODEL.WEIGHT)
-
- self.transforms = self.build_transform()
- self.cpu_device = torch.device("cpu")
- self.confidence_threshold = confidence_threshold
- self.output_polygon = output_polygon
-
- def build_transform(self):
- """
- Creates a basic transformation that was used to train the models
- """
- cfg = self.cfg
- # we are loading images with OpenCV, so we don't need to convert them
- # to BGR, they are already! So all we need to do is to normalize
- # by 255 if we want to convert to BGR255 format, or flip the channels
- # if we want it to be in RGB in [0-1] range.
- if cfg.INPUT.TO_BGR255:
- to_bgr_transform = T.Lambda(lambda x: x * 255)
- else:
- to_bgr_transform = T.Lambda(lambda x: x[[2, 1, 0]])
-
- normalize_transform = T.Normalize(
- mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD
- )
- min_size = cfg.INPUT.MIN_SIZE_TEST
- max_size = cfg.INPUT.MAX_SIZE_TEST
-
- transform = T.Compose(
- [
- T.ToPILImage(),
- Resize(min_size, max_size),
- T.ToTensor(),
- to_bgr_transform,
- normalize_transform,
- ]
- )
- return transform
-
- def run_on_opencv_image(self, image):
- """
- Arguments:
- image (np.ndarray): an image as returned by OpenCV
- Returns:
-            result_polygons (list): detected text contours
-            result_masks (np.ndarray): per-instance boolean masks
-            result_boxes (np.ndarray): axis-aligned bounding boxes
- """
-        result_polygons, result_masks, result_boxes = self.compute_prediction(image)
-        return result_polygons, result_masks, result_boxes
-
- def contour_to_valid(self, cnt, image_shape):
- """Convert rect to xys, i.e., eight points
- The `image_shape` is used to to make sure all points return are valid, i.e., within image area
- """
- # rect = cv2.minAreaRect(cnt)
- if len(cnt.shape) != 3:
- assert 1 < 0
- rect = cnt.reshape([cnt.shape[0], cnt.shape[2]])
- h, w = image_shape[0:2]
-
- def get_valid_x(x):
- if x < 0:
- return 0
- if x >= w:
- return w - 1
- return x
-
- def get_valid_y(y):
- if y < 0:
- return 0
- if y >= h:
- return h - 1
- return y
- for i_xy, (x, y) in enumerate(rect):
- x = get_valid_x(x)
- y = get_valid_y(y)
- rect[i_xy, :] = [x, y]
-
- points = np.reshape(rect, -1)
- return points
-
- def _nms_y(self, heat, kernel=3):
- pad = (kernel - 1) // 2
- hmax = nn.functional.max_pool2d(
- heat, (1, kernel), stride=1, padding=(0, pad))
- keep = (hmax == heat).float()
- return heat * keep
-
- def _nms_x(self, heat, kernel=3):
- pad = (kernel - 1) // 2
- hmax = nn.functional.max_pool2d(
- heat, (kernel, 1), stride=1, padding=(pad, 0))
- keep = (hmax == heat).float()
- return heat * keep
-
- def CTW_order_lr(self, map_in):
- line_out_l2r = []
- line_out_r2l = []
-
- map_in = torch.tensor(map_in)
- value, top = torch.topk(map_in, 2, dim=0)
- value = value.numpy()
- top = top.numpy()
- top_th = np.where(value[1] > 0.1)[0] # L
- # print(top_th)
- if len(top_th) == 0:
- return []
- top1 = np.sort(top, axis=0)
- for i in range(len(top_th)):
- line_out_l2r.append([top_th[i], top1[0][top_th[i]]])
- line_out_r2l.append([top_th[i], top1[1][top_th[i]]])
- line_out = line_out_l2r+line_out_r2l[::-1]
- # print(line_out)
- return line_out
-
- def CTW_order_bt(self, map_in):
- line_out_t2b = []
- line_out_b2t = []
-
- map_in = torch.tensor(map_in)
- value, top = torch.topk(map_in, 2, dim=1)
- value = value.numpy()
- top = top.numpy()
- top_th = np.where(value[:, 1] > 0.1)[0] # H
- if len(top_th) == 0:
- return []
- top1 = np.sort(top, axis=1)
- for i in range(len(top_th)):
- line_out_b2t.append([top1[top_th[i]][0], top_th[i]])
- line_out_t2b.append([top1[top_th[i]][1], top_th[i]])
- line_out = line_out_b2t[::-1] + line_out_t2b
- # print(line_out)
- return line_out
-
- def boundary_to_mask_ic(self, bo_x, bo_y):
-
- # NMS Hmap and Vmap
- Vmap = self._nms_x(bo_x, kernel=5)
- Hmap = self._nms_y(bo_y, kernel=3)
- Vmap = Vmap[0]
- Hmap = Hmap[0]
- ploys_Alfa_x = Vmap.clone().numpy()
- ploys_Alfa_y = Hmap.clone().numpy()
-
- # Threshold Hmap and Vmap
- thresh = 0.5
- ploys_Alfa_x[ploys_Alfa_x < thresh] = 0
- ploys_Alfa_x[ploys_Alfa_x >= thresh] = 1
- ploys_Alfa_y[ploys_Alfa_y < thresh] = 0
- ploys_Alfa_y[ploys_Alfa_y >= thresh] = 1
- # Output points with strong texture inforamtion in both maps
- ploys_Alfa = ploys_Alfa_x + ploys_Alfa_y
- ploys_Alfa[ploys_Alfa < 2] = 0
- ploys_Alfa[ploys_Alfa == 2] = 1
- img_draw = np.zeros([ploys_Alfa_y.shape[-1], ploys_Alfa_y.shape[-1]], dtype=np.uint8)
-
- # calculate polygon by Alpha-Shape Algorithm
- if ploys_Alfa.sum() == 0:
- return img_draw
- ploys_Alfa_inds = np.argwhere(ploys_Alfa == 1)
- zero_detect_x = ploys_Alfa_inds[:, 0] - ploys_Alfa_inds[0, 0]
- zero_detect_y = ploys_Alfa_inds[:, 1] - ploys_Alfa_inds[0, 1]
- if np.where(zero_detect_x != 0)[0].shape[0] == 0 or np.where(zero_detect_y != 0)[0].shape[0] == 0 or \
- ploys_Alfa_inds.shape[0] < 4:
- draw_line = ploys_Alfa_inds[np.newaxis, np.newaxis, :, :]
- cv2.fillPoly(img_draw, draw_line, 1)
- return img_draw
- ploys_Alfa_inds = ploys_Alfa_inds.tolist()
- ploys_Alfa_inds = [tuple(ploys_Alfa_ind) for ploys_Alfa_ind in ploys_Alfa_inds]
- lines = getAlfaShapes(ploys_Alfa_inds, alfas=[1])
- draw_line = np.array(lines)
- if len(draw_line.shape) == 4:
- if draw_line.shape[1] == 1:
- draw_line[0, 0, :, :] = draw_line[0, 0, :, ::-1]
- cv2.fillPoly(img_draw, draw_line, 1)
- else:
- i_draw = 0
- for draw_l in draw_line[0]:
- img_draw_new = np.zeros([28, 28], dtype=np.uint8)
- draw_l = draw_l[np.newaxis, np.newaxis, :, :]
- cv2.fillPoly(img_draw, np.int32(draw_l), 1)
- cv2.fillPoly(img_draw_new, np.int32(draw_l), 1)
- i_draw += 1
-
- else:
- for i, line in enumerate(lines[0]):
- draw_line = np.array(line)
- draw_line = draw_line[np.newaxis, np.newaxis, :, :]
- draw_line[0, 0, :, :] = draw_line[0, 0, :, ::-1]
- cv2.fillPoly(img_draw, draw_line, 1)
- return img_draw
-
- def boundary_to_mask_ctw(self, bo_x, bo_y, p_temp_box):
- w_half = (p_temp_box[2] - p_temp_box[0]) * .5
- h_half = (p_temp_box[3] - p_temp_box[1]) * .5
- thresh_total = 0.5
-
- if w_half >= h_half:
- # point re-scoring
- bo_x = self._nms_x(bo_x, kernel=9)
- bo_x = bo_x[0]
- bo_y = bo_y[0]
- ploys_Alfa_x = bo_x.clone().numpy()
- ploys_Alfa_y = bo_y.clone().numpy()
- thresh_x = thresh_total
- thresh_y = thresh_total
- ploys_Alfa_x_1 = bo_x.clone().numpy()
- ploys_Alfa_y_1 = bo_y.clone().numpy()
- ploys_Alfa__1 = ploys_Alfa_x_1 + ploys_Alfa_y_1
- ploys_Alfa_x[ploys_Alfa_x < thresh_x] = 0
- ploys_Alfa_x[ploys_Alfa_x >= thresh_x] = 1
- ploys_Alfa_y[ploys_Alfa_y < thresh_y] = 0
- ploys_Alfa_y[ploys_Alfa_y >= thresh_y] = 1
- ploys_Alfa = ploys_Alfa_x + ploys_Alfa_y
- ploys_Alfa[ploys_Alfa < 2] = 0
- ploys_Alfa[ploys_Alfa == 2] = 1
- ploys_Alfa *= ploys_Alfa__1
- # rebuild text region from contour points
- img_draw = np.zeros([ploys_Alfa_y.shape[-1], ploys_Alfa_y.shape[-1]], dtype=np.uint8)
- if ploys_Alfa.sum() == 0:
- return img_draw
- lines = self.CTW_order_lr(ploys_Alfa)
- else:
- bo_y = self._nms_y(bo_y,kernel=9)
- bo_x = bo_x[0]
- bo_y = bo_y[0]
- ploys_Alfa_x = bo_x.clone().numpy()
- ploys_Alfa_y = bo_y.clone().numpy()
- thresh_x = thresh_total
- thresh_y = thresh_total
- ploys_Alfa_x_1 = bo_x.clone().numpy()
- ploys_Alfa_y_1 = bo_y.clone().numpy()
- ploys_Alfa__1 = ploys_Alfa_x_1 + ploys_Alfa_y_1
- ploys_Alfa_x[ploys_Alfa_x < thresh_x] = 0
- ploys_Alfa_x[ploys_Alfa_x >= thresh_x] = 1
- ploys_Alfa_y[ploys_Alfa_y < thresh_y] = 0
- ploys_Alfa_y[ploys_Alfa_y >= thresh_y] = 1
- ploys_Alfa = ploys_Alfa_x + ploys_Alfa_y
- ploys_Alfa[ploys_Alfa < 2] = 0
- ploys_Alfa[ploys_Alfa == 2] = 1
- ploys_Alfa *= ploys_Alfa__1
- img_draw = np.zeros([ploys_Alfa_y.shape[-1], ploys_Alfa_y.shape[-1]], dtype=np.uint8)
- if ploys_Alfa.sum() == 0:
- return img_draw
- lines = self.CTW_order_bt(ploys_Alfa)
- if len(lines) <=10:
- return img_draw
- draw_line = np.array(lines)
- draw_line = draw_line[np.newaxis, np.newaxis, :, :]
- cv2.fillPoly(img_draw, draw_line, 1)
- img_draw = img_draw.astype(np.uint8)
- kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
- img_draw = cv2.morphologyEx(img_draw, cv2.MORPH_CLOSE, kernel)
- return img_draw
-
- def contour_to_xys(self, cnt, image_shape):
- """Convert rect to xys, i.e., eight points
- The `image_shape` is used to to make sure all points return are valid, i.e., within image area
- """
- rect = cv2.minAreaRect(cnt)
- h, w = image_shape[0:2]
-
- def get_valid_x(x):
- if x < 0:
- return 0
- if x >= w:
- return w - 1
- return x
-
- def get_valid_y(y):
- if y < 0:
- return 0
- if y >= h:
- return h - 1
- return y
-
- points = cv2.boxPoints(rect)
- points = np.int0(points)
- for i_xy, (x, y) in enumerate(points):
- x = get_valid_x(x)
- y = get_valid_y(y)
- points[i_xy, :] = [x, y]
- points = np.reshape(points, -1)
- return points
-
- def mask_to_roRect(self, mask, img_shape):
- ## convert mask into rotated rect
- e = mask[0, :, :]
- _, countours, hier = cv2.findContours(e.clone().numpy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) # Aarlog
- if len(countours) == 0:
- return np.zeros((1, 8))
- t_c = countours[0].copy()
- quad = self.contour_to_xys(t_c, img_shape)
- return quad
-
- def mask_to_contours(self, mask, img_shape):
- e = mask[0, :, :]
-
- countours, hier = cv2.findContours(e.clone().numpy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) # Aarlog
-
- if len(countours) == 0:
- return np.zeros((1, 8))
- t_c = countours[0].copy()
- quad = self.contour_to_valid(t_c, img_shape)
- return quad
-
- def py_cpu_pnms(self, dets, scores, thresh):
- pts = []
- for det in dets:
- pts.append([[det[i][0], det[i][1]] for i in range(len(det))])
- order = scores.argsort()[::-1]
- areas = np.zeros(scores.shape)
- order = scores.argsort()[::-1]
- inter_areas = np.zeros((scores.shape[0], scores.shape[0]))
- for il in range(len(pts)):
- poly = Polygon(pts[il])
- areas[il] = poly.area
- for jl in range(il, len(pts)):
- polyj = Polygon(pts[jl])
- try:
- inS = poly.intersection(polyj)
- except:
- print(poly, polyj)
- inter_areas[il][jl] = inS.area
- inter_areas[jl][il] = inS.area
-
- keep = []
- while order.size > 0:
- i = order[0]
- keep.append(i)
- ovr = inter_areas[i][order[1:]] / (areas[i] + areas[order[1:]] - inter_areas[i][order[1:]])
- inds = np.where(ovr <= thresh)[0]
- order = order[inds + 1]
- return keep
-
- def esd_pnms(self, esd, pnms_thresh):
- scores = []
- dets = []
- for ele in esd:
- score = ele['score']
- quad = ele['seg_rorect']
- # det = np.array([[quad[0][0], quad[0][1]], [quad[1][0], quad[1][1]],[quad[2][0], quad[2][1]],[quad[3][0], quad[3][1]]])
- det = np.array([[quad[0], quad[1]], [quad[2], quad[3]], [quad[4], quad[5]], [quad[6], quad[7]]])
- scores.append(score)
- dets.append(det)
- scores = np.array(scores)
- dets = np.array(dets)
- keep = self.py_cpu_pnms(dets, scores, pnms_thresh)
- return keep
-
- def compute_prediction(self, original_image):
- # apply pre-processing to image
- image = self.transforms(original_image)
- # convert to an ImageList, padded so that it is divisible by
- # cfg.DATALOADER.SIZE_DIVISIBILITY
- image_list = to_image_list(image, self.cfg.DATALOADER.SIZE_DIVISIBILITY)
- image_list = image_list.to(self.device)
- # compute predictions
- with torch.no_grad():
- output = self.model(image_list)
- prediction = [o.to(self.cpu_device) for o in output][0]
- #global_predictions = predictions[0]
- #char_predictions = predictions[1]
- #char_mask = char_predictions['char_mask']
- #char_boxes = char_predictions['boxes']
- #words, rec_scores = self.process_char_mask(char_mask, char_boxes)
- #seq_words = char_predictions['seq_outputs']
- #seq_scores = char_predictions['seq_scores']
-
- # reshape prediction (a BoxList) into the original image size
- image_height, image_width = original_image.shape[:-1]
- prediction = prediction.resize((image_width, image_height))
- if len(prediction) == 0:
- return
- prediction = prediction.convert("xyxy")
- boxes = prediction.bbox.tolist()
- scores = prediction.get_field("scores").tolist()
- masks_x = prediction.get_field("mask_x")
- masks_y = prediction.get_field("mask_y")
- #masks = [self.boundary_to_mask_ic(mask_x, mask_y) for
- # mask_x, mask_y in zip(masks_x, masks_y)]
- masks = [self.boundary_to_mask_ctw(mask_x, mask_y, p_temp) for
- mask_x, mask_y, p_temp in zip(masks_x, masks_y, prediction.bbox)]
- masks = torch.from_numpy(np.array(masks)[:, np.newaxis, :, :])
- # Masker is necessary only if masks haven't been already resized.
- masker = Masker(threshold=0.5, padding=1)
- if list(masks.shape[-2:]) != [image_height, image_width]:
- masks = masker(masks.expand(1, -1, -1, -1, -1), prediction)
- masks = masks[0]
-
- '''
- rects = [self.mask_to_roRect(mask, [image_height, image_width]) for mask in masks]
-
- esd = []
- for k, rect in enumerate(rects):
- if rect.all() == 0:
- continue
- else:
- esd.append(
- {
- "seg_rorect": rect.tolist(),
- "score": scores[k],
- }
- )
-
- if cfg.PROCESS.PNMS:
- pnms_thresh = cfg.PROCESS.NMS_THRESH
- keep = self.esd_pnms(esd, pnms_thresh)
- im_write = cv2.imread('./demo/1.jpg')[:, :, ::-1]
- for i in keep:
- box = esd[i]
- # print(box)
- # assert 1<0
- box = np.array(box['seg_rorect'])
- box = np.around(box).astype(np.int32)
- cv2.polylines(im_write[:, :, ::-1], [box.astype(np.int32).reshape((-1, 1, 2))], True,
- color=(0, 255, 0), thickness=2) # 0,255,255 y 0,255,0 g
- cv2.imwrite('./demo/example_results.jpg', im_write[:, :, ::-1])
-
- '''
- contours = [self.mask_to_contours(mask, [image_height, image_width]) for mask in masks]
- '''
- im_write = original_image[:, :, ::-1]
- for box in contours:
- box = np.array(box)
- box = np.around(box).astype(np.int32)
- cv2.polylines(im_write[:, :, ::-1], [box.astype(np.int32).reshape((-1, 1, 2))], True, color=(0, 255, 0), thickness=2) # 0,255,255 y 0,255,0 g
- cv2.imwrite('./demo/example_results.jpg', im_write[:, :, ::-1])
- '''
-
- return contours, np.array(masks.repeat(1,3,1,1)).astype(np.bool_).transpose(0,2,3,1), np.array(boxes).astype(int)
-
- def process_char_mask(self, char_masks, boxes, threshold=192):
- texts, rec_scores = [], []
- for index in range(char_masks.shape[0]):
- box = list(boxes[index])
- box = list(map(int, box))
- text, rec_score, _, _ = getstr_grid(char_masks[index,:,:,:].copy(), box, threshold=threshold)
- texts.append(text)
- rec_scores.append(rec_score)
- return texts, rec_scores
-
- def mask2polygon(self, mask, box, im_size, threshold=0.5, output_polygon=True):
- # mask 32*128
- image_width, image_height = im_size[1], im_size[0]
- box_h = box[3] - box[1]
- box_w = box[2] - box[0]
- cls_polys = (mask*255).astype(np.uint8)
- poly_map = np.array(Image.fromarray(cls_polys).resize((box_w, box_h)))
- poly_map = poly_map.astype(np.float32) / 255
- poly_map=cv2.GaussianBlur(poly_map,(3,3),sigmaX=3)
- ret, poly_map = cv2.threshold(poly_map,0.5,1,cv2.THRESH_BINARY)
- if output_polygon:
- SE1=cv2.getStructuringElement(cv2.MORPH_RECT,(3,3))
- poly_map = cv2.erode(poly_map,SE1)
- poly_map = cv2.dilate(poly_map,SE1);
- poly_map = cv2.morphologyEx(poly_map,cv2.MORPH_CLOSE,SE1)
- try:
- _, contours, _ = cv2.findContours((poly_map * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
- except:
- contours, _ = cv2.findContours((poly_map * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
- if len(contours)==0:
- print(contours)
- print(len(contours))
- return None
- max_area=0
- max_cnt = contours[0]
- for cnt in contours:
- area=cv2.contourArea(cnt)
- if area > max_area:
- max_area = area
- max_cnt = cnt
- perimeter = cv2.arcLength(max_cnt,True)
- epsilon = 0.01*cv2.arcLength(max_cnt,True)
- approx = cv2.approxPolyDP(max_cnt,epsilon,True)
- pts = approx.reshape((-1,2))
- pts[:,0] = pts[:,0] + box[0]
- pts[:,1] = pts[:,1] + box[1]
- polygon = list(pts.reshape((-1,)))
- polygon = list(map(int, polygon))
- if len(polygon)<6:
- return None
- else:
- SE1=cv2.getStructuringElement(cv2.MORPH_RECT,(3,3))
- poly_map = cv2.erode(poly_map,SE1)
- poly_map = cv2.dilate(poly_map,SE1);
- poly_map = cv2.morphologyEx(poly_map,cv2.MORPH_CLOSE,SE1)
- idy,idx=np.where(poly_map == 1)
- xy=np.vstack((idx,idy))
- xy=np.transpose(xy)
- hull = cv2.convexHull(xy, clockwise=True)
- #reverse order of points.
- if hull is None:
- return None
- hull=hull[::-1]
- #find minimum area bounding box.
- rect = cv2.minAreaRect(hull)
- corners = cv2.boxPoints(rect)
- corners = np.array(corners, dtype="int")
- pts = get_tight_rect(corners, box[0], box[1], image_height, image_width, 1)
- polygon = [x * 1.0 for x in pts]
- polygon = list(map(int, polygon))
- return polygon
-
- def visualization(self, image, polygons, masks, boxes, words):
- green = np.ones(image.shape).astype(np.uint8)
- green[...,0] = 0
- green[...,1] = 255
- green[...,2] = 0
- for mask, word, box in zip(masks, words, boxes):
- image[mask] = image[mask] * 0.5 + green[mask] * 0.5
- cv2.putText(image, word, (box[0], box[1]), cv2.FONT_HERSHEY_COMPLEX, 0.6, (0,0,255), 1)
- '''
- for box in boxes:
- cv2.rectangle(image,(box[0], box[1]), (box[2], box[3]), (0,0,255), 2)
- '''
- '''
- for polygon in polygons:
- pts = np.array(polygon, np.int32)
- pts = pts.reshape((-1,1,2))
- xmin = min(pts[:,0,0])
- ymin = min(pts[:,0,1])
- cv2.polylines(image,[pts],True,(0,0,255))
- #cv2.putText(image, word, (xmin, ymin), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
- '''
- return image
-
-
-def main(args):
- # update the config options with the config file
- cfg.merge_from_file(args.config_file)
- # manual override some options
- # cfg.merge_from_list(["MODEL.DEVICE", "cpu"])
-
- text_demo = TextDemo(
- cfg,
- min_image_size=(1200,2000),
- confidence_threshold=0.85,
- output_polygon=True
- )
- # load image and then run prediction
-
- image = cv2.imread(args.image_path)
- result_polygons, result_masks = text_demo.run_on_opencv_image(image)
- image = text_demo.visualization(image, result_polygons, result_masks)
- cv2.imwrite(args.visu_path, image)
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description='parameters for demo')
- parser.add_argument("--config-file", type=str, default='./configs/ctw/r50_baseline.yaml')
- parser.add_argument("--image_path", type=str, default='./det_visual/1223.jpg')
- parser.add_argument("--visu_path", type=str, default='./demo/example_results.jpg')
- args = parser.parse_args()
- main(args)
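
The mask2polygon helper above thresholds a soft mask, cleans it with morphological operations, keeps the largest contour, and simplifies it with approxPolyDP. The snippet below is a minimal, self-contained sketch of that contour-to-polygon step using only NumPy and OpenCV; the circular test mask and the 0.01-perimeter epsilon mirror the values used above, everything else is illustrative.

import cv2
import numpy as np

def mask_to_polygon(mask, epsilon_ratio=0.01):
    """Threshold a float mask, keep the largest contour, simplify it to a polygon."""
    binary = (mask > 0.5).astype(np.uint8) * 255
    # OpenCV 3 returns (image, contours, hierarchy); OpenCV 4 returns (contours, hierarchy)
    found = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    contours = found[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    epsilon = epsilon_ratio * cv2.arcLength(largest, True)
    approx = cv2.approxPolyDP(largest, epsilon, True)
    return approx.reshape(-1, 2)

# toy example: a filled circle reduces to a short polygon
mask = np.zeros((64, 64), dtype=np.float32)
cv2.circle(mask, (32, 32), 20, 1.0, -1)
print(mask_to_polygon(mask)[:5])
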
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/functional_video.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/functional_video.py
deleted file mode 100644
index 597a29315d4e1a575e7209edb0618eeaf4fc024a..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/functional_video.py
+++ /dev/null
@@ -1,121 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import warnings
-
-import torch
-
-
-def _is_tensor_video_clip(clip):
- if not torch.is_tensor(clip):
- raise TypeError("clip should be Tensor. Got %s" % type(clip))
-
- if not clip.ndimension() == 4:
- raise ValueError("clip should be 4D. Got %dD" % clip.dim())
-
- return True
-
-
-def crop(clip, i, j, h, w):
- """
- Args:
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
- """
- if len(clip.size()) != 4:
- raise ValueError("clip should be a 4D tensor")
- return clip[..., i : i + h, j : j + w]
-
-
-def resize(clip, target_size, interpolation_mode):
- if len(target_size) != 2:
- raise ValueError(
- f"target size should be tuple (height, width), instead got {target_size}"
- )
- return torch.nn.functional.interpolate(
- clip, size=target_size, mode=interpolation_mode, align_corners=False
- )
-
-
-def resized_crop(clip, i, j, h, w, size, interpolation_mode="bilinear"):
- """
- Do spatial cropping and resizing to the video clip
- Args:
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
- i (int): i in (i,j) i.e coordinates of the upper left corner.
- j (int): j in (i,j) i.e coordinates of the upper left corner.
- h (int): Height of the cropped region.
- w (int): Width of the cropped region.
- size (tuple(int, int)): height and width of resized clip
- Returns:
- clip (torch.tensor): Resized and cropped clip. Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- clip = crop(clip, i, j, h, w)
- clip = resize(clip, size, interpolation_mode)
- return clip
-
-
-def center_crop(clip, crop_size):
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- h, w = clip.size(-2), clip.size(-1)
- th, tw = crop_size
- if h < th or w < tw:
- raise ValueError("height and width must be no smaller than crop_size")
-
- i = int(round((h - th) / 2.0))
- j = int(round((w - tw) / 2.0))
- return crop(clip, i, j, th, tw)
-
-
-def to_tensor(clip):
- """
- Convert tensor data type from uint8 to float, divide value by 255.0 and
- permute the dimensions of clip tensor
- Args:
- clip (torch.tensor, dtype=torch.uint8): Size is (T, H, W, C)
- Return:
- clip (torch.tensor, dtype=torch.float): Size is (C, T, H, W)
- """
- _is_tensor_video_clip(clip)
- if not clip.dtype == torch.uint8:
- raise TypeError(
- "clip tensor should have data type uint8. Got %s" % str(clip.dtype)
- )
- return clip.float().permute(3, 0, 1, 2) / 255.0
-
-
-def normalize(clip, mean, std, inplace=False):
- """
- Args:
- clip (torch.tensor): Video clip to be normalized. Size is (C, T, H, W)
- mean (tuple): pixel RGB mean. Size is (3)
- std (tuple): pixel standard deviation. Size is (3)
- Returns:
- normalized clip (torch.tensor): Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- if not inplace:
- clip = clip.clone()
- mean = torch.as_tensor(mean, dtype=clip.dtype, device=clip.device)
- std = torch.as_tensor(std, dtype=clip.dtype, device=clip.device)
- clip.sub_(mean[:, None, None, None]).div_(std[:, None, None, None])
- return clip
-
-
-def hflip(clip):
- """
- Args:
- clip (torch.tensor): Video clip to be normalized. Size is (C, T, H, W)
- Returns:
- flipped clip (torch.tensor): Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- return clip.flip(-1)
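
The helpers above (to_tensor, resized_crop, normalize, hflip) are meant to be composed into a clip preprocessing pipeline. A minimal sketch of such a composition follows, assuming the functions defined above are in scope (e.g. imported from the local functional_video module); the frame count, crop window and mean/std values are placeholders rather than Video-LLaMA's actual settings.

import torch

def preprocess_clip(frames_uint8):
    """(T, H, W, C) uint8 frames -> normalized (C, T, h, w) float clip."""
    clip = to_tensor(frames_uint8)                                       # (C, T, H, W) in [0, 1]
    clip = resized_crop(clip, i=0, j=0, h=200, w=200, size=(224, 224))   # placeholder crop window
    clip = normalize(clip, mean=(0.48, 0.46, 0.41), std=(0.27, 0.26, 0.27))  # placeholder stats
    return hflip(clip)

dummy = torch.randint(0, 256, (8, 240, 320, 3), dtype=torch.uint8)
print(preprocess_clip(dummy).shape)  # torch.Size([3, 8, 224, 224])
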
diff --git a/spaces/DD0101/Disfluency-base/app.py b/spaces/DD0101/Disfluency-base/app.py
deleted file mode 100644
index f0b7311f32de119c1b40f7c1752cc590af47d530..0000000000000000000000000000000000000000
--- a/spaces/DD0101/Disfluency-base/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-
-import transformers
-from transformers import pipeline
-from transformers.pipelines.token_classification import TokenClassificationPipeline
-import py_vncorenlp
-
-os.system('pwd')
-os.system('sudo update-alternatives --config java')
-os.mkdir('/home/user/app/vncorenlp')
-py_vncorenlp.download_model(save_dir='/home/user/app/vncorenlp')
-rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/home/user/app/vncorenlp')
-
-# I had to make some changes to the preprocess() method because Hugging Face changed some of its attributes
-class MyPipeline(TokenClassificationPipeline):
- def preprocess(self, sentence, offset_mapping=None, **preprocess_params):
- tokenizer_params = preprocess_params.pop("tokenizer_params", {})
- truncation = True if self.tokenizer.model_max_length and self.tokenizer.model_max_length > 0 else False
- inputs = self.tokenizer(
- sentence,
- return_tensors=self.framework,
- truncation=truncation,
- return_special_tokens_mask=True,
- return_offsets_mapping=self.tokenizer.is_fast,
- **tokenizer_params,
- )
- inputs.pop("overflow_to_sample_mapping", None)
- num_chunks = len(inputs["input_ids"])
-
- # Override preprocess method with these offset_mapping lines
- length = len(inputs['input_ids'][0]) - 2
- tokens = self.tokenizer.tokenize(sentence)
- seek = 0
- offset_mapping_list = [[(0, 0)]]
- for i in range(length):
- if tokens[i][-2:] == '@@':
- offset_mapping_list[0].append((seek, seek + len(tokens[i]) - 2))
- seek += len(tokens[i]) - 2
- else:
- offset_mapping_list[0].append((seek, seek + len(tokens[i])))
- seek += len(tokens[i]) + 1
- offset_mapping_list[0].append((0, 0))
-
- for i in range(num_chunks):
- if self.framework == "tf":
- model_inputs = {k: tf.expand_dims(v[i], 0) for k, v in inputs.items()}
- else:
- model_inputs = {k: v[i].unsqueeze(0) for k, v in inputs.items()}
-
- model_inputs['offset_mapping'] = offset_mapping_list
- model_inputs["sentence"] = sentence if i == 0 else None
- model_inputs["is_last"] = i == num_chunks - 1
-
- yield model_inputs
-
-model_checkpoint = "DD0101/disfluency-large"
-
-my_classifier = pipeline(
- "token-classification", model=model_checkpoint, aggregation_strategy="simple", pipeline_class=MyPipeline)
-
-
-import gradio as gr
-
-def ner(text):
- text = " ".join(rdrsegmenter.word_segment(text))
-
- output = my_classifier(text)
- for entity in output:
- entity['entity'] = entity.pop('entity_group')
-
- return {'text': text, 'entities': output}, text
-
-examples = ['Tôi cần thuê à tôi muốn bay một chuyến khứ hồi từ Đà Nẵng đến Đà Lạt',
- 'Giá vé một chiều à không khứ hồi từ Đà Nẵng đến Vinh dưới 2 triệu đồng giá vé khứ hồi từ Quy Nhơn đến Vinh dưới 3 triệu đồng giá vé khứ hồi từ Buôn Ma Thuột đến Quy Nhơn à đến Vinh dưới 4 triệu rưỡi',
- 'Cho tôi biết các chuyến bay đến Đà Nẵng vào ngày 12 mà không ngày 14 tháng sáu',
- 'Những chuyến bay nào khởi hành từ Thành phố Hồ Chí Minh bay đến Frankfurt mà nối chuyến ở Singapore và hạ cánh trước 10 giờ ý tôi là 9 giờ tối'
-]
-
-demo = gr.Interface(ner,
- gr.Textbox(label='Text', placeholder="Enter sentence here..."),
- outputs=[gr.HighlightedText(label='Highlighted Output'), gr.Textbox(label='Word-Segmentation Preprocessing')],
- examples=examples,
- title="Disfluency Detection",
- description="This is an easy-to-use built in Gradio for desmontrating a NER System that identifies disfluency-entities in \
- Vietnamese utterances",
- theme=gr.themes.Soft())
-
-demo.launch()
-
-
-
-
-
-
-
-
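
The overridden preprocess() above rebuilds offset_mapping by hand because the BPE tokenizer marks non-final subword pieces with a trailing '@@'. The standalone sketch below reproduces that bookkeeping (subtract the two '@@' characters, add one for the following space) on a plain token list so it can be checked in isolation; it is illustrative and not imported from the app.

def compute_offsets(tokens):
    """Map BPE tokens with '@@' continuation markers back to character offsets."""
    offsets = [(0, 0)]              # placeholder for the BOS special token
    seek = 0
    for tok in tokens:
        if tok.endswith("@@"):      # continuation piece: drop the marker, no space follows
            offsets.append((seek, seek + len(tok) - 2))
            seek += len(tok) - 2
        else:                       # word-final piece: a space follows in the original text
            offsets.append((seek, seek + len(tok)))
            seek += len(tok) + 1
    offsets.append((0, 0))          # placeholder for the EOS special token
    return offsets

print(compute_offsets(["kh@@", "ứ", "hồi"]))
# [(0, 0), (0, 2), (2, 3), (4, 7), (0, 0)]
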
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/svg.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/svg.py
deleted file mode 100644
index f6d74a4002b534810b534bc5e860af251d42d4ae..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/svg.py
+++ /dev/null
@@ -1,251 +0,0 @@
-from __future__ import annotations
-
-import re
-from functools import lru_cache
-from itertools import chain, count
-from typing import Dict, Iterable, Iterator, List, Optional, Set, Tuple
-
-try:
- from lxml import etree
-except ImportError:
- # lxml is required for subsetting SVG, but we prefer to delay the import error
- # until subset_glyphs() is called (i.e. if font to subset has an 'SVG ' table)
- etree = None
-
-from fontTools import ttLib
-from fontTools.subset.util import _add_method
-from fontTools.ttLib.tables.S_V_G_ import SVGDocument
-
-
-__all__ = ["subset_glyphs"]
-
-
-GID_RE = re.compile(r"^glyph(\d+)$")
-
-NAMESPACES = {
- "svg": "http://www.w3.org/2000/svg",
- "xlink": "http://www.w3.org/1999/xlink",
-}
-XLINK_HREF = f'{{{NAMESPACES["xlink"]}}}href'
-
-
-# TODO(antrotype): Replace with functools.cache once we are 3.9+
-@lru_cache(maxsize=None)
-def xpath(path):
- # compile XPath upfront, caching result to reuse on multiple elements
- return etree.XPath(path, namespaces=NAMESPACES)
-
-
-def group_elements_by_id(tree: etree.Element) -> Dict[str, etree.Element]:
- # select all svg elements with 'id' attribute no matter where they are
- # including the root element itself:
- # https://github.com/fonttools/fonttools/issues/2548
- return {el.attrib["id"]: el for el in xpath("//svg:*[@id]")(tree)}
-
-
-def parse_css_declarations(style_attr: str) -> Dict[str, str]:
- # https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/style
- # https://developer.mozilla.org/en-US/docs/Web/CSS/Syntax#css_declarations
- result = {}
- for declaration in style_attr.split(";"):
- if declaration.count(":") == 1:
- property_name, value = declaration.split(":")
- property_name = property_name.strip()
- result[property_name] = value.strip()
- elif declaration.strip():
- raise ValueError(f"Invalid CSS declaration syntax: {declaration}")
- return result
-
-
-def iter_referenced_ids(tree: etree.Element) -> Iterator[str]:
- # Yield all the ids that can be reached via references from this element tree.
- # We currently support xlink:href (as used by <use> and gradient templates),
- # and local url(#...) links found in fill or clip-path attributes
- # TODO(anthrotype): Check we aren't missing other supported kinds of reference
- find_svg_elements_with_references = xpath(
- ".//svg:*[ "
- "starts-with(@xlink:href, '#') "
- "or starts-with(@fill, 'url(#') "
- "or starts-with(@clip-path, 'url(#') "
- "or contains(@style, ':url(#') "
- "]",
- )
- for el in chain([tree], find_svg_elements_with_references(tree)):
- ref_id = href_local_target(el)
- if ref_id is not None:
- yield ref_id
-
- attrs = el.attrib
- if "style" in attrs:
- attrs = {**dict(attrs), **parse_css_declarations(el.attrib["style"])}
- for attr in ("fill", "clip-path"):
- if attr in attrs:
- value = attrs[attr]
- if value.startswith("url(#") and value.endswith(")"):
- ref_id = value[5:-1]
- assert ref_id
- yield ref_id
-
-
-def closure_element_ids(
- elements: Dict[str, etree.Element], element_ids: Set[str]
-) -> None:
- # Expand the initial subset of element ids to include ids that can be reached
- # via references from the initial set.
- unvisited = element_ids
- while unvisited:
- referenced: Set[str] = set()
- for el_id in unvisited:
- if el_id not in elements:
- # ignore dangling reference; not our job to validate svg
- continue
- referenced.update(iter_referenced_ids(elements[el_id]))
- referenced -= element_ids
- element_ids.update(referenced)
- unvisited = referenced
-
-
-def subset_elements(el: etree.Element, retained_ids: Set[str]) -> bool:
- # Keep elements if their id is in the subset, or any of their children's id is.
- # Drop elements whose id is not in the subset, and either have no children,
- # or all their children are being dropped.
- if el.attrib.get("id") in retained_ids:
- # if id is in the set, don't recurse; keep whole subtree
- return True
- # recursively subset all the children; we use a list comprehension instead
- # of a parentheses-less generator expression because we don't want any() to
- # short-circuit, as our function has a side effect of dropping empty elements.
- if any([subset_elements(e, retained_ids) for e in el]):
- return True
- assert len(el) == 0
- parent = el.getparent()
- if parent is not None:
- parent.remove(el)
- return False
-
-
-def remap_glyph_ids(
- svg: etree.Element, glyph_index_map: Dict[int, int]
-) -> Dict[str, str]:
- # Given {old_gid: new_gid} map, rename all elements containing id="glyph{gid}"
- # special attributes
- elements = group_elements_by_id(svg)
- id_map = {}
- for el_id, el in elements.items():
- m = GID_RE.match(el_id)
- if not m:
- continue
- old_index = int(m.group(1))
- new_index = glyph_index_map.get(old_index)
- if new_index is not None:
- if old_index == new_index:
- continue
- new_id = f"glyph{new_index}"
- else:
- # If the old index is missing, the element corresponds to a glyph that was
- # excluded from the font's subset.
- # We rename it to avoid clashes with the new GIDs or other element ids.
- new_id = f".{el_id}"
- n = count(1)
- while new_id in elements:
- new_id = f"{new_id}.{next(n)}"
-
- id_map[el_id] = new_id
- el.attrib["id"] = new_id
-
- return id_map
-
-
-def href_local_target(el: etree.Element) -> Optional[str]:
- if XLINK_HREF in el.attrib:
- href = el.attrib[XLINK_HREF]
- if href.startswith("#") and len(href) > 1:
- return href[1:] # drop the leading #
- return None
-
-
-def update_glyph_href_links(svg: etree.Element, id_map: Dict[str, str]) -> None:
- # update all xlink:href="#glyph..." attributes to point to the new glyph ids
- for el in xpath(".//svg:*[starts-with(@xlink:href, '#glyph')]")(svg):
- old_id = href_local_target(el)
- assert old_id is not None
- if old_id in id_map:
- new_id = id_map[old_id]
- el.attrib[XLINK_HREF] = f"#{new_id}"
-
-
-def ranges(ints: Iterable[int]) -> Iterator[Tuple[int, int]]:
- # Yield sorted, non-overlapping (min, max) ranges of consecutive integers
- sorted_ints = iter(sorted(set(ints)))
- try:
- start = end = next(sorted_ints)
- except StopIteration:
- return
- for v in sorted_ints:
- if v - 1 == end:
- end = v
- else:
- yield (start, end)
- start = end = v
- yield (start, end)
-
-
-@_add_method(ttLib.getTableClass("SVG "))
-def subset_glyphs(self, s) -> bool:
- if etree is None:
- raise ImportError("No module named 'lxml', required to subset SVG")
-
- # glyph names (before subsetting)
- glyph_order: List[str] = s.orig_glyph_order
- # map from glyph names to original glyph indices
- rev_orig_glyph_map: Dict[str, int] = s.reverseOrigGlyphMap
- # map from original to new glyph indices (after subsetting)
- glyph_index_map: Dict[int, int] = s.glyph_index_map
-
- new_docs: List[SVGDocument] = []
- for doc in self.docList:
-
- glyphs = {
- glyph_order[i] for i in range(doc.startGlyphID, doc.endGlyphID + 1)
- }.intersection(s.glyphs)
- if not glyphs:
- # no intersection: we can drop the whole record
- continue
-
- svg = etree.fromstring(
- # encode because fromstring dislikes xml encoding decl if input is str.
- # SVG xml encoding must be utf-8 as per OT spec.
- doc.data.encode("utf-8"),
- parser=etree.XMLParser(
- # Disable libxml2 security restrictions to support very deep trees.
- # Without this we would get an error like this:
- # `lxml.etree.XMLSyntaxError: internal error: Huge input lookup`
- # when parsing big fonts e.g. noto-emoji-picosvg.ttf.
- huge_tree=True,
- # ignore blank text as it's not meaningful in OT-SVG; it also prevents
- # dangling tail text after removing an element when pretty_print=True
- remove_blank_text=True,
- ),
- )
-
- elements = group_elements_by_id(svg)
- gids = {rev_orig_glyph_map[g] for g in glyphs}
- element_ids = {f"glyph{i}" for i in gids}
- closure_element_ids(elements, element_ids)
-
- if not subset_elements(svg, element_ids):
- continue
-
- if not s.options.retain_gids:
- id_map = remap_glyph_ids(svg, glyph_index_map)
- update_glyph_href_links(svg, id_map)
-
- new_doc = etree.tostring(svg, pretty_print=s.options.pretty_svg).decode("utf-8")
-
- new_gids = (glyph_index_map[i] for i in gids)
- for start, end in ranges(new_gids):
- new_docs.append(SVGDocument(new_doc, start, end, doc.compressed))
-
- self.docList = new_docs
-
- return bool(self.docList)
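
Two of the helpers above are pure functions that are easy to sanity-check on their own. The example below shows the expected behaviour of parse_css_declarations() and ranges(); it assumes a fontTools release that ships this module, otherwise the same calls can be run directly after the definitions above.

from fontTools.subset.svg import parse_css_declarations, ranges

# style attributes are split into individual property/value pairs
print(parse_css_declarations("fill:url(#grad1); stroke: none"))
# {'fill': 'url(#grad1)', 'stroke': 'none'}

# glyph IDs are grouped into sorted, non-overlapping runs of consecutive integers
print(list(ranges([10, 3, 4, 5, 11, 20])))
# [(3, 5), (10, 11), (20, 20)]
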
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/stop-generating/+server.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/stop-generating/+server.ts
deleted file mode 100644
index b27c0ccf2aaafda990d853d34e1f5432c8ad5eaf..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/stop-generating/+server.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { collections } from "$lib/server/database";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-
-/**
- * Ideally, we'd be able to detect the client-side abort, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
- */
-export async function POST({ params, locals }) {
- const conversationId = new ObjectId(params.id);
-
- const conversation = await collections.conversations.findOne({
- _id: conversationId,
- sessionId: locals.sessionId,
- });
-
- if (!conversation) {
- throw error(404, "Conversation not found");
- }
-
- await collections.abortedGenerations.updateOne(
- { conversationId },
- { $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
- { upsert: true }
- );
-
- return new Response();
-}
diff --git a/spaces/DaleChen/AutoGPT/autogpt/memory/redismem.py b/spaces/DaleChen/AutoGPT/autogpt/memory/redismem.py
deleted file mode 100644
index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/memory/redismem.py
+++ /dev/null
@@ -1,156 +0,0 @@
-"""Redis memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-import numpy as np
-import redis
-from colorama import Fore, Style
-from redis.commands.search.field import TextField, VectorField
-from redis.commands.search.indexDefinition import IndexDefinition, IndexType
-from redis.commands.search.query import Query
-
-from autogpt.llm_utils import create_embedding_with_ada
-from autogpt.logs import logger
-from autogpt.memory.base import MemoryProviderSingleton
-
-SCHEMA = [
- TextField("data"),
- VectorField(
- "embedding",
- "HNSW",
- {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"},
- ),
-]
-
-
-class RedisMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- """
- Initializes the Redis memory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- redis_host = cfg.redis_host
- redis_port = cfg.redis_port
- redis_password = cfg.redis_password
- self.dimension = 1536
- self.redis = redis.Redis(
- host=redis_host,
- port=redis_port,
- password=redis_password,
- db=0, # Cannot be changed
- )
- self.cfg = cfg
-
- # Check redis connection
- try:
- self.redis.ping()
- except redis.ConnectionError as e:
- logger.typewriter_log(
- "FAILED TO CONNECT TO REDIS",
- Fore.RED,
- Style.BRIGHT + str(e) + Style.RESET_ALL,
- )
- logger.double_check(
- "Please ensure you have setup and configured Redis properly for use. "
- + f"You can check out {Fore.CYAN + Style.BRIGHT}"
- f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}"
- " to ensure you've set up everything correctly."
- )
- exit(1)
-
- if cfg.wipe_redis_on_start:
- self.redis.flushall()
- try:
- self.redis.ft(f"{cfg.memory_index}").create_index(
- fields=SCHEMA,
- definition=IndexDefinition(
- prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH
- ),
- )
- except Exception as e:
- print("Error creating Redis search index: ", e)
- existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num")
- self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory.
-
- Args:
- data: The data to add.
-
- Returns: Message indicating that the data has been added.
- """
- if "Command Error:" in data:
- return ""
- vector = create_embedding_with_ada(data)
- vector = np.array(vector).astype(np.float32).tobytes()
- data_dict = {b"data": data, "embedding": vector}
- pipe = self.redis.pipeline()
- pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict)
- _text = (
- f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}"
- )
- self.vec_num += 1
- pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num)
- pipe.execute()
- return _text
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
-
- Args:
- data: The data to compare to.
-
- Returns: The most relevant data.
- """
- return self.get_relevant(data, 1)
-
- def clear(self) -> str:
- """
- Clears the redis server.
-
- Returns: A message indicating that the memory has been cleared.
- """
- self.redis.flushall()
- return "Obliviated"
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: A list of the most relevant data.
- """
- query_embedding = create_embedding_with_ada(data)
- base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]"
- query = (
- Query(base_query)
- .return_fields("data", "vector_score")
- .sort_by("vector_score")
- .dialect(2)
- )
- query_vector = np.array(query_embedding).astype(np.float32).tobytes()
-
- try:
- results = self.redis.ft(f"{self.cfg.memory_index}").search(
- query, query_params={"vector": query_vector}
- )
- except Exception as e:
- print("Error calling Redis search: ", e)
- return None
- return [result.data for result in results.docs]
-
- def get_stats(self):
- """
- Returns: The stats of the memory index.
- """
- return self.redis.ft(f"{self.cfg.memory_index}").info()
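
A minimal sketch of how this provider is driven, assuming the RedisMemory class above is in scope, a running Redis instance with the RediSearch module, and OpenAI credentials configured for create_embedding_with_ada. The config attributes mirror what __init__ reads; the values are placeholders.

from types import SimpleNamespace

cfg = SimpleNamespace(
    redis_host="localhost",       # placeholder connection settings
    redis_port=6379,
    redis_password="",
    memory_index="auto-gpt",
    wipe_redis_on_start=True,
)

memory = RedisMemory(cfg)
memory.add("The deployment runs in the eu-west-1 region.")
memory.add("Nightly backups are stored in S3.")
print(memory.get_relevant("Where do backups go?", num_relevant=1))
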
diff --git a/spaces/DemoLou/moe-tts/text/cleaners.py b/spaces/DemoLou/moe-tts/text/cleaners.py
deleted file mode 100644
index eedbeaee8ad73dd4aaf6c12e3f900fc34a1ee630..0000000000000000000000000000000000000000
--- a/spaces/DemoLou/moe-tts/text/cleaners.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import re
-import pyopenjtalk
-
-pyopenjtalk._lazy_init()
-
-
-def japanese_cleaners(text):
- from text.japanese import japanese_to_romaji_with_accent
- text = japanese_to_romaji_with_accent(text)
- text = re.sub(r'([A-Za-z])$', r'\1.', text)
- return text
-
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-
-def korean_cleaners(text):
- '''Pipeline for Korean text'''
- from text.korean import latin_to_hangul, number_to_hangul, divide_hangul
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = divide_hangul(text)
- text = re.sub(r'([\u3131-\u3163])$', r'\1.', text)
- return text
-
-
-def chinese_cleaners(text):
- '''Pipeline for Chinese text'''
- from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text)
- return text
-
-
-def zh_ja_mixture_cleaners(text):
- from text.mandarin import chinese_to_romaji
- from text.japanese import japanese_to_romaji_with_accent
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_romaji(x.group(1)) + ' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent(
- x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…') + ' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def sanskrit_cleaners(text):
- text = text.replace('॥', '।').replace('ॐ', 'ओम्')
- if text[-1] != '।':
- text += ' ।'
- return text
-
-
-def cjks_cleaners(text):
- from text.mandarin import chinese_to_lazy_ipa
- from text.japanese import japanese_to_ipa
- from text.korean import korean_to_lazy_ipa
- from text.sanskrit import devanagari_to_ipa
- from text.english import english_to_lazy_ipa
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_lazy_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_lazy_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[SA\](.*?)\[SA\]',
- lambda x: devanagari_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_lazy_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners(text):
- from text.mandarin import chinese_to_lazy_ipa
- from text.japanese import japanese_to_ipa
- from text.korean import korean_to_ipa
- from text.english import english_to_ipa2
- text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace(
- 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn') + ' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace(
- 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz') + ' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace(
- 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u') + ' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners2(text):
- from text.mandarin import chinese_to_ipa
- from text.japanese import japanese_to_ipa2
- from text.korean import korean_to_ipa
- from text.english import english_to_ipa2
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa2(x.group(1)) + ' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_ipa2(x.group(1)) + ' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def thai_cleaners(text):
- from text.thai import num_to_thai, latin_to_thai
- text = num_to_thai(text)
- text = latin_to_thai(text)
- return text
-
-
-def shanghainese_cleaners(text):
- from text.shanghainese import shanghainese_to_ipa
- text = shanghainese_to_ipa(text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def chinese_dialect_cleaners(text):
- from text.mandarin import chinese_to_ipa2
- from text.japanese import japanese_to_ipa3
- from text.shanghainese import shanghainese_to_ipa
- from text.cantonese import cantonese_to_ipa
- from text.english import english_to_lazy_ipa2
- from text.ngu_dialect import ngu_dialect_to_ipa
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_ipa2(x.group(1)) + ' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ') + ' ', text)
- text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5',
- '˧˧˦').replace(
- '6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e') + ' ', text)
- text = re.sub(r'\[GD\](.*?)\[GD\]',
- lambda x: cantonese_to_ipa(x.group(1)) + ' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_lazy_ipa2(x.group(1)) + ' ', text)
- text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group(
- 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ') + ' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
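
All of the multilingual cleaners above follow the same pattern: each language span is wrapped in paired tags such as [ZH]...[ZH] or [JA]...[JA], and a re.sub call rewrites every tagged span with a per-language converter. The sketch below isolates that dispatch pattern with dummy converters so it runs without the text.* modules; the uppercase/lowercase converters are stand-ins for the real chinese_to_ipa / japanese_to_ipa functions.

import re

def clean_tagged_text(text, converters):
    """Rewrite every [XX]...[XX] span with the converter registered for tag XX."""
    for tag, convert in converters.items():
        pattern = r'\[' + tag + r'\](.*?)\[' + tag + r'\]'
        text = re.sub(pattern, lambda m, c=convert: c(m.group(1)) + ' ', text)
    text = re.sub(r'\s+$', '', text)                   # trim trailing whitespace
    text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)   # ensure final punctuation
    return text

dummy_converters = {"ZH": str.upper, "JA": str.lower}  # stand-ins for the real converters
print(clean_tagged_text("[ZH]ni hao[ZH][JA]KONNICHIWA[JA]", dummy_converters))
# NI HAO konnichiwa.
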
diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/data_viz_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/data_viz_tab.py
deleted file mode 100644
index d46a60cc101fc472e066eb282d8016097ac22529..0000000000000000000000000000000000000000
--- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/data_viz_tab.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import streamlit as st
-from PIL import Image
-import os
-import ast
-import contextlib
-import numpy as np
-import pandas as pd
-import matplotlib.pyplot as plt
-import seaborn as sns
-import plotly.express as px
-import plotly.graph_objects as go
-import plotly.figure_factory as ff
-from wordcloud import WordCloud
-import nltk
-from nltk.corpus import stopwords
-from gensim import corpora
-import networkx as nx
-from sklearn.manifold import TSNE
-from gensim.models import KeyedVectors
-
-
-title = "Data Vizualization"
-sidebar_name = "Data Vizualization"
-
-with contextlib.redirect_stdout(open(os.devnull, "w")):
- nltk.download('stopwords')
-
-# First line to load
-first_line = 0
-# Maximum number of lines to load
-max_lines = 140000
-if ((first_line+max_lines)>137860):
- max_lines = max(137860-first_line ,0)
-# Maximum number of lines to display for the DataFrames
-max_lines_to_display = 50
-
-@st.cache_data(ttl='1h00s')
-def load_data(path):
-
- input_file = os.path.join(path)
- with open(input_file, "r", encoding="utf-8") as f:
- data = f.read()
-
- # Convert uppercase letters to lowercase
- data = data.lower()
-
- data = data.split('\n')
- return data[first_line:min(len(data),first_line+max_lines)]
-
-@st.cache_data(ttl='1h00s')
-def load_preprocessed_data(path,data_type):
-
- input_file = os.path.join(path)
- if data_type == 1:
- return pd.read_csv(input_file, encoding="utf-8", index_col=0)
- else:
- with open(input_file, "r", encoding="utf-8") as f:
- data = f.read()
- data = data.split('\n')
- if data_type==0:
- data=data[:-1]
- elif data_type == 2:
- data=[eval(i) for i in data[:-1]]
- elif data_type ==3:
- data2 = []
- for d in data[:-1]:
- data2.append(ast.literal_eval(d))
- data=data2
- return data
-
-@st.cache_data(ttl='1h00s')
-def load_all_preprocessed_data(lang):
- txt =load_preprocessed_data('data/preprocess_txt_'+lang,0)
- corpus =load_preprocessed_data('data/preprocess_corpus_'+lang,0)
- txt_split = load_preprocessed_data('data/preprocess_txt_split_'+lang,3)
- df_count_word = pd.concat([load_preprocessed_data('data/preprocess_df_count_word1_'+lang,1), load_preprocessed_data('data/preprocess_df_count_word2_'+lang,1)])
- sent_len =load_preprocessed_data('data/preprocess_sent_len_'+lang,2)
- vec_model= KeyedVectors.load_word2vec_format('data/mini.wiki.'+lang+'.align.vec')
- return txt, corpus, txt_split, df_count_word,sent_len, vec_model
-
-# Load the full texts in both languages
-full_txt_en, full_corpus_en, full_txt_split_en, full_df_count_word_en,full_sent_len_en, vec_model_en = load_all_preprocessed_data('en')
-full_txt_fr, full_corpus_fr, full_txt_split_fr, full_df_count_word_fr,full_sent_len_fr, vec_model_fr = load_all_preprocessed_data('fr')
-
-
-def plot_word_cloud(text, title, masque, stop_words, background_color = "white"):
-
- mask_coloring = np.array(Image.open(str(masque)))
- # Define the word cloud mask layer
- wc = WordCloud(background_color=background_color, max_words=200,
- stopwords=stop_words, mask = mask_coloring,
- max_font_size=50, random_state=42)
- # Generate and display the word cloud
- fig=plt.figure(figsize= (20,10))
- plt.title(title, fontsize=25, color="green")
- wc.generate(text)
-
- # getting current axes
- a = plt.gca()
-
- # set visibility of x-axis as False
- xax = a.axes.get_xaxis()
- xax = xax.set_visible(False)
-
- # set visibility of y-axis as False
- yax = a.axes.get_yaxis()
- yax = yax.set_visible(False)
-
- plt.imshow(wc)
- # plt.show()
- st.pyplot(fig)
-
-def drop_df_null_col(df):
- # Check if all values in each column are 0
- columns_to_drop = df.columns[df.eq(0).all()]
- # Drop the columns with all values as 0
- return df.drop(columns=columns_to_drop)
-
-def calcul_occurence(df_count_word):
- nb_occurences = pd.DataFrame(df_count_word.sum().sort_values(axis=0,ascending=False))
- nb_occurences.columns = ['occurences']
- nb_occurences.index.name = 'mot'
- nb_occurences['mots'] = nb_occurences.index
- return nb_occurences
-
-def dist_frequence_mots(df_count_word):
-
- df_count_word = drop_df_null_col(df_count_word)
- nb_occurences = calcul_occurence(df_count_word)
-
- sns.set()
- fig = plt.figure() #figsize=(4,4)
- plt.title("Nombre d'apparitions des mots", fontsize=16)
-
- chart = sns.barplot(x='mots',y='occurences',data=nb_occurences.iloc[:40]);
- chart.set_xticklabels(chart.get_xticklabels(), rotation=45, horizontalalignment='right', size=8)
- st.pyplot(fig)
-
-def dist_longueur_phrase(sent_len,sent_len2, lang1, lang2 ):
- '''
- fig = px.histogram(sent_len, nbins=16, range_x=[3, 18],labels={'count': 'Count', 'variable': 'Nb de mots'},
- color_discrete_sequence=['rgb(200, 0, 0)'], # Couleur des barres de l'histogramme
- opacity=0.7)
- fig.update_traces(marker=dict(color='rgb(200, 0, 0)', line=dict(color='white', width=2)), showlegend=False,)
- fig.update_layout(
- title={'text': 'Distribution du nb de mots/phrase', 'y':1.0, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'},
- title_font=dict(size=28), # Ajuste la taille de la police du titre
- xaxis_title=None,
- xaxis=dict(
- title_font=dict(size=30), # Ajuste la taille de la police de l'axe X
- tickfont=dict(size=22),
- showgrid=True, gridcolor='white'
- ),
- yaxis_title='Count',
- yaxis=dict(
- title_font= dict(size=30, color='black'), # Ajuste la taille de la police de l'axe Y
- title_standoff=10, # Éloigne le label de l'axe X du graphique
- tickfont=dict(size=22),
- showgrid=True, gridcolor='white'
- ),
- margin=dict(l=20, r=20, t=40, b=20), # Ajustez les valeurs de 'r' pour déplacer les commandes à droite
- # legend=dict(x=1, y=1), # Position de la légende à droite en haut
- # width = 600
- height=600, # Définir la hauteur de la figure
- plot_bgcolor='rgba(220, 220, 220, 0.6)',
- )
- st.plotly_chart(fig, use_container_width=True)
- '''
- df = pd.DataFrame({lang1:sent_len,lang2:sent_len2})
- sns.set()
- fig = plt.figure() # figsize=(12, 6*row_nb)
-
- fig.tight_layout()
- chart = sns.histplot(df, color=['r','b'], label=[lang1,lang2], binwidth=1, binrange=[2,22], element="step",
- common_norm=False, multiple="layer", discrete=True, stat='proportion')
- plt.xticks([2,4,6,8,10,12,14,16,18,20,22])
- chart.set(title='Distribution du nombre de mots sur '+str(len(sent_len))+' phrase(s)');
- st.pyplot(fig)
-
- '''
- # fig = ff.create_distplot([sent_len], ['Nb de mots'],bin_size=1, colors=['rgb(200, 0, 0)'])
-
- distribution = pd.DataFrame({'Nb mots':sent_len, 'Nb phrases':[1]*len(sent_len)})
- fig = px.histogram(distribution, x='Nb mots', y='Nb phrases', marginal="box",range_x=[3, 18], nbins=16, hover_data=distribution.columns)
- fig.update_layout(height=600,title={'text': 'Distribution du nb de mots/phrase', 'y':1.0, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'})
- fig.update_traces(marker=dict(color='rgb(200, 0, 0)', line=dict(color='white', width=2)), showlegend=False,)
- st.plotly_chart(fig, use_container_width=True)
- '''
-
-
-def graphe_co_occurence(txt_split,corpus):
- dic = corpora.Dictionary(txt_split) # dictionary of all the words remaining in the tokens
- # Equivalent (more or less) of the DTM: DFM, Document Feature Matrix
- dfm = [dic.doc2bow(tok) for tok in txt_split]
-
- mes_labels = [k for k, v in dic.token2id.items()]
-
- from gensim.matutils import corpus2csc
- term_matrice = corpus2csc(dfm)
-
- term_matrice = np.dot(term_matrice, term_matrice.T)
-
- for i in range(len(mes_labels)):
- term_matrice[i,i]= 0
- term_matrice.eliminate_zeros()
-
- G = nx.from_scipy_sparse_matrix(term_matrice)
- G.add_nodes = dic
- pos=nx.spring_layout(G, k=5) # node positions
-
-
- fig = plt.figure();
- # plt.title("", fontsize=30, color='b',fontweight="bold")
-
- # nx.draw_networkx_labels(G,pos,dic,font_size=15, font_color='b', bbox={"boxstyle": "round,pad=0.2", "fc":"white", "ec":"black", "lw":"0.8", "alpha" : 0.8} )
- nx.draw_networkx_labels(G,pos,dic,font_size=8, font_color='b')
- nx.draw_networkx_nodes(G,pos, dic, \
- node_color="tab:red", \
- node_size=90, \
- cmap=plt.cm.Reds_r, \
- alpha=0.8);
- nx.draw_networkx_edges(G,pos,width=1.0,alpha=0.1)
-
- plt.axis("off");
- st.pyplot(fig)
-
-def proximite():
- global vec_model_en,vec_model_fr
-
- # Creates and TSNE model and plots it"
- labels = []
- tokens = []
-
- nb_words = st.slider('Nombre de mots à afficher :',8,50, value=20)
- df = pd.read_csv('data/dict_we_en_fr',header=0,index_col=0, encoding ="utf-8", keep_default_na=False)
- words_en = df.index.to_list()[:nb_words]
- words_fr = df['Francais'].to_list()[:nb_words]
-
- for word in words_en:
- tokens.append(vec_model_en[word])
- labels.append(word)
- for word in words_fr:
- tokens.append(vec_model_fr[word])
- labels.append(word)
- tokens = pd.DataFrame(tokens)
-
- tsne_model = TSNE(perplexity=10, n_components=2, init='pca', n_iter=2000, random_state=23)
- new_values = tsne_model.fit_transform(tokens)
-
- fig =plt.figure(figsize=(16, 16))
- x = []
- y = []
- for value in new_values:
- x.append(value[0])
- y.append(value[1])
-
- for i in range(len(x)):
- if i137860):
- max_lines = max(137860-first_line,0)
-
- # Load the selected texts (max lines = max_lines)
- last_line = first_line+max_lines
- if (Langue == 'Anglais'):
- txt_en = full_txt_en[first_line:last_line]
- corpus_en = full_corpus_en[first_line:last_line]
- txt_split_en = full_txt_split_en[first_line:last_line]
- df_count_word_en =full_df_count_word_en.loc[first_line:last_line-1]
- sent_len_en = full_sent_len_en[first_line:last_line]
- sent_len_fr = full_sent_len_fr[first_line:last_line]
- else:
- txt_fr = full_txt_fr[first_line:last_line]
- corpus_fr = full_corpus_fr[first_line:last_line]
- txt_split_fr = full_txt_split_fr[first_line:last_line]
- df_count_word_fr =full_df_count_word_fr.loc[first_line:last_line-1]
- sent_len_fr = full_sent_len_fr[first_line:last_line]
- sent_len_en = full_sent_len_en[first_line:last_line]
-
- if (Langue=='Anglais'):
- st.dataframe(pd.DataFrame(data=full_txt_en,columns=['Texte']).loc[first_line:last_line-1].head(max_lines_to_display), width=800)
- else:
- st.dataframe(pd.DataFrame(data=full_txt_fr,columns=['Texte']).loc[first_line:last_line-1].head(max_lines_to_display), width=800)
- st.write("")
-
- tab1, tab2, tab3, tab4, tab5 = st.tabs(["World Cloud", "Frequence","Distribution longueur", "Co-occurence", "Proximité"])
-
- with tab1:
- st.subheader("World Cloud")
- st.markdown(
- """
- On remarque, en changeant de langue, que certains mot de taille importante dans une langue,
- apparaissent avec une taille identique dans l'autre langue.
- La traduction mot à mot sera donc peut-être bonne.
- """
- )
- if (Langue == 'Anglais'):
- text = ""
- # Initialize the stop-word set
- stop_words = set(stopwords.words('english'))
- for e in txt_en : text += e
- plot_word_cloud(text, "English words corpus", "images/coeur.png", stop_words)
- else:
- text = ""
- # Initialize the stop-word set
- stop_words = set(stopwords.words('french'))
- for e in txt_fr : text += e
- plot_word_cloud(text,"Mots français du corpus", "images/coeur.png", stop_words)
-
- with tab2:
- st.subheader("Frequence d'apparition des mots")
- st.markdown(
- """
- On remarque, en changeant de langue, que certains mot fréquents dans une langue,
- apparaissent aussi fréquemment dans l'autre langue.
- Cela peut nous laisser penser que la traduction mot à mot sera peut-être bonne.
- """
- )
- if (Langue == 'Anglais'):
- dist_frequence_mots(df_count_word_en)
- else:
- dist_frequence_mots(df_count_word_fr)
- with tab3:
- st.subheader("Distribution des longueurs de phases")
- st.markdown(
- """
- Malgré quelques différences entre les 2 langues (les phrases anglaises sont généralement un peu plus courtes),
- on constate une certaine similitude dans les ditributions de longueur de phrases.
- Cela peut nous laisser penser que la traduction mot à mot ne sera pas si mauvaise.
- """
- )
- if (Langue == 'Anglais'):
- dist_longueur_phrase(sent_len_en, sent_len_fr, 'Anglais','Français')
- else:
- dist_longueur_phrase(sent_len_fr, sent_len_en, 'Français', 'Anglais')
- with tab4:
- st.subheader("Co-occurence des mots dans une phrase")
- if (Langue == 'Anglais'):
- graphe_co_occurence(txt_split_en[:1000],corpus_en)
- else:
- graphe_co_occurence(txt_split_fr[:1000],corpus_fr)
- with tab5:
- st.subheader("Proximité sémantique des mots (Word Embedding)")
- st.markdown(
- """
- MUSE est une bibliothèque Python pour l'intégration de mots multilingues, qui fournit
- notamment des "Word Embedding" multilingues
- Facebook fournit des dictionnaires de référence. Ces embeddings sont des embeddings fastText Wikipedia pour 30 langues qui ont été alignés dans un espace espace vectoriel unique.
- Dans notre cas, nous avons utilisé 2 mini-dictionnaires d'environ 3000 mots (Français et Anglais).
-
- En novembre 2015, l'équipe de recherche de Facebook a créé fastText qui est une extension de la bibliothèque word2vec.
- Elle s'appuie sur Word2Vec en apprenant des représentations vectorielles pour chaque mot et les n-grammes trouvés dans chaque mot.
- """
- )
- st.write("")
- proximite()
-
\ No newline at end of file
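
The "Proximité" tab above relies on MUSE-aligned fastText vectors loaded with gensim's KeyedVectors; because the English and French embeddings live in one shared space, a word from one model can be looked up directly in the other. A minimal sketch, assuming the two mini .vec files the app loads (data/mini.wiki.en.align.vec and data/mini.wiki.fr.align.vec) are available locally and that the query word is in the mini vocabulary:

from gensim.models import KeyedVectors

vec_en = KeyedVectors.load_word2vec_format("data/mini.wiki.en.align.vec")
vec_fr = KeyedVectors.load_word2vec_format("data/mini.wiki.fr.align.vec")

# the spaces are aligned, so the nearest French vectors to an English word
# are usually its translations
for word, score in vec_fr.similar_by_vector(vec_en["house"], topn=3):
    print(word, round(score, 3))
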
diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/grid/index.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/grid/index.tsx
deleted file mode 100644
index 83bdf555fc742405b59e5e15d9052e918c0e9713..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/interface/grid/index.tsx
+++ /dev/null
@@ -1,26 +0,0 @@
-"use client"
-
-import { ReactNode } from "react"
-
-import { cn } from "@/lib/utils"
-import { useStore } from "@/app/store"
-
-export function Grid({ children, className }: { children: ReactNode; className: string }) {
- const zoomLevel = useStore(state => state.zoomLevel)
-
- return (
-
- {children}
-
- )
-}
-
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.h b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.h
deleted file mode 100644
index c9e2032bcac9d2abde7a75eea4d812da348afadd..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.h
+++ /dev/null
@@ -1,59 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <cuda_runtime.h>
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct upfirdn2d_kernel_params
-{
- const void* x;
- const float* f;
- void* y;
-
- int2 up;
- int2 down;
- int2 pad0;
- int flip;
- float gain;
-
- int4 inSize; // [width, height, channel, batch]
- int4 inStride;
- int2 filterSize; // [width, height]
- int2 filterStride;
- int4 outSize; // [width, height, channel, batch]
- int4 outStride;
- int sizeMinor;
- int sizeMajor;
-
- int loopMinor;
- int loopMajor;
- int loopX;
- int launchMinor;
- int launchMajor;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel specialization.
-
-struct upfirdn2d_kernel_spec
-{
- void* kernel;
- int tileOutW;
- int tileOutH;
- int loopMinor;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p);
-
-//------------------------------------------------------------------------
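
The header above only declares the parameter block consumed by the CUDA kernels; the up, down, pad0, flip and gain fields describe an upsample, FIR-filter, pad, downsample operation. The NumPy sketch below illustrates those semantics for a single 2D channel; it is a conceptual reference under simplified padding, not a bit-for-bit match of the CUDA implementation.

import numpy as np
from scipy.signal import convolve2d

def upfirdn2d_ref(x, f, up=2, down=1, pad=(0, 0), gain=1.0):
    """Reference upfirdn for one 2D channel: upsample, FIR filter, downsample."""
    h, w = x.shape
    ups = np.zeros((h * up, w * up), dtype=x.dtype)
    ups[::up, ::up] = x                              # upsample by zero insertion
    ups = np.pad(ups, ((pad[0], pad[1]), (pad[0], pad[1])))
    out = convolve2d(ups, f * gain, mode="same")     # FIR filtering (no filter flip here)
    return out[::down, ::down]                       # keep every `down`-th sample

x = np.random.rand(8, 8).astype(np.float32)
f = np.ones((2, 2), dtype=np.float32) / 4.0          # simple box filter
print(upfirdn2d_ref(x, f, up=2, down=1).shape)       # (16, 16)
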
diff --git a/spaces/Duskfallcrew/Animated_Dreams/app.py b/spaces/Duskfallcrew/Animated_Dreams/app.py
deleted file mode 100644
index 97484c33b799fe9b3b38c9c7a43d8c015d191d2b..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/Animated_Dreams/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'Duskfallcrew/Animated_Dreams'
-prefix = ''
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-
- generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-
-
-
Animated Dreams
-
-
- Demo for Animated Dreams Stable Diffusion model. Running on Free CPU, if there's a queue make sure you duplicate the space to your own and if you got the funds upgrade to GPU. No prefix tokens. If you like what you see consider donating here: Ko-Fi Duskfallcrew
- {"Add the following tokens to your prompts for the model to work properly: prefix " if prefix else ""}
-
- Running on {"
GPU 🔥 " if torch.cuda.is_available() else f"
CPU 🥶 . For faster inference it is recommended to
upgrade to GPU in Settings "} after duplicating the space
-
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False)
- generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
- image_out = gr.Image(height=512)
- error_output = gr.Markdown()
-
- with gr.Column(scale=45):
- with gr.Tab("Options"):
- with gr.Group():
- neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
- auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix)
-
- with gr.Row():
- guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15)
- steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1)
-
- with gr.Row():
- width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8)
- height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8)
-
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- with gr.Tab("Image to image"):
- with gr.Group():
- image = gr.Image(label="Image", height=256, tool="editor", type="pil")
- strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
- auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False)
-
- inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix]
- outputs = [image_out, error_output]
- prompt.submit(inference, inputs=inputs, outputs=outputs)
- generate.click(inference, inputs=inputs, outputs=outputs)
-
- gr.HTML("""
-
- """)
-
-demo.queue(concurrency_count=1)
-demo.launch()
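
The Gradio app above wraps two diffusers pipelines; stripped of the UI, its text-to-image path reduces to a few lines. A minimal sketch using the same model id and scheduler as above, assuming a CUDA device (on CPU, drop the .to("cuda") and use float32 as the app does); the prompt and output path are placeholders.

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "Duskfallcrew/Animated_Dreams"
scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a cozy cabin in the woods, animated style",   # placeholder prompt
    negative_prompt="blurry, low quality",
    num_inference_steps=25,
    guidance_scale=7.5,
    width=512,
    height=512,
).images[0]
image.save("animated_dreams_sample.png")
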
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/coco.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/coco.py
deleted file mode 100644
index 2b2f7838448cb63dcf96daffe9470d58566d975a..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/coco.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import os
-import json
-import albumentations
-import numpy as np
-from PIL import Image
-from tqdm import tqdm
-from torch.utils.data import Dataset
-
-from taming.data.sflckr import SegmentationBase # for examples included in repo
-
-
-class Examples(SegmentationBase):
- def __init__(self, size=256, random_crop=False, interpolation="bicubic"):
- super().__init__(data_csv="data/coco_examples.txt",
- data_root="data/coco_images",
- segmentation_root="data/coco_segmentations",
- size=size, random_crop=random_crop,
- interpolation=interpolation,
- n_labels=183, shift_segmentation=True)
-
-
-class CocoBase(Dataset):
- """needed for (image, caption, segmentation) pairs"""
- def __init__(self, size=None, dataroot="", datajson="", onehot_segmentation=False, use_stuffthing=False,
- crop_size=None, force_no_crop=False, given_files=None):
- self.split = self.get_split()
- self.size = size
- if crop_size is None:
- self.crop_size = size
- else:
- self.crop_size = crop_size
-
- self.onehot = onehot_segmentation # return segmentation as rgb or one hot
- self.stuffthing = use_stuffthing # include thing in segmentation
- if self.onehot and not self.stuffthing:
- raise NotImplementedError("One hot mode is only supported for the "
- "stuffthings version because labels are stored "
- "a bit differently.")
-
- data_json = datajson
- with open(data_json) as json_file:
- self.json_data = json.load(json_file)
- self.img_id_to_captions = dict()
- self.img_id_to_filepath = dict()
- self.img_id_to_segmentation_filepath = dict()
-
- assert data_json.split("/")[-1] in ["captions_train2017.json",
- "captions_val2017.json"]
- if self.stuffthing:
- self.segmentation_prefix = (
- "data/cocostuffthings/val2017" if
- data_json.endswith("captions_val2017.json") else
- "data/cocostuffthings/train2017")
- else:
- self.segmentation_prefix = (
- "data/coco/annotations/stuff_val2017_pixelmaps" if
- data_json.endswith("captions_val2017.json") else
- "data/coco/annotations/stuff_train2017_pixelmaps")
-
- imagedirs = self.json_data["images"]
- self.labels = {"image_ids": list()}
- for imgdir in tqdm(imagedirs, desc="ImgToPath"):
- self.img_id_to_filepath[imgdir["id"]] = os.path.join(dataroot, imgdir["file_name"])
- self.img_id_to_captions[imgdir["id"]] = list()
- pngfilename = imgdir["file_name"].replace("jpg", "png")
- self.img_id_to_segmentation_filepath[imgdir["id"]] = os.path.join(
- self.segmentation_prefix, pngfilename)
- if given_files is not None:
- if pngfilename in given_files:
- self.labels["image_ids"].append(imgdir["id"])
- else:
- self.labels["image_ids"].append(imgdir["id"])
-
- capdirs = self.json_data["annotations"]
- for capdir in tqdm(capdirs, desc="ImgToCaptions"):
- # there are in average 5 captions per image
- self.img_id_to_captions[capdir["image_id"]].append(np.array([capdir["caption"]]))
-
- self.rescaler = albumentations.SmallestMaxSize(max_size=self.size)
- if self.split=="validation":
- self.cropper = albumentations.CenterCrop(height=self.crop_size, width=self.crop_size)
- else:
- self.cropper = albumentations.RandomCrop(height=self.crop_size, width=self.crop_size)
- self.preprocessor = albumentations.Compose(
- [self.rescaler, self.cropper],
- additional_targets={"segmentation": "image"})
- if force_no_crop:
- self.rescaler = albumentations.Resize(height=self.size, width=self.size)
- self.preprocessor = albumentations.Compose(
- [self.rescaler],
- additional_targets={"segmentation": "image"})
-
- def __len__(self):
- return len(self.labels["image_ids"])
-
- def preprocess_image(self, image_path, segmentation_path):
- image = Image.open(image_path)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
-
- segmentation = Image.open(segmentation_path)
- if not self.onehot and not segmentation.mode == "RGB":
- segmentation = segmentation.convert("RGB")
- segmentation = np.array(segmentation).astype(np.uint8)
- if self.onehot:
- assert self.stuffthing
- # stored in caffe format: unlabeled==255, stuff and thing from
- # 0-181. to be compatible with the labels in
- # https://github.com/nightrome/cocostuff/blob/master/labels.txt
- # we shift stuffthing one to the right and put unlabeled at zero;
- # as long as segmentation is uint8, adding 1 wraps 255 to 0 and
- # handles the latter automatically
- assert segmentation.dtype == np.uint8
- segmentation = segmentation + 1
-
- processed = self.preprocessor(image=image, segmentation=segmentation)
- image, segmentation = processed["image"], processed["segmentation"]
- image = (image / 127.5 - 1.0).astype(np.float32)
-
- if self.onehot:
- assert segmentation.dtype == np.uint8
- # make it one hot
- n_labels = 183
- flatseg = np.ravel(segmentation)
- onehot = np.zeros((flatseg.size, n_labels), dtype=bool) # np.bool was removed in newer NumPy
- onehot[np.arange(flatseg.size), flatseg] = True
- onehot = onehot.reshape(segmentation.shape + (n_labels,)).astype(int)
- segmentation = onehot
- else:
- segmentation = (segmentation / 127.5 - 1.0).astype(np.float32)
- return image, segmentation
-
- def __getitem__(self, i):
- img_path = self.img_id_to_filepath[self.labels["image_ids"][i]]
- seg_path = self.img_id_to_segmentation_filepath[self.labels["image_ids"][i]]
- image, segmentation = self.preprocess_image(img_path, seg_path)
- captions = self.img_id_to_captions[self.labels["image_ids"][i]]
- # randomly draw one of all available captions per image
- caption = captions[np.random.randint(0, len(captions))]
- example = {"image": image,
- "caption": [str(caption[0])],
- "segmentation": segmentation,
- "img_path": img_path,
- "seg_path": seg_path,
- "filename_": img_path.split(os.sep)[-1]
- }
- return example
-
-
-class CocoImagesAndCaptionsTrain(CocoBase):
- """returns a pair of (image, caption)"""
- def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False):
- super().__init__(size=size,
- dataroot="data/coco/train2017",
- datajson="data/coco/annotations/captions_train2017.json",
- onehot_segmentation=onehot_segmentation,
- use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop)
-
- def get_split(self):
- return "train"
-
-
-class CocoImagesAndCaptionsValidation(CocoBase):
- """returns a pair of (image, caption)"""
- def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False,
- given_files=None):
- super().__init__(size=size,
- dataroot="data/coco/val2017",
- datajson="data/coco/annotations/captions_val2017.json",
- onehot_segmentation=onehot_segmentation,
- use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop,
- given_files=given_files)
-
- def get_split(self):
- return "validation"
diff --git a/spaces/EricaCorral/Chinese-Tools-FAST/app.py b/spaces/EricaCorral/Chinese-Tools-FAST/app.py
deleted file mode 100644
index 475153039e5459379084c9fdd0488adb5bad0e21..0000000000000000000000000000000000000000
--- a/spaces/EricaCorral/Chinese-Tools-FAST/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from pypinyin import pinyin
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-from LAC import LAC
-import gradio as gr
-import torch
-
-model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
-model.eval()
-tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
-lac = LAC(mode="seg")
-
-def make_request(chinese_text):
- with torch.no_grad():
- encoded_zh = tokenizer.prepare_seq2seq_batch([chinese_text], return_tensors="pt")
- generated_tokens = model.generate(**encoded_zh)
- return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
-
-def generatepinyin(input):
- pinyin_list = pinyin(input)
- pinyin_string = ""
- for piece in pinyin_list:
- pinyin_string = pinyin_string+" "+piece[0]
- return pinyin_string
-
-def generate_response(Chinese_to_translate):
- response = []
- response.append([Chinese_to_translate,make_request(Chinese_to_translate),generatepinyin(Chinese_to_translate)])
- segmented_string_list = lac.run(Chinese_to_translate)
- for piece in segmented_string_list:
- response.append([piece,make_request(piece),generatepinyin(piece)])
- return response
-
-iface = gr.Interface(
- fn=generate_response,
- title="Chinese to English",
- description="Chinese to English with Helsinki Research's Chinese to English model. Makes for extremely FAST translations.",
- inputs=gr.inputs.Textbox(lines=5, placeholder="Enter text in Chinese"),
- outputs="text")
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/css/chat_style-messenger.css b/spaces/EsoCode/text-generation-webui/css/chat_style-messenger.css
deleted file mode 100644
index 0e5528d86a1298651e7b1c7b5f97eac834db50f4..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/css/chat_style-messenger.css
+++ /dev/null
@@ -1,99 +0,0 @@
-.message {
- padding-bottom: 25px;
- font-size: 15px;
- font-family: Helvetica, Arial, sans-serif;
- line-height: 1.428571429;
-}
-
-.circle-you {
- width: 50px;
- height: 50px;
- background-color: rgb(238, 78, 59);
- border-radius: 50%;
-}
-
-.circle-bot {
- width: 50px;
- height: 50px;
- background-color: rgb(59, 78, 244);
- border-radius: 50%;
- float: left;
- margin-right: 10px;
- margin-top: 5px;
-}
-
-.circle-bot img,
-.circle-you img {
- border-radius: 50%;
- width: 100%;
- height: 100%;
- object-fit: cover;
-}
-
-.circle-you {
- margin-top: 5px;
- float: right;
-}
-
-.circle-bot + .text, .circle-you + .text {
- border-radius: 18px;
- padding: 8px 12px;
-}
-
-.circle-bot + .text {
- background-color: #E4E6EB;
- float: left;
-}
-
-.circle-you + .text {
- float: right;
- background-color: rgb(0, 132, 255);
- margin-right: 10px;
-}
-
-.circle-you + .text div, .circle-you + .text *, .dark .circle-you + .text div, .dark .circle-you + .text * {
- color: #FFF !important;
-}
-
-.circle-you + .text .username {
- text-align: right;
-}
-
-.dark .circle-bot + .text div, .dark .circle-bot + .text * {
- color: #000;
-}
-
-.text {
- max-width: 80%;
-}
-
-.text p {
- margin-top: 5px;
-}
-
-.username {
- font-weight: bold;
-}
-
-.message-body {
-}
-
-.message-body img {
- max-width: 300px;
- max-height: 300px;
- border-radius: 20px;
-}
-
-.message-body p {
- margin-bottom: 0 !important;
- font-size: 15px !important;
- line-height: 1.428571429 !important;
-}
-
-.dark .message-body p em {
- color: rgb(138, 138, 138) !important;
-}
-
-.message-body p em {
- color: rgb(110, 110, 110) !important;
-}
diff --git a/spaces/EsoCode/text-generation-webui/docs/WSL-installation-guide.md b/spaces/EsoCode/text-generation-webui/docs/WSL-installation-guide.md
deleted file mode 100644
index 30b7fa3e6f4613898fbb0d0bd16b77db5d79c14b..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/docs/WSL-installation-guide.md
+++ /dev/null
@@ -1,82 +0,0 @@
-Guide created by [@jfryton](https://github.com/jfryton). Thank you jfryton.
-
------
-
-Here's an easy-to-follow, step-by-step guide for installing Windows Subsystem for Linux (WSL) with Ubuntu on Windows 10/11:
-
-## Step 1: Enable WSL
-
-1. Press the Windows key + X and click on "Windows PowerShell (Admin)" or "Windows Terminal (Admin)" to open PowerShell or Terminal with administrator privileges.
-2. In the PowerShell window, type the following command and press Enter:
-
-```
-wsl --install
-```
-
-If this command doesn't work, you can set the default WSL version manually with the following command on Windows 10:
-
-```
-wsl --set-default-version 1
-```
-
-For Windows 11, you can use:
-
-```
-wsl --set-default-version 2
-```
-
-You may be prompted to restart your computer. If so, save your work and restart.
-
-## Step 2: Install Ubuntu
-
-1. Open the Microsoft Store.
-2. Search for "Ubuntu" in the search bar.
-3. Choose the desired Ubuntu version (e.g., Ubuntu 20.04 LTS) and click "Get" or "Install" to download and install the Ubuntu app.
-4. Once the installation is complete, click "Launch" or search for "Ubuntu" in the Start menu and open the app.
-
-## Step 3: Set up Ubuntu
-
-1. When you first launch the Ubuntu app, it will take a few minutes to set up. Be patient as it installs the necessary files and sets up your environment.
-2. Once the setup is complete, you will be prompted to create a new UNIX username and password. Choose a username and password, and make sure to remember them, as you will need them for future administrative tasks within the Ubuntu environment.
-
-## Step 4: Update and upgrade packages
-
-1. After setting up your username and password, it's a good idea to update and upgrade your Ubuntu system. Run the following commands in the Ubuntu terminal:
-
-```
-sudo apt update
-sudo apt upgrade
-```
-
-2. Enter your password when prompted. This will update the package list and upgrade any outdated packages.
-
-Congratulations! You have now installed WSL with Ubuntu on your Windows 10/11 system. You can use the Ubuntu terminal for various tasks, like running Linux commands, installing packages, or managing files.
-
-You can launch your WSL Ubuntu installation by selecting the Ubuntu app (like any other program installed on your computer) or typing 'ubuntu' into PowerShell or Terminal.
-
-## Step 5: Proceed with Linux instructions
-
-1. You can now follow the Linux setup instructions. If you receive any error messages about a missing tool or package, just install them using apt:
-
-```
-sudo apt install [missing package]
-```
-
-You will probably need to install build-essential
-
-```
-sudo apt install build-essential
-```
-
-If you face any issues or need to troubleshoot, you can always refer to the official Microsoft documentation for WSL: https://docs.microsoft.com/en-us/windows/wsl/
-
-#### WSL2 performance using /mnt:
-When you git clone a repository, put it inside WSL and not outside. To understand more, take a look at this [issue](https://github.com/microsoft/WSL/issues/4197#issuecomment-604592340)
-
-## Bonus: Port Forwarding
-
-By default, you won't be able to access the webui from another device on your local network. You will need to set up the appropriate port forwarding using the following command (using PowerShell or Terminal with administrator privileges).
-
-```
-netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860
-```
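-
-If you later need to check or remove the rule, the following standard netsh commands should work (they are not covered in the original guide):
-
-```
-netsh interface portproxy show all
-netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=7860
-```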
diff --git a/spaces/EsoCode/text-generation-webui/modules/block_requests.py b/spaces/EsoCode/text-generation-webui/modules/block_requests.py
deleted file mode 100644
index 4358a820c26a45612ed03385af85694ccd32f10a..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/modules/block_requests.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import requests
-
-from modules.logging_colors import logger
-
-
-class RequestBlocker:
-
- def __enter__(self):
- self.original_get = requests.get
- requests.get = my_get
-
- def __exit__(self, exc_type, exc_value, traceback):
- requests.get = self.original_get
-
-
-def my_get(url, **kwargs):
- logger.info('Unwanted HTTP request redirected to localhost :)')
- kwargs.setdefault('allow_redirects', True)
- return requests.api.request('get', 'http://127.0.0.1/', **kwargs)
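-
-
-# Hypothetical usage sketch (not part of the original module): while the context
-# manager is active, every requests.get() call is redirected to localhost by my_get.
-#
-# with RequestBlocker():
-#     import some_library_that_phones_home  # its HTTP GETs never leave the machine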
diff --git a/spaces/EuroPython2022/bloom-prompts-spanish/app.py b/spaces/EuroPython2022/bloom-prompts-spanish/app.py
deleted file mode 100644
index 3014b9698778b5b5021d42d05d08f5bc9815054a..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/bloom-prompts-spanish/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import gradio as gr
-import requests
-import json
-import os
-from pathlib import Path
-
-title = "🌸 BLOOM 🌸"
-description = """Gradio Demo for using BLOOM with Spanish prompts. Heavily based on [Bloom demo](https://huggingface.co/spaces/huggingface/bloom_demo)
-Tips:
-- Do NOT talk to BLOOM as an entity, it's not a chatbot but a webpage/blog/article completion model.
-- For the best results: MIMIC a few sentences of a webpage similar to the content you want to generate.
-Start a paragraph as if YOU were writing a blog, webpage, math post, coding article and BLOOM will generate a coherent follow-up. Longer prompts usually give more interesting results.
-Options:
-- sampling: imaginative completions (may be not super accurate e.g. math/history)
-- greedy: accurate completions (may be more boring or have repetitions)
-"""
-
-API_URL = os.getenv("API_URL")
-API_TOKEN = os.getenv("API_TOKEN")
-
-examples = [
- ['Traduce español de España a español de Argentina\nEl coche es rojo - el auto es rojo\nEl ordenador es nuevo - la computadora es nueva\nel boligrafo es negro -', 16, "Sample"],
- ['Estos ejemplos quitan vocales de las palabras\nEjemplos:\nhola - hl\nmanzana - mnzn\npapas - pps\nalacran - lcrn\npapa -', 16, "Sample"],
- ["Un ejemplo de ecuación sencilla sería:\n4x = 40 ; en este caso el valor de x es", 16, "Greedy"],
- ["Si Pedro tiene 4 manzanas y María le quita 2, entonces a Pedro le quedan", 16, "Sample"],
- ["Esta es una conversación entre el modelo de lenguaje BLOOM y uno de sus creadores:\nCreador: Hola, BLOOM! ¿Tienes sentimientos?\nBLOOM:", 32, "Sample"],
- ["Había una vez un circo que alegraba siempre el", 32, "Sample"],
- ['''A continuación se clasifican reseñas de películas:\nComentario: "La película fue un horror"\nEtiqueta: Mala\n\nComentario: "La película me gustó mucho"\nEtiqueta: Buena\n\nComentario: "Es un despropósito de película"\nEtiqueta:''', 16, "Greedy"],
- ['''# La siguiente función hace un petición a la API y devuelve la respuesta en formato JSON\ndef query(payload, model_id, api_token):\n\theaders = {"Authorization": f"Bearer {api_token}"}\n\tAPI_URL = f"https://api-inference.huggingface.co/models/{model_id}"\n\tresponse =''',32, "Sample"],
- ['''Ingredientes de la paella:\n\nArroz bomba - 1500 g\nPollo de corral - 1\nConejo - 0.5 kg\nJudía verde plana''', 32, "Sample"],
- ['''En Barcelona podemos visitiar los siguientes edificios:\n\n- La Sagrada Familia\n- Las Ramblas''', 32, "Sample"]
-]
-
-def query(payload):
- print(payload)
- headers = {"Authorization": f"Bearer {API_TOKEN}"}
- response = requests.request("POST", API_URL, headers=headers, json=payload)
- print(response)
- return json.loads(response.content.decode("utf-8"))
-
-def inference(input_sentence, max_length, sample_or_greedy, seed=42):
-
- if sample_or_greedy == "Sample":
- parameters = {"max_new_tokens": max_length,
- "top_p": 0.9,
- "do_sample": True,
- "seed": seed,
- "early_stopping": False,
- "length_penalty": 0.0,
- "eos_token_id": None}
- else:
- parameters = {"max_new_tokens": max_length,
- "do_sample": False,
- "seed": seed,
- "early_stopping": False,
- "length_penalty": 0.0,
- "eos_token_id": None}
-
- payload = {"inputs": input_sentence,
- "parameters": parameters}
-
- data = query(
- payload
- )
-
- print(data)
- return data[0]['generated_text']
-
-
-gr.Interface(
- inference,
- [
- gr.inputs.Textbox(label="Input"),
- gr.inputs.Slider(1, 64, default=32, step=1, label="Tokens to generate"),
- gr.inputs.Radio(["Sample", "Greedy"], label="Decoding", default="Sample")
- ],
- ["text"],
- examples=examples,
- # article=article,
- cache_examples=False,
- title=title,
- description=description
-).launch()
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_sgd_1200e.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_sgd_1200e.py
deleted file mode 100644
index bc7fbf69b42b11ea9b8ae4d14216d2fcf20e717c..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_sgd_1200e.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.007, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-7, by_epoch=True)
-# running settings
-runner = dict(type='EpochBasedRunner', max_epochs=1200)
-checkpoint_config = dict(interval=100)
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/nrtr/nrtr_modality_transform_academic.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/nrtr/nrtr_modality_transform_academic.py
deleted file mode 100644
index 471926ba998640123ff356c146dc8bbdb9b3c261..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/nrtr/nrtr_modality_transform_academic.py
+++ /dev/null
@@ -1,32 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/recog_models/nrtr_modality_transform.py',
- '../../_base_/schedules/schedule_adam_step_6e.py',
- '../../_base_/recog_datasets/ST_MJ_train.py',
- '../../_base_/recog_datasets/academic_test.py',
- '../../_base_/recog_pipelines/nrtr_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- samples_per_gpu=128,
- workers_per_gpu=4,
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/EzioArno/Goofy/Dockerfile b/spaces/EzioArno/Goofy/Dockerfile
deleted file mode 100644
index e6158e4b2d67eeea6e30ad3c1bb6043ec09b7b9b..0000000000000000000000000000000000000000
--- a/spaces/EzioArno/Goofy/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
-apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/commons.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Felix123456/bingo/src/pages/api/healthz.ts b/spaces/Felix123456/bingo/src/pages/api/healthz.ts
deleted file mode 100644
index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/pages/api/healthz.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- res.status(200).end('ok')
-}
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py
deleted file mode 100644
index 667f96e1ded35d48f163f37e21d1ed8ff191aac3..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501
-
-import torch
-from torch.autograd import Function
-from torch.nn import functional as F
-
-try:
- from . import upfirdn2d_ext
-except ImportError:
- import os
- BASICSR_JIT = os.getenv('BASICSR_JIT')
- if BASICSR_JIT == 'True':
- from torch.utils.cpp_extension import load
- module_path = os.path.dirname(__file__)
- upfirdn2d_ext = load(
- 'upfirdn2d',
- sources=[
- os.path.join(module_path, 'src', 'upfirdn2d.cpp'),
- os.path.join(module_path, 'src', 'upfirdn2d_kernel.cu'),
- ],
- )
-
-
-class UpFirDn2dBackward(Function):
-
- @staticmethod
- def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size):
-
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_ext.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_ext.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0],
- # ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1])
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
-
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_ext.upfirdn2d(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1)
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- if input.device.type == 'cpu':
- out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1])
- else:
- out = UpFirDn2d.apply(input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]))
-
- return out
-
-
-def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1):
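- # Pure-PyTorch reference path: upsample by (up_x, up_y) via zero insertion, apply
- # the requested padding, convolve with the flipped kernel, then subsample by
- # (down_x, down_y).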
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)])
- out = out[:, max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1])
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
- return out.view(-1, channel, out_h, out_w)
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace.sh
deleted file mode 100644
index 54d705899f2751c9c8436938417a73d2f81d710a..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-
-
-sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \
- "[stack-block-pyramid,color-coordinated-sphere-insertion,rainbow-stack,put-block-in-bowl,vertical-insertion-blocks,stack-blocks-in-container]" \
- "[put-block-in-bowl,stack-block-pyramid]" \
- gpt5_mixcliport2_task_new
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 7243d0390f6394fdd528c881bb128b2c13d08037..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3plus_r50-d8.py',
- '../_base_/datasets/cityscapes.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py
deleted file mode 100644
index 9f04e935c39b08de66629f913b30675ffff2a8fe..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_hr18.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/dadam.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/dadam.py
deleted file mode 100644
index a84402f744867610180b9576b2ee3302501fd035..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/dadam.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import TYPE_CHECKING, Any
-
-import torch
-import torch.optim
-import torch.distributed as dist
-
-if TYPE_CHECKING:
- from torch.optim.optimizer import _params_t
-else:
- _params_t = Any
-
-
-logger = logging.getLogger(__name__)
-
-
-def to_real(x):
- if torch.is_complex(x):
- return x.real
- else:
- return x
-
-
-class DAdaptAdam(torch.optim.Optimizer):
- """Adam with D-Adaptation automatic step-sizes.
- Leave LR set to 1 unless you encounter instability.
-
- Args:
- params (iterable):
- Iterable of parameters to optimize or dicts defining parameter groups.
- lr (float):
- Learning rate adjustment parameter. Increases or decreases the D-adapted learning rate.
- betas (tuple[float, float], optional): coefficients used for computing
- running averages of gradient and its square (default: (0.9, 0.999))
- momentum (float):
- Momentum value in the range [0,1) (default: 0.9).
- eps (float):
- Term added to the denominator outside of the root operation to improve numerical stability. (default: 1e-8).
- weight_decay (float):
- Weight decay, i.e. a L2 penalty (default: 0).
- log_every (int):
- Log using print every k steps, default 0 (no logging).
- decouple (boolean):
- Use AdamW style decoupled weight decay
- d0 (float):
- Initial D estimate for D-adaptation (default 1e-6). Rarely needs changing.
- growth_rate (float):
- prevent the D estimate from growing faster than this multiplicative rate.
- Default is inf, for unrestricted. Values like 1.02 give a kind of learning
- rate warmup effect.
- fsdp_in_use (bool):
- If you're using sharded parameters, this should be set to True. The optimizer
- will attempt to auto-detect this, but if you're using an implementation other
- than PyTorch's builtin version, the auto-detection won't work.
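-
- Example (a hedged usage sketch; `model`, `x`, and the hyper-parameters are
- illustrative assumptions, not taken from the original file)::
-
- opt = DAdaptAdam(model.parameters(), lr=1.0, weight_decay=1e-2)
- loss = model(x).sum()
- opt.zero_grad()
- loss.backward()
- opt.step()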
- """
- def __init__(self, params, lr=1.0,
- betas=(0.9, 0.999),
- eps=1e-8,
- weight_decay=0,
- log_every=0,
- decouple=True,
- d0=1e-6,
- growth_rate=float('inf')):
- if not 0.0 < d0:
- raise ValueError("Invalid d0 value: {}".format(d0))
- if not 0.0 < lr:
- raise ValueError("Invalid learning rate: {}".format(lr))
- if not 0.0 < eps:
- raise ValueError("Invalid epsilon value: {}".format(eps))
- if not 0.0 <= betas[0] < 1.0:
- raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
- if not 0.0 <= betas[1] < 1.0:
- raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
-
- if decouple:
- logger.info("Using decoupled weight decay")
-
- from .fsdp import is_fsdp_used
- fsdp_in_use = is_fsdp_used()
- defaults = dict(lr=lr, betas=betas, eps=eps,
- weight_decay=weight_decay,
- d=d0,
- k=0,
- gsq_weighted=0.0,
- log_every=log_every,
- decouple=decouple,
- growth_rate=growth_rate,
- fsdp_in_use=fsdp_in_use)
-
- super().__init__(params, defaults)
-
- @property
- def supports_memory_efficient_fp16(self):
- return False
-
- @property
- def supports_flat_params(self):
- return True
-
- def step(self, closure=None):
- """Performs a single optimization step.
-
- Args:
- closure (callable, optional): A closure that reevaluates the model
- and returns the loss.
- """
- loss = None
- if closure is not None:
- loss = closure()
-
- g_sq = 0.0
- sksq_weighted = 0.0
- sk_l1 = 0.0
-
- lr = max(group['lr'] for group in self.param_groups)
-
- group = self.param_groups[0]
- gsq_weighted = group['gsq_weighted']
- d = group['d']
- dlr = d*lr
-
- growth_rate = group['growth_rate']
- decouple = group['decouple']
- fsdp_in_use = group['fsdp_in_use']
- log_every = group['log_every']
-
- beta1, beta2 = group['betas']
-
- for group in self.param_groups:
- group_lr = group['lr']
- decay = group['weight_decay']
- k = group['k']
- eps = group['eps']
-
- if group_lr not in [lr, 0.0]:
- raise RuntimeError("Setting different lr values in different parameter "
- "groups is only supported for values of 0")
-
- for p in group['params']:
- if p.grad is None:
- continue
- if hasattr(p, "_fsdp_flattened"):
- fsdp_in_use = True
- grad = p.grad.data
-
- # Apply weight decay (coupled variant)
- if decay != 0 and not decouple:
- grad.add_(p.data, alpha=decay)
-
- state = self.state[p]
-
- # State initialization
- if 'step' not in state:
- state['step'] = 0
- state['s'] = torch.zeros_like(p.data, memory_format=torch.preserve_format).detach()
- # Exponential moving average of gradient values
- state['exp_avg'] = torch.zeros_like(p.data, memory_format=torch.preserve_format).detach()
- # Exponential moving average of squared gradient values
- state['exp_avg_sq'] = torch.zeros_like(
- to_real(p.data), memory_format=torch.preserve_format).detach()
-
- exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
-
- grad_grad = to_real(grad * grad.conj())
-
- # Adam EMA updates
- if group_lr > 0:
- exp_avg.mul_(beta1).add_(grad, alpha=dlr*(1-beta1))
- exp_avg_sq.mul_(beta2).add_(grad_grad, alpha=1-beta2)
-
- denom = exp_avg_sq.sqrt().add_(eps)
-
- g_sq += grad_grad.div_(denom).sum().item()
-
- s = state['s']
- s.mul_(beta2).add_(grad, alpha=dlr*(1-beta2))
- sksq_weighted += to_real(s * s.conj()).div_(denom).sum().item()
- sk_l1 += s.abs().sum().item()
-
- ######
-
- gsq_weighted = beta2*gsq_weighted + g_sq*(dlr**2)*(1-beta2)
- d_hat = d
-
- # if we have not made any progress, return
- # if any gradients are available, sk_l1 > 0 (unless \|g\|=0)
- if sk_l1 == 0:
- return loss
-
- if lr > 0.0:
- if fsdp_in_use:
- dist_tensor = torch.zeros(3, device='cuda')
- dist_tensor[0] = sksq_weighted
- dist_tensor[1] = gsq_weighted
- dist_tensor[2] = sk_l1
- dist.all_reduce(dist_tensor, op=dist.ReduceOp.SUM)
- global_sksq_weighted = dist_tensor[0]
- global_gsq_weighted = dist_tensor[1]
- global_sk_l1 = dist_tensor[2]
- else:
- global_sksq_weighted = sksq_weighted
- global_gsq_weighted = gsq_weighted
- global_sk_l1 = sk_l1
-
- d_hat = (global_sksq_weighted/(1-beta2) - global_gsq_weighted)/global_sk_l1
- d = max(d, min(d_hat, d*growth_rate))
-
- if log_every > 0 and k % log_every == 0:
- logger.info(
- f"(k={k}) dlr: {dlr:1.1e} d_hat: {d_hat:1.1e}, d: {d:1.8}. "
- f"sksq_weighted={global_sksq_weighted:1.1e} gsq_weighted={global_gsq_weighted:1.1e} "
- f"sk_l1={global_sk_l1:1.1e}{' (FSDP)' if fsdp_in_use else ''}")
-
- for group in self.param_groups:
- group['gsq_weighted'] = gsq_weighted
- group['d'] = d
-
- group_lr = group['lr']
- decay = group['weight_decay']
- k = group['k']
- eps = group['eps']
-
- for p in group['params']:
- if p.grad is None:
- continue
- grad = p.grad.data
-
- state = self.state[p]
-
- exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
-
- state['step'] += 1
-
- denom = exp_avg_sq.sqrt().add_(eps)
- denom = denom.type(p.type())
-
- # Apply weight decay (decoupled variant)
- if decay != 0 and decouple and group_lr > 0:
- p.data.add_(p.data, alpha=-decay * dlr)
-
- # Take step
- p.data.addcdiv_(exp_avg, denom, value=-1)
-
- group['k'] = k + 1
-
- return loss
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_rope.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_rope.py
deleted file mode 100644
index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_rope.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.modules.rope import RotaryEmbedding
-from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend
-
-
-def test_rope():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_rope_io_dtypes():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32)
- rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64)
-
- # Test bfloat16 inputs w/ both 32 and 64 precision rope.
- xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
- xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
-
- # Test float32 inputs w/ both 32 and 64 precision rope.
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xk_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
- xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
-
-
-def test_transformer_with_rope():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- for pos in ['rope', 'sin_rope']:
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding=pos)
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- out = tr(x)
- assert list(out.shape) == list(x.shape)
-
-
-@torch.no_grad()
-def test_rope_streaming():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, causal=True, dropout=0.,
- custom=True, positional_embedding='rope')
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-@torch.no_grad()
-def test_rope_streaming_past_context():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
-
- for context in [None, 10]:
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=True,
- dropout=0., positional_embedding='rope')
- tr.eval()
-
- steps = 20
- x = torch.randn(3, steps, 16)
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_rope_memory_efficient():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- # Check at float precision b/c this is the rope default.
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm()
-
-
-def test_rope_with_xpos():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_positional_scale():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert torch.allclose(xq, xq_out)
- assert torch.allclose(xk, xk_out)
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/train_t2m_trans.py b/spaces/Grezz/generate_human_motion/VQ-Trans/train_t2m_trans.py
deleted file mode 100644
index 8da444f87aa7ca71cd8bc3604868cf30a6c70e02..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/train_t2m_trans.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import os
-import torch
-import numpy as np
-
-from torch.utils.tensorboard import SummaryWriter
-from os.path import join as pjoin
-from torch.distributions import Categorical
-import json
-import clip
-
-import options.option_transformer as option_trans
-import models.vqvae as vqvae
-import utils.utils_model as utils_model
-import utils.eval_trans as eval_trans
-from dataset import dataset_TM_train
-from dataset import dataset_TM_eval
-from dataset import dataset_tokenize
-import models.t2m_trans as trans
-from options.get_eval_option import get_opt
-from models.evaluator_wrapper import EvaluatorModelWrapper
-import warnings
-warnings.filterwarnings('ignore')
-
-##### ---- Exp dirs ---- #####
-args = option_trans.get_args_parser()
-torch.manual_seed(args.seed)
-
-args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
-args.vq_dir= os.path.join("./dataset/KIT-ML" if args.dataname == 'kit' else "./dataset/HumanML3D", f'{args.vq_name}')
-os.makedirs(args.out_dir, exist_ok = True)
-os.makedirs(args.vq_dir, exist_ok = True)
-
-##### ---- Logger ---- #####
-logger = utils_model.get_logger(args.out_dir)
-writer = SummaryWriter(args.out_dir)
-logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
-
-##### ---- Dataloader ---- #####
-train_loader_token = dataset_tokenize.DATALoader(args.dataname, 1, unit_length=2**args.down_t)
-
-from utils.word_vectorizer import WordVectorizer
-w_vectorizer = WordVectorizer('./glove', 'our_vab')
-val_loader = dataset_TM_eval.DATALoader(args.dataname, False, 32, w_vectorizer)
-
-dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
-
-wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
-eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
-
-##### ---- Network ---- #####
-clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training
-clip.model.convert_weights(clip_model) # Actually this line is unnecessary since CLIP is already in float16 by default
-clip_model.eval()
-for p in clip_model.parameters():
- p.requires_grad = False
-
-net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
- args.nb_code,
- args.code_dim,
- args.output_emb_width,
- args.down_t,
- args.stride_t,
- args.width,
- args.depth,
- args.dilation_growth_rate)
-
-
-trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
- embed_dim=args.embed_dim_gpt,
- clip_dim=args.clip_dim,
- block_size=args.block_size,
- num_layers=args.num_layers,
- n_head=args.n_head_gpt,
- drop_out_rate=args.drop_out_rate,
- fc_rate=args.ff_rate)
-
-
-print ('loading checkpoint from {}'.format(args.resume_pth))
-ckpt = torch.load(args.resume_pth, map_location='cpu')
-net.load_state_dict(ckpt['net'], strict=True)
-net.eval()
-net.cuda()
-
-if args.resume_trans is not None:
- print ('loading transformer checkpoint from {}'.format(args.resume_trans))
- ckpt = torch.load(args.resume_trans, map_location='cpu')
- trans_encoder.load_state_dict(ckpt['trans'], strict=True)
-trans_encoder.train()
-trans_encoder.cuda()
-
-##### ---- Optimizer & Scheduler ---- #####
-optimizer = utils_model.initial_optim(args.decay_option, args.lr, args.weight_decay, trans_encoder, args.optimizer)
-scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_scheduler, gamma=args.gamma)
-
-##### ---- Optimization goals ---- #####
-loss_ce = torch.nn.CrossEntropyLoss()
-
-nb_iter, avg_loss_cls, avg_acc = 0, 0., 0.
-right_num = 0
-nb_sample_train = 0
-
-##### ---- get code ---- #####
-for batch in train_loader_token:
- pose, name = batch
- bs, seq = pose.shape[0], pose.shape[1]
-
- pose = pose.cuda().float() # bs, nb_joints, joints_dim, seq_len
- target = net.encode(pose)
- target = target.cpu().numpy()
- np.save(pjoin(args.vq_dir, name[0] +'.npy'), target)
-
-
-train_loader = dataset_TM_train.DATALoader(args.dataname, args.batch_size, args.nb_code, args.vq_name, unit_length=2**args.down_t)
-train_loader_iter = dataset_TM_train.cycle(train_loader)
-
-
-##### ---- Training ---- #####
-best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, clip_model=clip_model, eval_wrapper=eval_wrapper)
-while nb_iter <= args.total_iter:
-
- batch = next(train_loader_iter)
- clip_text, m_tokens, m_tokens_len = batch
- m_tokens, m_tokens_len = m_tokens.cuda(), m_tokens_len.cuda()
- bs = m_tokens.shape[0]
- target = m_tokens # (bs, 26)
- target = target.cuda()
-
- text = clip.tokenize(clip_text, truncate=True).cuda()
-
- feat_clip_text = clip_model.encode_text(text).float()
-
- input_index = target[:,:-1]
-
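- # Token corruption: keep each motion token with probability pkeep (a random
- # probability per batch when pkeep == -1) and replace the rest with random
- # codebook indices before feeding them to the transformer.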
- if args.pkeep == -1:
- proba = np.random.rand(1)[0]
- mask = torch.bernoulli(proba * torch.ones(input_index.shape,
- device=input_index.device))
- else:
- mask = torch.bernoulli(args.pkeep * torch.ones(input_index.shape,
- device=input_index.device))
- mask = mask.round().to(dtype=torch.int64)
- r_indices = torch.randint_like(input_index, args.nb_code)
- a_indices = mask*input_index+(1-mask)*r_indices
-
- cls_pred = trans_encoder(a_indices, feat_clip_text)
- cls_pred = cls_pred.contiguous()
-
- loss_cls = 0.0
- for i in range(bs):
- # loss function (26), (26, 513)
- loss_cls += loss_ce(cls_pred[i][:m_tokens_len[i] + 1], target[i][:m_tokens_len[i] + 1]) / bs
-
- # Accuracy
- probs = torch.softmax(cls_pred[i][:m_tokens_len[i] + 1], dim=-1)
-
- if args.if_maxtest:
- _, cls_pred_index = torch.max(probs, dim=-1)
-
- else:
- dist = Categorical(probs)
- cls_pred_index = dist.sample()
- right_num += (cls_pred_index.flatten(0) == target[i][:m_tokens_len[i] + 1].flatten(0)).sum().item()
-
- ## global loss
- optimizer.zero_grad()
- loss_cls.backward()
- optimizer.step()
- scheduler.step()
-
- avg_loss_cls = avg_loss_cls + loss_cls.item()
- nb_sample_train = nb_sample_train + (m_tokens_len + 1).sum().item()
-
- nb_iter += 1
- if nb_iter % args.print_iter == 0 :
- avg_loss_cls = avg_loss_cls / args.print_iter
- avg_acc = right_num * 100 / nb_sample_train
- writer.add_scalar('./Loss/train', avg_loss_cls, nb_iter)
- writer.add_scalar('./ACC/train', avg_acc, nb_iter)
- msg = f"Train. Iter {nb_iter} : Loss. {avg_loss_cls:.5f}, ACC. {avg_acc:.4f}"
- logger.info(msg)
- avg_loss_cls = 0.
- right_num = 0
- nb_sample_train = 0
-
- if nb_iter % args.eval_iter == 0:
- best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, nb_iter, best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, clip_model=clip_model, eval_wrapper=eval_wrapper)
-
- if nb_iter == args.total_iter:
- msg_final = f"Train. Iter {best_iter} : FID. {best_fid:.5f}, Diversity. {best_div:.4f}, TOP1. {best_top1:.4f}, TOP2. {best_top2:.4f}, TOP3. {best_top3:.4f}"
- logger.info(msg_final)
- break
\ No newline at end of file
diff --git a/spaces/HOLYBOY/Customer_Churn_App/app.py b/spaces/HOLYBOY/Customer_Churn_App/app.py
deleted file mode 100644
index ea396bde2b825caed041daef91dd36ce7ea916c3..0000000000000000000000000000000000000000
--- a/spaces/HOLYBOY/Customer_Churn_App/app.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import streamlit as st
-import joblib
-import pandas as pd
-import numpy as np
-import plotly.graph_objects as go
-from PIL import Image
-import time
-import matplotlib.pyplot as plt
-from io import BytesIO
-
-
-num_imputer = joblib.load('numerical_imputer.joblib')
-cat_imputer = joblib.load('cat_imputer.joblib')
-encoder = joblib.load('encoder.joblib')
-scaler = joblib.load('scaler.joblib')
-lr_model = joblib.load('lr_smote_model.joblib')
-
-
-def preprocess_input(input_data):
- input_df = pd.DataFrame(input_data, index=[0])
-
- cat_columns = [col for col in input_df.columns if input_df[col].dtype == 'object']
- num_columns = [col for col in input_df.columns if input_df[col].dtype != 'object']
-
- input_df_imputed_cat = cat_imputer.transform(input_df[cat_columns])
- input_df_imputed_num = num_imputer.transform(input_df[num_columns])
-
- input_encoded_df = pd.DataFrame(encoder.transform(input_df_imputed_cat).toarray(),
- columns=encoder.get_feature_names_out(cat_columns))
-
- input_df_scaled = scaler.transform(input_df_imputed_num)
- input_scaled_df = pd.DataFrame(input_df_scaled, columns=num_columns)
- final_df = pd.concat([input_encoded_df, input_scaled_df], axis=1)
- final_df = final_df.reindex(columns=original_feature_names, fill_value=0)
-
- return final_df
-
-
-original_feature_names = ['MONTANT', 'FREQUENCE_RECH', 'REVENUE', 'ARPU_SEGMENT', 'FREQUENCE',
- 'DATA_VOLUME', 'ON_NET', 'ORANGE', 'TIGO', 'ZONE1', 'ZONE2', 'REGULARITY', 'FREQ_TOP_PACK',
- 'REGION_DAKAR', 'REGION_DIOURBEL', 'REGION_FATICK', 'REGION_KAFFRINE', 'REGION_KAOLACK',
- 'REGION_KEDOUGOU', 'REGION_KOLDA', 'REGION_LOUGA', 'REGION_MATAM', 'REGION_SAINT-LOUIS',
- 'REGION_SEDHIOU', 'REGION_TAMBACOUNDA', 'REGION_THIES', 'REGION_ZIGUINCHOR',
- 'TENURE_Long-term', 'TENURE_Medium-term', 'TENURE_Mid-term', 'TENURE_Short-term',
- 'TENURE_Very short-term', 'TOP_PACK_VAS', 'TOP_PACK_data', 'TOP_PACK_international',
- 'TOP_PACK_messaging', 'TOP_PACK_other_services', 'TOP_PACK_social_media',
- 'TOP_PACK_voice']
-
-# Set up the Streamlit app
-st.set_page_config(layout="wide")
-
-# Main page - Churn Prediction
-st.title('CUSTOMER CHURN PREDICTION APP (CCPA)')
-
-st.markdown("Churn is a one of the biggest problem in the telecom industry. Research has shown that the average monthly churn rate among the top 4 wireless carriers in the US is 1.9% - 2%")
-st.image("bg.png", use_column_width=True)
-
- # How to use
-st.sidebar.image("welcome.png", use_column_width=True)
-# st.sidebar.title("ENTER THE DETAILS OF THE CUSTOMER HERE")
-
-# Define a dictionary of models with their names, actual models, and types
-models = {
- 'Logistic Regression': {'Logistic Regression': lr_model, 'type': 'logistic_regression'},
- #'ComplementNB': {'ComplementNB': cnb_model, 'type': 'Complement NB'}
-}
-
-# Allow the user to select a model from the sidebar
-model_name = st.sidebar.selectbox('Logistic Regression', list(models.keys()))
-
-# Retrieve the selected model and its type from the dictionary
-model = models[model_name]['Logistic Regression']
-model_type = models[model_name]['type']
-
-
-# Collect input from the user
-st.sidebar.title('ENTER CUSTOMER DETAILS')
-input_features = {
- 'MONTANT': st.sidebar.number_input('Top-up Amount (MONTANT)'),
- 'FREQUENCE_RECH': st.sidebar.number_input('No. of Times the Customer Refilled (FREQUENCE_RECH)'),
- 'REVENUE': st.sidebar.number_input('Monthly income of the client (REVENUE)'),
- 'ARPU_SEGMENT': st.sidebar.number_input('Income over 90 days / 3 (ARPU_SEGMENT)'),
- 'FREQUENCE': st.sidebar.number_input('Number of times the client has made an income (FREQUENCE)'),
- 'DATA_VOLUME': st.sidebar.number_input('Number of Connections (DATA_VOLUME)'),
- 'ON_NET': st.sidebar.number_input('Inter Expresso Call (ON_NET)'),
- 'ORANGE': st.sidebar.number_input('Call to Orange (ORANGE)'),
- 'TIGO': st.sidebar.number_input('Call to Tigo (TIGO)'),
- 'ZONE1': st.sidebar.number_input('Call to Zone 1 (ZONE1)'),
- 'ZONE2': st.sidebar.number_input('Call to Zone 2 (ZONE2)'),
- 'REGULARITY': st.sidebar.number_input('Number of Times the Client is Active for 90 Days (REGULARITY)'),
- 'FREQ_TOP_PACK': st.sidebar.number_input('Number of Times the Client has Activated the Top Packs (FREQ_TOP_PACK)'),
- 'REGION': st.sidebar.selectbox('Location of Each Client (REGION)', ['DAKAR','DIOURBEL','FATICK','KAFFRINE','KAOLACK',
- 'KEDOUGOU','KOLDA','LOUGA','MATAM','SAINT-LOUIS',
- 'SEDHIOU','TAMBACOUNDA','THIES','ZIGUINCHOR' ]), # names must match the REGION_* columns in original_feature_names
-
- 'TENURE': st.sidebar.selectbox('Duration in the Network (TENURE)', ['Long-term','Medium-term','Mid-term','Short-term',
- 'Very short-term']),
- 'TOP_PACK': st.sidebar.selectbox('Most Active Pack (TOP_PACK)', ['VAS', 'data', 'international',
- 'messaging','other_services', 'social_media',
- 'voice'])
-
-}
-
-# Input validation
-valid_input = True
-error_messages = []
-
-# Validate numeric inputs
-numeric_ranges = {
- 'MONTANT': [0, 1000000],
- 'FREQUENCE_RECH': [0, 100],
- 'REVENUE': [0, 1000000],
- 'ARPU_SEGMENT': [0, 100000],
- 'FREQUENCE': [0, 100],
- 'DATA_VOLUME': [0, 100000],
- 'ON_NET': [0, 100000],
- 'ORANGE': [0, 100000],
- 'TIGO': [0, 100000],
- 'ZONE1': [0, 100000],
- 'ZONE2': [0, 100000],
- 'REGULARITY': [0, 100],
- 'FREQ_TOP_PACK': [0, 100]
-}
-
-for feature, value in input_features.items():
- range_min, range_max = numeric_ranges.get(feature, [None, None])
- if range_min is not None and range_max is not None:
- if not range_min <= value <= range_max:
- valid_input = False
- error_messages.append(f"{feature} should be between {range_min} and {range_max}.")
-
-#Churn Prediction
-
-def predict_churn(input_data, model):
- # Preprocess the input data
- preprocessed_data = preprocess_input(input_data)
-
- # Calculate churn probabilities using the model
- probabilities = model.predict_proba(preprocessed_data)
-
- # Determine churn labels based on the model type
- if model_type == "logistic_regression":
- churn_labels = ["No Churn", "Churn"]
- #elif model_type == "complement_nb":
- # churn_labels = ["Churn", "No Churn"]
- # Extract churn probability for the first sample
- churn_probability = probabilities[0]
-
- # Create a dictionary mapping churn labels to their indices
- churn_indices = {label: idx for idx, label in enumerate(churn_labels)}
-
- # Determine the index with the highest churn probability
- churn_index = np.argmax(churn_probability)
-
- # Return churn labels, churn probabilities, churn indices, and churn index
- return churn_labels, churn_probability, churn_indices, churn_index
-
-# Predict churn based on user input
-if st.sidebar.button('Predict Churn'):
- try:
- with st.spinner("Wait, Results loading..."):
- # Simulate a long-running process
- progress_bar = st.progress(0)
- step = 20 # A big step will reduce the execution time
- for i in range(0, 100, step):
- time.sleep(0.1)
- progress_bar.progress(i + step)
-
- #churn_labels, churn_probability = predict_churn(input_features, model) # Pass model1 or model2 based on the selected model
- churn_labels, churn_probability, churn_indices, churn_index = predict_churn(input_features, model)
-
- st.subheader('CHURN PREDICTION RESULTS')
-
-
-
- col1, col2 = st.columns(2)
-
- if churn_labels[churn_index] == "Churn":
- churn_prob = churn_probability[churn_index]
- with col1:
- st.error(f"DANGER! This customer is likely to churn with a probability of {churn_prob * 100:.2f}% 😢")
- resized_churn_image = Image.open('Churn.jpeg')
- resized_churn_image = resized_churn_image.resize((350, 300)) # Adjust the width and height as desired
- st.image(resized_churn_image)
- # Add suggestions for retaining churned customers in the 'Churn' group
- with col2:
- st.info("ADVICE TO EXPRESSOR MANAGEMENT:\n"
- "- Identify Reasons for Churn\n"
- "- Offer Incentives\n"
- "- Showcase Improvements\n"
- "- Gather Feedback\n"
- "- Customer Surveys\n"
- "- Personalized Recommendations\n"
- "- Reestablish Trust\n"
- "- Follow-Up Communication\n"
- "- Reactivation Campaigns\n"
- "- Improve product or service offerings based on customer feedback\n"
- " SUMMARY NOTE\n"
- "- Remember that winning back churning customers takes time and persistence.\n"
- "- It\s crucial to genuinely address their concerns and provide value to rebuild their trust in your business\n"
- "- Regularly evaluate the effectiveness of your strategies and adjust them as needed based on customer responses and feedback\n")
- else:
- churn_prob = churn_probability[churn_index]
- with col1:
- st.success(f"This customer is a loyal (not churn) with a probability of {churn_prob * 100:.2f}% 😀")
- resized_not_churn_image = Image.open('NotChurn.png')
- resized_not_churn_image = resized_not_churn_image.resize((350, 300)) # Adjust the width and height as desired
- st.image(resized_not_churn_image)
- # Add suggestions for retaining loyal customers in the 'No Churn' group
- with col2:
- st.info("ADVICE TO EXPRESSOR MANAGEMENT\n"
- "- Quality Products/Services\n"
- "- Personalized Experience\n"
- "- Loyalty Programs\n"
- "- Excellent Customer Service\n"
- "- Exclusive Content\n"
- "- Early Access\n"
- "- Personal Thank-You Notes\n"
- "- Surprise Gifts or Discounts\n"
- "- Feedback Opportunities\n"
- "- Community Engagement\n"
- "- Anniversary Celebrations\n"
- "- Refer-a-Friend Programs\n"
- "SUMMARY NOTE\n"
- "- Remember that the key to building lasting loyalty is consistency.\n"
- "- Continuously demonstrate your commitment to meeting customers needs and enhancing their experience.\n"
- "- Regularly assess the effectiveness of your loyalty initiatives and adapt them based on customer feedback and preferences.")
-
- st.subheader('Churn Probability')
-
- # Create a donut chart to display probabilities
- fig = go.Figure(data=[go.Pie(
- labels=churn_labels,
- values=churn_probability,
- hole=0.5,
- textinfo='label+percent',
- marker=dict(colors=['#FFA07A', '#6495ED', '#FFD700', '#32CD32', '#FF69B4', '#8B008B']))])
-
- fig.update_traces(
- hoverinfo='label+percent',
- textfont_size=12,
- textposition='inside',
- texttemplate='%{label}: %{percent:.2%}'
- )
-
- fig.update_layout(
- title='Churn Probability',
- title_x=0.5,
- showlegend=False,
- width=500,
- height=500
- )
-
- st.plotly_chart(fig, use_container_width=True)
-
- # Calculate the average churn rate (replace with your actual value)
-
- st.subheader('Customer Churn Probability Comparison')
-
- average_churn_rate = 19
-
- # Convert the overall churn rate to churn probability
- main_data_churn_probability = average_churn_rate / 100
-
- # Retrieve the predicted churn probability for the selected customer
- predicted_churn_prob = churn_probability[churn_index]
-
- if churn_labels[churn_index] == "Churn":
- churn_prob = churn_probability[churn_index]
- # Create a bar chart comparing the churn probability with the average churn rate
- labels = ['Churn Probability', 'Average Churn Probability']
- values = [predicted_churn_prob, main_data_churn_probability]
-
- fig = go.Figure(data=[go.Bar(x=labels, y=values)])
- fig.update_layout(
- xaxis_title='Churn Probability',
- yaxis_title='Probability',
- title='Comparison with Average Churn Rate',
- yaxis=dict(range=[0, 1]) # Set the y-axis limits between 0 and 1
- )
-
- # Add explanations
- if predicted_churn_prob > main_data_churn_probability:
- churn_comparison = "higher"
- elif predicted_churn_prob < main_data_churn_probability:
- churn_comparison = "lower"
- else:
- churn_comparison = "equal"
-
- explanation = f"This compares the churn probability of the selected customer " \
- f"with the average churn rate of all customers. It provides insights into how the " \
- f"individual customer's churn likelihood ({predicted_churn_prob:.2f}) compares to the " \
- f"overall trend. The 'Churn Probability' represents the likelihood of churn " \
- f"for the selected customer, while the 'Average Churn Rate' represents the average " \
- f"churn rate across all customers ({main_data_churn_probability:.2f}).\n\n" \
- f"The customer's churn rate is {churn_comparison} than the average churn rate."
-
- st.plotly_chart(fig)
- st.write(explanation)
- else:
- # Create a bar chart comparing the no-churn probability with the average churn rate
- labels = ['No-Churn Probability', 'Average Churn Probability']
- values = [1 - predicted_churn_prob, main_data_churn_probability]
-
- fig = go.Figure(data=[go.Bar(x=labels, y=values)])
- fig.update_layout(
- xaxis_title='Churn Probability',
- yaxis_title='Probability',
- title='Comparison with Average Churn Rate',
- yaxis=dict(range=[0, 1]) # Set the y-axis limits between 0 and 1
- )
-
- explanation = f"This bar chart compares the churn probability of the selected customer " \
- f"with the average churn rate of all customers. It provides insights into how the " \
- f"individual customer's churn likelihood ({predicted_churn_prob:.2f}) compares to the " \
- f"overall trend." \
- f"The prediction indicates that the customer is not likely to churn. " \
- f"The churn probability is lower than the no-churn probability."
-
- st.plotly_chart(fig)
- st.write(explanation)
- except Exception as e:
- st.error(f"An error occurred: {str(e)}")
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/fused_act.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/fused_act.py
deleted file mode 100644
index 7e3d464ae656920c6875bc877281cadb2eaa4105..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/fused_act.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import os
-import platform
-
-import torch
-from torch import nn
-from torch.autograd import Function
-import torch.nn.functional as F
-from torch.utils.cpp_extension import load
-
-use_fallback = False
-
-# Try loading precompiled, otherwise use native fallback
-try:
- import fused
-except ModuleNotFoundError as e:
- print('StyleGAN2: Optimized CUDA op FusedLeakyReLU not available, using native PyTorch fallback.')
- use_fallback = True
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- if use_fallback or input.device.type == 'cpu':
- return scale * F.leaky_relu(
- input + bias.view((1, -1)+(1,)*(input.ndim-2)), negative_slope=negative_slope
- )
- else:
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/quant_noise/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/quant_noise/README.md
deleted file mode 100644
index a04d7e4e8a077f11c9f63cfa3d1f20e2b899be8c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/quant_noise/README.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Training with Quantization Noise for Extreme Model Compression ({Fan\*, Stock\*} *et al.*, 2020)
-This page explains how to train and quantize models with Quantization Noise, for both scalar quantization like `int8` and Iterative Product Quantization.
-Check out our paper [here](https://arxiv.org/abs/2004.07320).
-
-Looking for pretrained models? They will be added shortly.
-Looking for code to train vision models? We are working on open sourcing our code as part of ClassyVision. Please check back, but note that both the Scalar and Iterative Product Quantization counterparts of the `nn.Conv2d` module are already included in this release.
-
-**Contents**:
-- [Walk through of code](#walk-through-the-code)
-- [Reproduce NLP Results](#looking-to-reproduce-the-nlp-results-in-the-paper)
-- [Reproduce Vision Results](#looking-to-reproduce-the-vision-results-in-the-paper)
-
-
-## Citation
-```bibtex
-@article{fan2020training,
- title={Training with Quantization Noise for Extreme Model Compression},
- author={Angela Fan* and Pierre Stock* and Benjamin Graham and Edouard Grave and Remi Gribonval and Herve Jegou and Armand Joulin},
- year={2020},
- eprint={2004.07320},
- archivePrefix={arXiv},
- primaryClass={cs.ML}
-}
-```
-
-## Walk through the code
-
-Training a model with Quant-Noise improves performance in subsequent inference-time quantization by making the model robust to quantization. The technique is useful for both scalar and product quantization methods, across multiple domains. Below we detail how to train and quantize models with Quant-Noise, and how to integrate our code to quantize your own models.
-
-### Scalar Quantization
-
-Unlike the section [Iterative Product Quantization](#iterative-product-quantization) which gives state-of-the-art compression, this section showcases the usefulness of our approach for simple scalar quantization baselines such as int8 using on-GPU Fake Quantization.
-
-#### Training
-
-Scalar quantization with Quant-Noise consists of randomly quantizing a proportion `p` of the weights during training. Scalar quantization is implemented [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/scalar) in the form of Fake Quantization, meaning that we emulate int8 on GPU by quantizing and de-quantizing both the weights and the activations. We rely on PyTorch's [quantization primitives](https://github.com/pytorch/pytorch/tree/master/torch/quantization).
-
-To train a model with Quant-Noise, add the following flag:
-```
---quant-noise-scalar 0.5
-```
-Large values of noise make the network easier to quantize but may result in higher non-quantized test and validation perplexities.
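-
-For intuition only, the Fake Quantization described above can be sketched in a few lines of plain PyTorch. This is an illustrative emulation of symmetric, per-tensor int8 (not the actual fairseq implementation; the helper names below are made up for this sketch):
-
-```python
-import torch
-
-def fake_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
-    """Quantize and immediately de-quantize a tensor to emulate int8 in float."""
-    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
-    scale = w.abs().max().clamp(min=1e-8) / qmax    # symmetric per-tensor scale
-    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax)
-    return w_q * scale                              # back to float, now carrying quantization error
-
-def quant_noise_scalar(w: torch.Tensor, p: float) -> torch.Tensor:
-    """Fake-quantize a random proportion p of the weights (scalar Quant-Noise)."""
-    mask = (torch.rand_like(w) < p).float()
-    return mask * fake_quantize(w) + (1 - mask) * w
-```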
-
-#### Quantization
-
-When evaluating a network, all quantized modules and activation hooks automatically switch to `p=1`, so the validation accuracy reported by Fairseq is already the quantized accuracy; nothing more needs to be done.
-
-
-#### Integration with your own code
-
-Looking to quantize your own models with Quant-Noise + Scalar Quantization?
-- Use the function `quantize_model_` implemented [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/scalar/utils.py) to (1) replace all your modules by their quantized counterparts and (2) add hooks to those modules to quantize the activations.
-- Then, perform your training as usual. Note that in `eval()` mode, the network is always fully quantized (weights and activations) by default (`p=1`).
-
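-A minimal sketch of those two steps follows. The exact import path and keyword arguments of `quantize_model_` are assumptions here; check the linked `utils.py` for the actual signature:
-
-```python
-from fairseq.modules.quantization.scalar import quantize_model_  # path as linked above (assumption)
-
-model = build_my_model()  # placeholder: your own model-construction code
-
-# (1) Replace modules by their quantized counterparts and hook the activations.
-# p is the proportion of weights fake-quantized during training; the keyword
-# names (p, bits) are assumptions to verify against the linked utils.py.
-quantize_model_(model, p=0.5, bits=8)
-
-# (2) Train as usual; in eval() mode the network is fully quantized (p=1).
-train(model)  # placeholder: your usual training loop
-```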
-
-
-### Iterative Product Quantization
-
-
-Iterative Product Quantization with Quant-Noise proceeds in two steps. First, a model must be trained uncompressed with Quant-Noise. Second, the model must be quantized with iPQ. Note that we implement here the simplest form of noise, which consists of randomly dropping a proportion `p` of blocks; this worked as well as assigning those blocks to their current centroid.
-
-#### Training
-
-To train a model with Quant-Noise, add the following flags:
-```
---quant-noise-pq 0.1 --quant-noise-pq-block-size 8
-```
-`quant-noise-pq` controls how much dropout is applied to the blocks of the weight matrix. `quant-noise-pq-block-size` controls the size of the weight matrix blocks.
-We recommend training with 0.05 to 0.2 Quant-Noise, a value that worked well in our experiments. For the block-size, we recommend training with block-size of 8. Note that the block size must be a multiple of `input_features`, see the size checks [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py). Large block sizes result in higher compression ratio but may induce a loss in accuracy.
-
-We currently support training Transformer based models, such as sequence-to-sequence, language models, and BERT architectures. The `quant_noise` function [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py) wraps a module. It splits a weight matrix into blocks and applies random dropout to these blocks.
-In the Transformer architectures, quant-noise is applied to the input and output embeddings, the attention, and the FFN.
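-
-As an illustration, wrapping a single linear layer could look like the following minimal sketch (the parameter names `p` and `block_size` are assumed from the linked `quant_noise.py`; block sizes and `input_features` must be compatible per the size checks mentioned above):
-
-```python
-import torch.nn as nn
-from fairseq.modules.quant_noise import quant_noise
-
-# During training, random blocks of this layer's weight matrix are dropped
-# with probability p=0.1, using blocks of size 8 (as recommended above).
-layer = quant_noise(nn.Linear(512, 512), p=0.1, block_size=8)
-```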
-
-Quant-Noise can also be combined with **LayerDrop** (see [here](https://github.com/pytorch/fairseq/tree/main/examples/layerdrop)) to add its pruning effect to the quantized model and make the model even smaller. We recommend training with LayerDrop 0.1 or 0.2.
-
-#### Quantization
-
-We implement an improved version of product quantization from Stock et al, **iPQ**, described [here](https://arxiv.org/abs/1907.05686), see code with old API [here](https://github.com/facebookresearch/kill-the-bits). Note that we improved the iPQ API in terms of both compute speed and usability as described below.
-
-For the particular case of PQ, quantization is made sequentially. We recommend first quantizing the FFNs, then the EMBs, and finally the ATTNs. Quantization is done in two sub-steps:
-- First, perform `n` steps of Product Quantization (generally `n=20` is enough).
-- Then, finetune the obtained centroids.
-
-#### Integration with your own code
-
-Looking to quantize your own models with Quant-Noise + iPQ?
-- First wrap your modules with the `quant_noise` function [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py), which is module-agnostic and train your favorite model.
-- Then, quantize your trained model using the code [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/pq). This can be done *without any changes to your training loop*. Below is an example code for integration.
-Note that we tried our approach only on Transformers and various Convolutional Models such as EfficientNets.
-
-```python
-from fairseq.modules.quantization.pq import quantize_model_, SizeTracker
-
-# get configuration parameters
-n_centroids_config = config["n_centroids"]
-block_sizes_config = config["block_sizes"]
-layers_to_quantize = config["layers_to_quantize"]
-
-# size tracker for keeping track of assignments, centroids and non-compressed sizes
-size_tracker = SizeTracker(model)
-
-# Quantize model by stages
-for step in range(len(layers_to_quantize)):
-
- # quantize model in-place
- quantized_layers = quantize_model_(
- model,
- size_tracker,
- layers_to_quantize,
- block_sizes_config,
- n_centroids_config,
- step=step,
- )
- logger.info(f"Finetuning stage {step}, quantized layers: {quantized_layers}")
- logger.info(f"{size_tracker}")
-
- # Don't forget to re-create/update trainer/optimizer since model parameters have changed
- optimizer = ...
-
- # Finetune the centroids with your usual training loop for a few epochs
- trainer.train_epoch()
-```
-
-
-## Looking to reproduce the NLP results in the paper?
-
-We detail below how to reproduce the state-of-the-art results reported in the paper for Quant-Noise + Iterative Product Quantization.
-
-### Training with Quant-Noise
-
-To **train** RoBERTa + QuantNoise, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/roberta).
-The following command can be used to train a RoBERTa Base + QuantNoise model:
-
-```bash
-TOTAL_UPDATES=125000
-WARMUP_UPDATES=10000
-PEAK_LR=0.0005
-TOKENS_PER_SAMPLE=512
-MAX_POSITIONS=512
-MAX_SENTENCES=16
-UPDATE_FREQ=2
-DATA_DIR=/path/to/data/here
-
-fairseq-train $DATA_DIR \
- --task masked_lm --criterion masked_lm --arch roberta_base \
- --sample-break-mode complete \
- --tokens-per-sample $TOKENS_PER_SAMPLE --max-positions $MAX_POSITIONS \
- --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-6 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $PEAK_LR \
- --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_UPDATES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.01 \
- --batch-size $MAX_SENTENCES \
- --update-freq $UPDATE_FREQ --max-update $TOTAL_UPDATES \
- --save-dir checkpoint/roberta \
- --ddp-backend legacy_ddp --encoder-layerdrop 0.2 \
- --quant-noise-pq 0.2 --quant-noise-pq-block-size 8 --untie-weights-roberta
-```
-
-To **finetune** RoBERTa + QuantNoise, we followed this setting [here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.glue.md).
-The following command can be used to finetune a RoBERTa Base + QuantNoise model on the RTE dataset:
-
-```bash
-TOTAL_NUM_UPDATES=2036
-WARMUP_UPDATES=122
-LR=2e-05
-NUM_CLASSES=2
-MAX_SENTENCES=16
-ROBERTA_PATH=/path/to/roberta_quantnoise/model.pt
-
-fairseq-train /path/to/rte/data/ \
- --restore-file $ROBERTA_PATH \
- --max-positions 512 \
- --batch-size $MAX_SENTENCES \
- --max-tokens 4400 \
- --task sentence_prediction \
- --reset-optimizer --reset-dataloader --reset-meters \
- --required-batch-size-multiple 1 \
- --init-token 0 --separator-token 2 \
- --arch roberta_large \
- --criterion sentence_prediction \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --max-epoch 10 \
- --find-unused-parameters \
- --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
- --ddp-backend legacy_ddp \
- --quant-noise-pq 0.2 --quant-noise-pq-block-size 8
-```
-
-To **train** Language Models on Wikitext-103, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model).
-The following command can be used to train a Transformer + QuantNoise model on Wikitext-103:
-
-```bash
-fairseq-train --task language_modeling /path/to/wikitext-103/data \
- --save-dir checkpoints/transformer_wikitext-103 \
- --adaptive-input --adaptive-input-cutoff 20000,60000 --adaptive-input-factor 4 \
- --adaptive-softmax-cutoff 20000,60000 --adaptive-softmax-dropout 0.2 --adaptive-softmax-factor 4.0 \
- --tie-adaptive-proj --tie-adaptive-weights \
- --arch transformer_lm_gbw \
- --attention-dropout 0.1 --dropout 0.2 --relu-dropout 0.1 \
- --clip-norm 0.1 --criterion adaptive_loss \
- --ddp-backend legacy_ddp \
- --decoder-attention-heads 8 --decoder-embed-dim 1024 --decoder-ffn-embed-dim 4096 --decoder-input-dim 1024 \
- --decoder-layers 16 --decoder-normalize-before --decoder-output-dim 1024 \
- --min-lr 0.0001 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --lr 1.0 --t-mult 2.0 \
- --max-tokens 3072 --tokens-per-sample 3072 --momentum 0.99 --optimizer nag \
- --sample-break-mode none --update-freq 3 \
- --warmup-init-lr 1e-07 --warmup-updates 16000 \
- --weight-decay 0 --seed 1 --stop-min-lr 1e-09 \
- --quant-noise-pq 0.05 --quant-noise-pq-block-size 8
-```
-
-To **evaluate** this model, note you need to use the `eval.py` script. The following command can be used to evaluate:
-
-```bash
-fairseq-eval-lm /path/to/wikitext-103/data --path /path/to/model/checkpoint \
- --sample-break-mode complete \
- --max-tokens 3072 \
- --context-window 2560 \
- --softmax-batch 1024 \
- --gen-subset valid
-```
-and change the `--gen-subset` to `test` if you would like to evaluate on the test set instead.
-
-
-### Iterative Product Quantization
-
-To quantize the finetuned RoBERTa model, we use this command on 1 GPU. This should run in a day.
-```bash
-TOTAL_NUM_UPDATES=6108 # 2036 updates for each iteration
-WARMUP_UPDATES=122
-LR=2e-05
-NUM_CLASSES=2
-MAX_SENTENCES=16
-fairseq-train --task sentence_prediction /path/to/data/ \
- --restore-file $ROBERTA_PATH \
- --save-dir checkpoints/roberta_finetuned \
- --max-positions 512 \
- --batch-size $MAX_SENTENCES \
- --max-tokens 4400 \
- --init-token 0 --separator-token 2 \
- --arch roberta_large \
- --criterion sentence_prediction \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
- --clip-norm 0.0 --lr-scheduler polynomial_decay \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --no-progress-bar --skip-invalid-size-inputs-valid-test --ddp-backend legacy_ddp \
- --quantization-config-path /path/to/config/yaml
-```
-
-To quantize the trained Language Model, we use this command on 8 V100 23GB GPUs. This should run in a couple of hours.
-```bash
-fairseq-train --task language_modeling /path/to/wikitext-103/data \
- --save-dir checkpoints/transformer_wikitext-103 \
- --adaptive-input --adaptive-input-cutoff 20000,60000 --adaptive-input-factor 4 \
- --adaptive-softmax-cutoff 20000,60000 --adaptive-softmax-dropout 0.2 --adaptive-softmax-factor 4.0 \
- --arch transformer_lm_gbw \
- --attention-dropout 0.1 --dropout 0.2 --relu-dropout 0.1 \
- --bucket-cap-mb 25 --char-embedder-highway-layers 2 --character-embedding-dim 4 \
- --clip-norm 0.1 --criterion adaptive_loss \
- --ddp-backend legacy_ddp \
- --decoder-attention-heads 8 --decoder-embed-dim 1024 --decoder-ffn-embed-dim 4096 --decoder-input-dim 1024 --decoder-layers 16 --decoder-normalize-before --decoder-output-dim 1024 \
- --fp16 --keep-last-epochs -1 \
- --min-lr 0.0001 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --lr 0.05 --stop-min-lr 1e-09 \
- --max-tokens 2944 --tokens-per-sample 2944 \
- --momentum 0.99 --no-epoch-checkpoints --no-progress-bar --optimizer nag --required-batch-size-multiple 8 \
- --sample-break-mode none --t-mult 2.0 --skip-invalid-size-inputs-valid-test \
- --tie-adaptive-proj --tie-adaptive-weights --update-freq 3 --weight-decay 0 --seed 1 \
- --log-interval 100 --no-progress-bar --skip-invalid-size-inputs-valid-test \
- --restore-file path/to/trained/lm/with/quant/noise \
- --max-update 13500 --quantization-config-path /path/to/config/yaml
-```
-If you have less capacity or if your distributed training freezes, try reducing `--max-tokens` and `--tokens-per-sample` (this may reduce the quantized accuracy a bit).
-
-### Remarks
-
-We try to keep the open-sourced code as readable and as easy-to-plug as possible. Therefore, we did not test it for the following cases:
-- Scalar quantization with RoBERTa.
-- Quantization with iPQ and `int8` combined.
-
-If you have trouble adapting it, we will be more than happy to help!
-
-## Looking to reproduce the Vision results in the paper?
-
-We are working on open sourcing our code as part of ClassyVision. Please check back.
-
-
-## Having an issue or have a question?
-
-Please open an issue in this repository with the details of your question. Thanks!
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py
deleted file mode 100644
index 7aced08d38301b98b19e2df7d19f1c61150107bc..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import torch
-from examples.textless_nlp.gslm.unit2speech.tacotron2.model import Tacotron2
-from examples.textless_nlp.gslm.unit2speech.tacotron2.waveglow_denoiser import (
- Denoiser,
-)
-
-
-def load_quantized_audio_from_file(file_path):
- base_fname_batch, quantized_units_batch = [], []
- with open(file_path) as f:
- for line in f:
- base_fname, quantized_units_str = line.rstrip().split("|")
- quantized_units = [int(q) for q in quantized_units_str.split(" ")]
- base_fname_batch.append(base_fname)
- quantized_units_batch.append(quantized_units)
- return base_fname_batch, quantized_units_batch
-
-
-def synthesize_audio(model, waveglow, denoiser, inp, lab=None, strength=0.0):
- assert inp.size(0) == 1
- inp = inp.cuda()
- if lab is not None:
- lab = torch.LongTensor(1).cuda().fill_(lab)
-
- with torch.no_grad():
- _, mel, _, ali, has_eos = model.inference(inp, lab, ret_has_eos=True)
- aud = waveglow.infer(mel, sigma=0.666)
- aud_dn = denoiser(aud, strength=strength).squeeze(1)
- return mel, aud, aud_dn, has_eos
-
-
-def load_tacotron(tacotron_model_path, max_decoder_steps):
- ckpt_dict = torch.load(tacotron_model_path)
- hparams = ckpt_dict["hparams"]
- hparams.max_decoder_steps = max_decoder_steps
- sr = hparams.sampling_rate
- model = Tacotron2(hparams)
- model.load_state_dict(ckpt_dict["model_dict"])
- model = model.cuda().eval().half()
- return model, sr, hparams
-
-
-def load_waveglow(waveglow_path):
- waveglow = torch.load(waveglow_path)["model"]
- waveglow = waveglow.cuda().eval().half()
- for k in waveglow.convinv:
- k.float()
- denoiser = Denoiser(waveglow)
- return waveglow, denoiser
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/hubert_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/hubert_dataset.py
deleted file mode 100644
index f00fe301a64a8740ed3ce07e44f6774edb933926..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/hubert_dataset.py
+++ /dev/null
@@ -1,358 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import itertools
-import logging
-import os
-import sys
-from typing import Any, List, Optional, Union
-
-import numpy as np
-
-import torch
-import torch.nn.functional as F
-from fairseq.data import data_utils
-from fairseq.data.fairseq_dataset import FairseqDataset
-
-logger = logging.getLogger(__name__)
-
-
-def load_audio(manifest_path, max_keep, min_keep):
- n_long, n_short = 0, 0
- names, inds, sizes = [], [], []
- with open(manifest_path) as f:
- root = f.readline().strip()
- for ind, line in enumerate(f):
- items = line.strip().split("\t")
- assert len(items) == 2, line
- sz = int(items[1])
- if min_keep is not None and sz < min_keep:
- n_short += 1
- elif max_keep is not None and sz > max_keep:
- n_long += 1
- else:
- names.append(items[0])
- inds.append(ind)
- sizes.append(sz)
- tot = ind + 1
- logger.info(
- (
- f"max_keep={max_keep}, min_keep={min_keep}, "
- f"loaded {len(names)}, skipped {n_short} short and {n_long} long, "
- f"longest-loaded={max(sizes)}, shortest-loaded={min(sizes)}"
- )
- )
- return root, names, inds, tot, sizes
-
-
-def load_label(label_path, inds, tot):
- with open(label_path) as f:
- labels = [line.rstrip() for line in f]
- assert (
- len(labels) == tot
- ), f"number of labels does not match ({len(labels)} != {tot})"
- labels = [labels[i] for i in inds]
- return labels
-
-
-def load_label_offset(label_path, inds, tot):
- with open(label_path) as f:
- code_lengths = [len(line.encode("utf-8")) for line in f]
- assert (
- len(code_lengths) == tot
- ), f"number of labels does not match ({len(code_lengths)} != {tot})"
- offsets = list(itertools.accumulate([0] + code_lengths))
- offsets = [(offsets[i], offsets[i + 1]) for i in inds]
- return offsets
-
-
-def verify_label_lengths(
- audio_sizes,
- audio_rate,
- label_path,
- label_rate,
- inds,
- tot,
- tol=0.1, # tolerance in seconds
-):
- if label_rate < 0:
- logger.info(f"{label_path} is sequence label. skipped")
- return
-
- with open(label_path) as f:
- lengths = [len(line.rstrip().split()) for line in f]
- assert len(lengths) == tot
- lengths = [lengths[i] for i in inds]
- num_invalid = 0
- for i, ind in enumerate(inds):
- dur_from_audio = audio_sizes[i] / audio_rate
- dur_from_label = lengths[i] / label_rate
- if abs(dur_from_audio - dur_from_label) > tol:
- logger.warning(
- (
- f"audio and label duration differ too much "
- f"(|{dur_from_audio} - {dur_from_label}| > {tol}) "
- f"in line {ind+1} of {label_path}. Check if `label_rate` "
- f"is correctly set (currently {label_rate}). "
- f"num. of samples = {audio_sizes[i]}; "
- f"label length = {lengths[i]}"
- )
- )
- num_invalid += 1
- if num_invalid > 0:
- logger.warning(
- f"total {num_invalid} (audio, label) pairs with mismatched lengths"
- )
-
-
-class HubertDataset(FairseqDataset):
- def __init__(
- self,
- manifest_path: str,
- sample_rate: float,
- label_paths: List[str],
- label_rates: Union[List[float], float], # -1 for sequence labels
- pad_list: List[str],
- eos_list: List[str],
- label_processors: Optional[List[Any]] = None,
- max_keep_sample_size: Optional[int] = None,
- min_keep_sample_size: Optional[int] = None,
- max_sample_size: Optional[int] = None,
- shuffle: bool = True,
- pad_audio: bool = False,
- normalize: bool = False,
- store_labels: bool = True,
- random_crop: bool = False,
- single_target: bool = False,
- ):
- self.audio_root, self.audio_names, inds, tot, self.sizes = load_audio(
- manifest_path, max_keep_sample_size, min_keep_sample_size
- )
- self.sample_rate = sample_rate
- self.shuffle = shuffle
- self.random_crop = random_crop
-
- self.num_labels = len(label_paths)
- self.pad_list = pad_list
- self.eos_list = eos_list
- self.label_processors = label_processors
- self.single_target = single_target
- self.label_rates = (
- [label_rates for _ in range(len(label_paths))]
- if isinstance(label_rates, int)
- else label_rates
- )
- self.store_labels = store_labels
- if store_labels:
- self.label_list = [load_label(p, inds, tot) for p in label_paths]
- else:
- self.label_paths = label_paths
- self.label_offsets_list = [
- load_label_offset(p, inds, tot) for p in label_paths
- ]
- assert (
- label_processors is None
- or len(label_processors) == self.num_labels
- )
- for label_path, label_rate in zip(label_paths, self.label_rates):
- verify_label_lengths(
- self.sizes, sample_rate, label_path, label_rate, inds, tot
- )
-
- self.max_sample_size = (
- max_sample_size if max_sample_size is not None else sys.maxsize
- )
- self.pad_audio = pad_audio
- self.normalize = normalize
- logger.info(
- f"pad_audio={pad_audio}, random_crop={random_crop}, "
- f"normalize={normalize}, max_sample_size={self.max_sample_size}"
- )
-
- def get_audio(self, index):
- import soundfile as sf
-
- wav_path = os.path.join(self.audio_root, self.audio_names[index])
- wav, cur_sample_rate = sf.read(wav_path)
- wav = torch.from_numpy(wav).float()
- wav = self.postprocess(wav, cur_sample_rate)
- return wav
-
- def get_label(self, index, label_idx):
- if self.store_labels:
- label = self.label_list[label_idx][index]
- else:
- with open(self.label_paths[label_idx]) as f:
- offset_s, offset_e = self.label_offsets_list[label_idx][index]
- f.seek(offset_s)
- label = f.read(offset_e - offset_s)
-
- if self.label_processors is not None:
- label = self.label_processors[label_idx](label)
- return label
-
- def get_labels(self, index):
- return [self.get_label(index, i) for i in range(self.num_labels)]
-
- def __getitem__(self, index):
- wav = self.get_audio(index)
- labels = self.get_labels(index)
- return {"id": index, "source": wav, "label_list": labels}
-
- def __len__(self):
- return len(self.sizes)
-
- def crop_to_max_size(self, wav, target_size):
- size = len(wav)
- diff = size - target_size
- if diff <= 0:
- return wav, 0
-
- start, end = 0, target_size
- if self.random_crop:
- start = np.random.randint(0, diff + 1)
- end = size - diff + start
- return wav[start:end], start
-
- def collater(self, samples):
- # target = max(sizes) -> random_crop not used
- # target = max_sample_size -> random_crop used for long
- samples = [s for s in samples if s["source"] is not None]
- if len(samples) == 0:
- return {}
-
- audios = [s["source"] for s in samples]
- audio_sizes = [len(s) for s in audios]
- if self.pad_audio:
- audio_size = min(max(audio_sizes), self.max_sample_size)
- else:
- audio_size = min(min(audio_sizes), self.max_sample_size)
- collated_audios, padding_mask, audio_starts = self.collater_audio(
- audios, audio_size
- )
-
- targets_by_label = [
- [s["label_list"][i] for s in samples]
- for i in range(self.num_labels)
- ]
- targets_list, lengths_list, ntokens_list = self.collater_label(
- targets_by_label, audio_size, audio_starts
- )
-
- net_input = {"source": collated_audios, "padding_mask": padding_mask}
- batch = {
- "id": torch.LongTensor([s["id"] for s in samples]),
- "net_input": net_input,
- }
-
- if self.single_target:
- batch["target_lengths"] = lengths_list[0]
- batch["ntokens"] = ntokens_list[0]
- batch["target"] = targets_list[0]
- else:
- batch["target_lengths_list"] = lengths_list
- batch["ntokens_list"] = ntokens_list
- batch["target_list"] = targets_list
- return batch
-
- def collater_audio(self, audios, audio_size):
- collated_audios = audios[0].new_zeros(len(audios), audio_size)
- padding_mask = (
- torch.BoolTensor(collated_audios.shape).fill_(False)
- # if self.pad_audio else None
- )
- audio_starts = [0 for _ in audios]
- for i, audio in enumerate(audios):
- diff = len(audio) - audio_size
- if diff == 0:
- collated_audios[i] = audio
- elif diff < 0:
- assert self.pad_audio
- collated_audios[i] = torch.cat(
- [audio, audio.new_full((-diff,), 0.0)]
- )
- padding_mask[i, diff:] = True
- else:
- collated_audios[i], audio_starts[i] = self.crop_to_max_size(
- audio, audio_size
- )
- return collated_audios, padding_mask, audio_starts
-
- def collater_frm_label(
- self, targets, audio_size, audio_starts, label_rate, pad
- ):
- assert label_rate > 0
- s2f = label_rate / self.sample_rate
- frm_starts = [int(round(s * s2f)) for s in audio_starts]
- frm_size = int(round(audio_size * s2f))
- if not self.pad_audio:
- rem_size = [len(t) - s for t, s in zip(targets, frm_starts)]
- frm_size = min(frm_size, *rem_size)
- targets = [t[s: s + frm_size] for t, s in zip(targets, frm_starts)]
- logger.debug(f"audio_starts={audio_starts}")
- logger.debug(f"frame_starts={frm_starts}")
- logger.debug(f"frame_size={frm_size}")
-
- lengths = torch.LongTensor([len(t) for t in targets])
- ntokens = lengths.sum().item()
- targets = data_utils.collate_tokens(
- targets, pad_idx=pad, left_pad=False
- )
- return targets, lengths, ntokens
-
- def collater_seq_label(self, targets, pad):
- lengths = torch.LongTensor([len(t) for t in targets])
- ntokens = lengths.sum().item()
- targets = data_utils.collate_tokens(
- targets, pad_idx=pad, left_pad=False
- )
- return targets, lengths, ntokens
-
- def collater_label(self, targets_by_label, audio_size, audio_starts):
- targets_list, lengths_list, ntokens_list = [], [], []
- itr = zip(targets_by_label, self.label_rates, self.pad_list)
- for targets, label_rate, pad in itr:
- if label_rate == -1:
- targets, lengths, ntokens = self.collater_seq_label(
- targets, pad
- )
- else:
- targets, lengths, ntokens = self.collater_frm_label(
- targets, audio_size, audio_starts, label_rate, pad
- )
- targets_list.append(targets)
- lengths_list.append(lengths)
- ntokens_list.append(ntokens)
- return targets_list, lengths_list, ntokens_list
-
- def num_tokens(self, index):
- return self.size(index)
-
- def size(self, index):
- if self.pad_audio:
- return self.sizes[index]
- return min(self.sizes[index], self.max_sample_size)
-
- def ordered_indices(self):
- if self.shuffle:
- order = [np.random.permutation(len(self))]
- else:
- order = [np.arange(len(self))]
-
- order.append(self.sizes)
- return np.lexsort(order)[::-1]
-
- def postprocess(self, wav, cur_sample_rate):
- if wav.dim() == 2:
- wav = wav.mean(-1)
- assert wav.dim() == 1, wav.dim()
-
- if cur_sample_rate != self.sample_rate:
- raise Exception(f"sr {cur_sample_rate} != {self.sample_rate}")
-
- if self.normalize:
- with torch.no_grad():
- wav = F.layer_norm(wav, wav.shape)
- return wav
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/replace_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/replace_dataset.py
deleted file mode 100644
index 5aac2ba96bee0a8bb65f4c9e56fa0b17248ee1d9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/replace_dataset.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class ReplaceDataset(BaseWrapperDataset):
- """Replaces tokens found in the dataset by a specified replacement token
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset to replace tokens in
- replace_map(Dictionary[int,int]): map of token to replace -> replacement token
- offsets (List[int]): do not replace tokens before (from left if pos, right if neg) this offset. should be
- as many as the number of objects returned by the underlying dataset __getitem__ method.
- """
-
- def __init__(self, dataset, replace_map, offsets):
- super().__init__(dataset)
- assert len(replace_map) > 0
- self.replace_map = replace_map
- self.offsets = offsets
-
- def __getitem__(self, index):
- item = self.dataset[index]
- is_tuple = isinstance(item, tuple)
- srcs = item if is_tuple else [item]
-
- for offset, src in zip(self.offsets, srcs):
- for k, v in self.replace_map.items():
- src_off = src[offset:] if offset >= 0 else src[:offset]
- src_off.masked_fill_(src_off == k, v)
-
- item = srcs if is_tuple else srcs[0]
- return item
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/emformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/emformer.py
deleted file mode 100644
index 6ef76bd012ba40b0395fec2ca9ae9e9c136ffe40..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/emformer.py
+++ /dev/null
@@ -1,1837 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-
-import math
-import re
-from functools import partial
-from typing import List, Optional, Tuple
-
-import torch
-import torch.nn as nn
-from fairseq.models import (
- FairseqEncoder,
-)
-from fairseq.models.speech_to_text.utils import (
- NoOp,
- lengths_to_padding_mask,
- segments_to_sequence,
-)
-from fairseq.models.speech_to_text.utils import (
- attention_suppression,
- layer_norm_backward_hook,
-)
-from torch import Tensor, device as Device
-from torch.quantization.qconfig import (
- default_dynamic_qconfig,
- per_channel_dynamic_qconfig,
-)
-
-
-class RelativePositionEmbedding(nn.Module):
- """
- Implementation according to https://arxiv.org/abs/1803.02155
- """
-
- def __init__(self, head_dim, max_position, norm_init=True):
- super().__init__()
- self.head_dim = head_dim
- self.max_position = max_position
- self.embeddings = nn.Parameter(torch.Tensor(max_position * 2 + 1, head_dim))
- if norm_init:
- nn.init.xavier_normal_(self.embeddings)
- else:
- nn.init.xavier_uniform_(self.embeddings)
-
- def forward(self, input: Tensor):
- output = nn.functional.embedding(input.long(), self.embeddings)
- return output
-
-
-class Fp32LayerNorm(nn.Module):
- def __init__(
- self,
- input_dim,
- clamp_grad=True,
- max_grad_value=256,
- eps=1e-5,
- elementwise_affine=True,
- ):
- super().__init__()
- self.torch_module = torch.nn.LayerNorm(
- input_dim, eps=eps, elementwise_affine=elementwise_affine
- )
- if clamp_grad:
- hook = partial(layer_norm_backward_hook, clamp_value=max_grad_value)
- self.torch_module.register_backward_hook(hook)
-
- def forward(self, input):
- output = torch.nn.functional.layer_norm(
- input.float(),
- self.torch_module.normalized_shape,
- self.torch_module.weight.float()
- if self.torch_module.weight is not None
- else None,
- self.torch_module.bias.float()
- if self.torch_module.bias is not None
- else None,
- self.torch_module.eps,
- ).type_as(input)
- return output
-
-
-# ------------------------------------------------------------------------------
-# PositionwiseFF
-# ------------------------------------------------------------------------------
-
-
-class PositionwiseFF(nn.Module):
- """
- FFN layer in transformer.
-
- Args:
- input_dim: input embedding dimension
- ffn_dim: FFN layer inner dimension
- dropout_on_fc1: dropout for first linear layer
- dropout_on_fc2: dropout for second linear layer
- activation_fn: activation function used after first linear layer. \
- Only relu or gelu is supported.
-
- """
-
- def __init__(
- self, input_dim, ffn_dim, dropout_on_fc1, dropout_on_fc2, activation_fn
- ):
- super(PositionwiseFF, self).__init__()
-
- self.input_dim = input_dim
- self.ffn_dim = ffn_dim
- if activation_fn == "relu":
- ac = nn.ReLU()
- elif activation_fn == "gelu":
- ac = nn.GELU()
- else:
- raise ValueError("Unsupported activation_fn = ({})".format(activation_fn))
-
- # fc1 -> ac -> dropout -> fc2 -> dropout
- self.module = nn.Sequential(
- nn.Linear(input_dim, ffn_dim),
- ac,
- nn.Dropout(dropout_on_fc1),
- nn.Linear(ffn_dim, input_dim),
- nn.Dropout(dropout_on_fc2),
- )
-
- self.layer_norm = Fp32LayerNorm(input_dim)
-
- def forward(self, input):
- module_out = self.module(self.layer_norm(input))
- output = module_out + input
-
- return output
-
- def quantize_(self, params=None):
- if params and "per_channel" in params and params["per_channel"]:
- qconfig = per_channel_dynamic_qconfig
- else:
- qconfig = default_dynamic_qconfig
- torch.quantization.quantize_dynamic(
- self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True
- )
- return self
-
-
-# ------------------------------------------------------------------------------
-# SummarizationLayer
-# ------------------------------------------------------------------------------
-
-
-class SummarizationLayer(nn.Module):
- def __init__(self, method, segment_size, embedding_dim):
- super(SummarizationLayer, self).__init__()
- self.segment_size = segment_size
- self.embedding_dim = embedding_dim
- nonlin_match = re.match(r"nonlinear\((?P<act>[a-z]+),(?P<dim>[0-9]+)\)", method)
- self.method = method
- if method == "mean":
- self.module = nn.AvgPool1d(
- kernel_size=segment_size,
- stride=segment_size,
- ceil_mode=True,
- )
- elif method == "max":
- self.module = nn.MaxPool1d(
- kernel_size=segment_size,
- stride=segment_size,
- ceil_mode=True,
- )
- elif method == "linear":
- self.module = nn.Linear(segment_size, 1)
- elif nonlin_match:
- nonlin_args = nonlin_match.groupdict()
- act_type = nonlin_args["act"]
- hid_dim = int(nonlin_args["dim"])
- if act_type == "relu":
- act = nn.ReLU()
- elif act_type == "gelu":
- act = nn.GELU()
- else:
- raise ValueError("Unsupported activation_fn = ({})".format(act_type))
- self.module = nn.Sequential(
- nn.Linear(segment_size, hid_dim),
- act,
- nn.Linear(hid_dim, 1),
- )
- else:
- raise ValueError("Unsupported summarization method = ({})".format(method))
-
- def forward(self, input):
- # T, B, D -> B, D, T
- input = input.permute(1, 2, 0)
-
- if self.method == "mean" or self.method == "max":
- output = self.module(input)
- output = output.permute(2, 0, 1)
- return output
-
- full_seg_length = input.size(2) // self.segment_size * self.segment_size
- if full_seg_length > 0:
- # at least one seg is full
- B = input.size(0)
- D = input.size(1)
- input_todo = (
- input[:, :, :full_seg_length]
- .contiguous()
- .view(B, -1, self.segment_size)
- )
- output = self.module(input_todo)
- output = output.view(B, D, -1)
- else:
- output = input.new_zeros(input.size(0), input.size(1), 0)
- left = input.size(2) - full_seg_length
- if left > 0:
- # when last seg is not full, use zeros as last memory placeholder
- zeros = input.new_zeros(input.size(0), input.size(1), 1)
- output = torch.cat([output, zeros], dim=2)
- output = output.permute(2, 0, 1)
- return output
-
-
-# ------------------------------------------------------------------------------
-# NoSegAugmentedMemoryMultiheadAttentionBmm
-# ------------------------------------------------------------------------------
-
-
-class NoSegAugmentedMemoryMultiheadAttentionBmm(nn.Module):
- """
- Whole utterance augmented memory multihead attention using BMM.
-
- Different from previous augmented memory multihead attention, where
- the utterance is chunked into segments, here we use an attention mask
- to achieve the same effect. The input embedding [right_context, utterance, summary]
- is a concatenation of right context, utterance and summary.
-
- Right context block is the concatenation of all the right context for
- each segments. [right_context_0, right_context_1, ..., right_context_n]
- For example, if we have utterance = [v0, v1, v2, ...., v20]. segment
- size 8, right_context size 4. Then the right context blocks =
- [v8, v9, v10, v11, v16, v17, v18, v19, 0, 0, 0, 0], where v8, v9, v10,
- and v11 are the right context for first segment. v16, v17, v18 and v19
- are the right context for second segment. 0, 0, 0 and 0 are right context
- for the last segment.
-
- utterance is corresponding to input embedding sequence
-
- summary is concatenation of average of each segments. [summary_0,
- summary_1, ..., ].
-
- In augmented memory multihead attention, the query is [right_context,
- utterance, summary], key is [memory, right_context, utterance]. Different
- from AugmentedMemoryMultiheadAttentionBmm, memory here is passed from
- previous attention layer. For the first attention layer, memory is average
- of each segment.
-
- Memory is a concatenation of memory from each segments in previous attention
- layer. For example, current layer is i, then memory is [m_0, m_1, ..., m_n].
- Each m_k is the output from seg_k in layer i-1.
-
- args:
- input_dim: input embedding dimension
- num_heads: number of heads in multihead self-attention
- dropout: attention dropout
- std_scale: if std_scale is not None, weak attention suppression is
- turned on. For std_scale = 0.5, all attention weights smaller than
- mean + 0.5 * std will be suppressed.
- scaled_init: whether to use scaled init for linear weight
- tanh_on_mem: whether to use tanh on memory output
- use_mem: whether to use memory or not. When max_memory_size is 0, then
- we don't have memory anymore.
- layer_index: current self-attention layer index that is used in depth
- initialization
- max_relative_position: max relative position used in relative position
- embedding
- rpe_old_option: To be compatible with previous model. The previous model
- was trained with attention += attention + rpe. The correct equation
- should be attention = attention + rpe
-
- """
-
- def __init__(
- self,
- input_dim,
- num_heads,
- dropout=0.0,
- std_scale=None,
- scaled_init=False,
- tanh_on_mem=False,
- use_mem=True,
- mini_batches=False,
- negative_inf="-inf",
- layer_index=-1,
- max_relative_position=0,
- rpe_old_option=True,
- ):
- if input_dim % num_heads:
- raise ValueError(
- "input_dim ({}) must be divisible by num_heads ({})".format(
- input_dim, num_heads
- )
- )
-
- super().__init__()
-
- embed_dim = input_dim
- self.e2h_kv = torch.nn.Linear(input_dim, 2 * input_dim, bias=True)
- self.e2h_q = torch.nn.Linear(input_dim, input_dim, bias=True)
- self.rpe_old_option = rpe_old_option
- if max_relative_position > 0:
- self.use_rpe = True
- self.rpe_k = RelativePositionEmbedding(
- head_dim=input_dim // num_heads,
- max_position=max_relative_position,
- )
- self.rpe_v = RelativePositionEmbedding(
- head_dim=input_dim // num_heads,
- max_position=max_relative_position,
- )
- else:
- self.use_rpe = False
- self.rpe_k = None
- self.rpe_v = None
- if scaled_init:
- if layer_index == -1:
- gain = 1.0 / math.sqrt(2)
- else:
- # https://arxiv.org/abs/2005.09684 depthwise initialization
- # stabilize the training greatly. Use depthwise initialization to
- # replace incremental loss.
- gain = 1.0 / math.sqrt(layer_index + 1)
- torch.nn.init.xavier_uniform_(self.e2h_kv.weight, gain=gain)
- torch.nn.init.xavier_uniform_(self.e2h_q.weight, gain=gain)
-
- self.out_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True)
-
- self.embed_dim = embed_dim
- self.num_heads = num_heads
- self.dropout = dropout
-
- self.head_dim = embed_dim // num_heads
- self.scaling = self.head_dim ** -0.5
-
- self.std_scale = std_scale
- self.use_mem = use_mem
- self.mini_batches = mini_batches
- self.negative_inf = negative_inf
-
- if tanh_on_mem:
- self.squash_mem = torch.tanh
- self.nonlinear_squash_mem = True
- else:
- self.squash_mem = NoOp()
- self.nonlinear_squash_mem = False
-
- def prepare_qkv(
- self,
- input: Tensor,
- mems: Tensor,
- lengths: Tensor,
- summary_length: int,
- lc_length: int,
- ):
- # T: right_context length + utterance_length + summary_length
- T, B, D = input.shape
- mem_length = mems.size(0)
- utterance_length = torch.max(lengths)
-
- right_context_blocks_length = T - utterance_length - summary_length
- rc_block = input[:right_context_blocks_length, :, :]
- utterance_block = input[right_context_blocks_length : T - summary_length, :, :]
-
- if B == 1:
- padding_mask = None
- else:
- klengths = lengths + mem_length + right_context_blocks_length + lc_length
- padding_mask = lengths_to_padding_mask(lengths=klengths)
-
- mem_rc_input = torch.cat([mems, rc_block, utterance_block], dim=0)
-
- # In training lc_length = 0
- key_length = mem_rc_input.size(0) + lc_length
- rc_input_sum = input
- q = self.e2h_q(rc_input_sum)
- kv = self.e2h_kv(mem_rc_input)
- k, v = kv.chunk(chunks=2, dim=2)
- result_qkv = (q, k, v)
- input_shape = (T, B, D)
- result_lengths_info = (
- mem_length,
- utterance_length,
- right_context_blocks_length,
- key_length,
- )
- if padding_mask is not None:
- assert padding_mask.size(0) == B
- assert padding_mask.size(1) == key_length
-
- return result_qkv, input_shape, result_lengths_info, padding_mask
-
- def prepare_attention_weights(
- self,
- q: Tensor,
- new_k: Tensor,
- new_v: Tensor,
- input_shape: Tuple[int, int, int],
- rpe: Optional[Tensor],
- ) -> Tuple[Tensor, Tensor, Tensor]:
- T, B, D = input_shape
- q = (
- q.contiguous().view(-1, B * self.num_heads, self.head_dim).transpose(0, 1)
- * self.scaling
- )
-
- k = (
- new_k.contiguous()
- .view(-1, B * self.num_heads, self.head_dim)
- .transpose(0, 1)
- )
-
- v = (
- new_v.contiguous()
- .view(-1, B * self.num_heads, self.head_dim)
- .transpose(0, 1)
- )
-
- attention_weights = torch.bmm(q, k.transpose(1, 2))
- if self.use_rpe and rpe is not None and self.rpe_v is not None:
- r_k = self.rpe_k(rpe)
- # [q, B*h, d] * [q, k, d] -> [B*h, q, k]
- attention_weights_rpe = torch.matmul(
- q.transpose(0, 1), r_k.transpose(1, 2)
- ).transpose(0, 1)
- attention_weights = attention_weights + attention_weights_rpe
- attention_weights_float = attention_weights.float()
-
- return attention_weights, attention_weights_float, v
-
- def prepare_attention_output(
- self,
- attention_weights: Tensor,
- attention_weights_float: Tensor,
- v: Tensor,
- input_shape: Tuple[int, int, int],
- key_length: int,
- padding_mask: Optional[Tensor],
- rpe: Optional[Tensor],
- ) -> Tensor:
- T, B, D = input_shape
- if padding_mask is not None:
- attention_weights_float = attention_weights_float.view(
- B, self.num_heads, T, key_length
- )
- attention_weights_float = attention_weights_float.masked_fill(
- padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), float("-inf")
- )
- attention_weights_float = attention_weights_float.view(
- B * self.num_heads, T, key_length
- )
-
- if self.std_scale is not None:
- attention_weights_float = attention_suppression(
- attention_weights_float, self.std_scale
- )
-
- attention_weights_float = torch.nn.functional.softmax(
- attention_weights_float, dim=-1
- )
- attention_weights = attention_weights_float.type_as(attention_weights)
-
- attention_probs = torch.nn.functional.dropout(
- attention_weights, p=self.dropout, training=self.training
- )
-
- # [B*n_head, T, key_length] x [B*n_head, key_length, d_head]
- # -> [B*n_head, T, d_head]
- attention = torch.bmm(attention_probs, v)
- if self.use_rpe and rpe is not None and self.rpe_v is not None:
- r_v = self.rpe_v(rpe)
- attention_rpe = torch.matmul(
- attention_probs.transpose(0, 1), r_v
- ).transpose(0, 1)
-
- if self.rpe_old_option:
- attention += attention + attention_rpe
- else:
- attention = attention + attention_rpe
-
- assert list(attention.shape) == [B * self.num_heads, T, self.head_dim]
-
- attention = attention.transpose(0, 1).contiguous().view(T, B, self.embed_dim)
-
- rc_output_memory = self.out_proj(attention)
- return rc_output_memory
-
- @torch.jit.unused
- def forward(
- self,
- input: Tensor,
- lengths: Tensor,
- mems: Tensor,
- attention_mask: Tensor,
- pre_mems: Optional[Tensor] = None,
- left_context_key: Optional[Tensor] = None,
- left_context_val: Optional[Tensor] = None,
- rpe: Optional[Tensor] = None,
- ) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
- """
- forward function for NoSegAugmentedMemoryMultiheadAttentionBmm in training.
-
- args:
- input: formed in the following way
- [right_context_0, right_context_1, ..., seg_0, seg_1,
- ..., summary_0, summary_1,..]
- lengths: the length of query which is [seg_0, seg_1, ....]
- mems: [mem_0, mem_1, ...].
- attention_mask: attention mask for query = [right_context, query, summary]
- key = [mem, right_context, query]. This is only used for training.
-
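- returns:
- rc_output: attention output covering [right_context, utterance]
- next_m: the updated memory vectors
- next_k, next_v: new key/value caches for the left context (None in training)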
- """
- if self.use_mem:
- mem_length = mems.size(0)
- summary_length = mem_length + 1
- if pre_mems is not None:
- mems = torch.cat([pre_mems, mems], dim=0)
- else:
- mem_length = 0
- summary_length = 0
-
- # In training, lc_length = 0
- if left_context_key is not None:
- lc_length = left_context_key.size(0)
- else:
- lc_length = 0
- results = self.prepare_qkv(
- input=input,
- mems=mems,
- lengths=lengths,
- summary_length=summary_length,
- lc_length=lc_length,
- )
- result_qkv, input_shape, result_lengths_info, padding_mask = results
- q, k, v = result_qkv
- (
- mem_length,
- utterance_length,
- right_context_blocks_length,
- key_length,
- ) = result_lengths_info
-
- if left_context_key is not None:
- # add the cache key and value
- new_k = torch.cat(
- [
- k[: mem_length + right_context_blocks_length, :, :],
- left_context_key,
- k[-utterance_length:, :, :],
- ],
- dim=0,
- )
- new_v = torch.cat(
- [
- v[: mem_length + right_context_blocks_length, :, :],
- left_context_val,
- v[-utterance_length:, :, :],
- ],
- dim=0,
- )
- next_k = new_k[mem_length + right_context_blocks_length :, :, :]
- next_v = new_v[mem_length + right_context_blocks_length :, :, :]
- else:
- new_k = k
- new_v = v
- next_k = None
- next_v = None
-
- attention_weights, attention_weights_float, v = self.prepare_attention_weights(
- q=q,
- new_k=new_k,
- new_v=new_v,
- input_shape=input_shape,
- rpe=rpe,
- )
-
- # mask attention
- attention_mask = attention_mask.unsqueeze(0)
- attention_weights_float = attention_weights_float.masked_fill(
- attention_mask, float(self.negative_inf)
- )
-
- rc_output_memory = self.prepare_attention_output(
- attention_weights=attention_weights,
- attention_weights_float=attention_weights_float,
- v=v,
- input_shape=input_shape,
- key_length=key_length,
- padding_mask=padding_mask,
- rpe=rpe,
- )
-
- if self.use_mem:
- # next_m length equals to summary length - 1
- # last memory is ignored
- if self.mini_batches:
- next_m = rc_output_memory[-summary_length:]
- else:
- next_m = rc_output_memory[-summary_length:-1]
-
- next_m = self.squash_mem(next_m)
- # rc and output
- rc_output = rc_output_memory[:-summary_length]
- if not self.nonlinear_squash_mem:
- next_m = torch.clamp(next_m, min=-10, max=10)
- else:
- next_m = mems
- rc_output = rc_output_memory
-
- return rc_output, next_m, next_k, next_v
-
- @torch.jit.export
- def forward_jit(
- self,
- input: Tensor,
- lengths: Tensor,
- mems: Tensor,
- left_context_key: Tensor,
- left_context_val: Tensor,
- rpe: Optional[Tensor],
- ) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
- """
- forward function for NoSegAugmentedMemoryMultiheadAttentionBmm in decoding.
-
- args:
- input: formed in the following way
- [right_context_0, right_context_1, ..., seg_0, seg_1,
- ..., summary_0, summary_1, ...]
- lengths: the lengths of the query segments [seg_0, seg_1, ...]
- mems: [mem_0, mem_1, ...].
- left_context_key: left context for the key part. This is only used for online
- decoding. In training, this is an empty tensor.
- left_context_val: left context for the value part. This is only used for online
- decoding. In training, this is an empty tensor.
-
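- returns:
- rc_output: attention output covering [right_context, utterance]
- next_m: the updated memory vector (or the input mems when use_mem is False)
- next_k, next_v: updated key/value caches for the left context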
- """
- lc_length = left_context_key.size(0)
-
- # In decoding, summary_length = 1 or 0
- if self.use_mem:
- summary_length = 1
- else:
- summary_length = 0
-
- results = self.prepare_qkv(
- input=input,
- mems=mems,
- lengths=lengths,
- summary_length=summary_length,
- lc_length=lc_length,
- )
- result_qkv, input_shape, result_lengths_info, padding_mask = results
- q, k, v = result_qkv
- (
- mem_length,
- utterance_length,
- right_context_blocks_length,
- key_length,
- ) = result_lengths_info
-
- # add the cache key and value
- new_k = torch.cat(
- [
- k[: mem_length + right_context_blocks_length, :, :],
- left_context_key,
- k[-utterance_length:, :, :],
- ],
- dim=0,
- )
- new_v = torch.cat(
- [
- v[: mem_length + right_context_blocks_length, :, :],
- left_context_val,
- v[-utterance_length:, :, :],
- ],
- dim=0,
- )
- next_k = new_k[mem_length + right_context_blocks_length :, :, :]
- next_v = new_v[mem_length + right_context_blocks_length :, :, :]
-
- attention_weights, attention_weights_float, v = self.prepare_attention_weights(
- q=q,
- new_k=new_k,
- new_v=new_v,
- input_shape=input_shape,
- rpe=rpe,
- )
- # In online decoding, we don't have attention mask. But we still need
- # to disable the attention from summary query to memory
- attention_weights_float[:, -1, :mem_length] = float(self.negative_inf)
- rc_output_memory = self.prepare_attention_output(
- attention_weights=attention_weights,
- attention_weights_float=attention_weights_float,
- v=v,
- input_shape=input_shape,
- key_length=key_length,
- padding_mask=padding_mask,
- rpe=rpe,
- )
-
- # In decoding, summary length is 1
- if self.use_mem:
- next_m = rc_output_memory[-1:]
- next_m = self.squash_mem(next_m)
- # rc and output
- rc_output = rc_output_memory[:-1]
- if not self.nonlinear_squash_mem:
- next_m = torch.clamp(next_m, min=-10, max=10)
- else:
- rc_output = rc_output_memory
- # empty tensor as input mems
- next_m = mems
-
- return rc_output, next_m, next_k, next_v
-
- def quantize_(self, params=None):
- if params and "per_channel" in params and params["per_channel"]:
- qconfig = per_channel_dynamic_qconfig
- else:
- qconfig = default_dynamic_qconfig
- torch.quantization.quantize_dynamic(
- self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True
- )
- return self
-
-
-class NoSegAugmentedMemoryTransformer(nn.Module):
- """
- Whole utterance augmented memory transformer.
-
- This is not a pyspeech nn layer. It is used as a module in a master layer where
- multiple transformers are used.
- """
-
- def __init__(
- self,
- input_dim,
- num_heads,
- ffn_dim,
- dropout_in_attn=0.0,
- dropout_on_attn=None,
- dropout_on_fc1=None,
- dropout_on_fc2=None,
- activation_fn="relu",
- tanh_on_mem=False,
- std_scale=None,
- scaled_init=False,
- segment_size=128,
- use_mem=True,
- mini_batches=False,
- negative_inf="-inf",
- layer_index=-1,
- summarization_method="mean",
- max_relative_position=0,
- rpe_old_option=True,
- ):
- super(NoSegAugmentedMemoryTransformer, self).__init__()
-
- self.attention = NoSegAugmentedMemoryMultiheadAttentionBmm(
- input_dim=input_dim,
- num_heads=num_heads,
- dropout=dropout_in_attn,
- scaled_init=scaled_init,
- tanh_on_mem=tanh_on_mem,
- std_scale=std_scale,
- use_mem=use_mem,
- mini_batches=mini_batches,
- negative_inf=negative_inf,
- layer_index=layer_index,
- max_relative_position=max_relative_position,
- )
- self.dropout = nn.Dropout(dropout_on_attn)
- self.pos_ff = PositionwiseFF(
- input_dim=input_dim,
- ffn_dim=ffn_dim,
- dropout_on_fc1=dropout_on_fc1,
- dropout_on_fc2=dropout_on_fc2,
- activation_fn=activation_fn,
- )
- self.layer_norm_pre = Fp32LayerNorm(input_dim)
- self.layer_norm = Fp32LayerNorm(input_dim)
- self.segment_size = segment_size
- self.use_mem = use_mem
-
- self.memory_op = SummarizationLayer(
- summarization_method, segment_size, input_dim
- )
-
- def set_mini_batches(self, mini_batches):
- self.attention.mini_batches = mini_batches
-
- def gen_summary_queries(self, input):
- sum_input = self.memory_op(input)
- return sum_input
-
- def pre_attention_ops(self, input, right_context_blocks):
- rc_length = right_context_blocks.size(0)
- input_length = input.size(0)
-
- rc_and_input = torch.cat([right_context_blocks, input], dim=0)
- residual_input = rc_and_input
- rc_and_input = self.layer_norm_pre(rc_and_input)
-
- query_input = rc_and_input[-input_length:, :, :]
- return rc_length, input_length, residual_input, query_input, rc_and_input
-
- def after_attention_ops(self, attention_output, residual_input):
- output = self.dropout(attention_output)
- output = output + residual_input
- output = self.pos_ff(output)
- output = self.layer_norm(output)
- return output
-
- @torch.jit.export
- def forward_jit(
- self,
- input: Tensor,
- lengths: Tensor,
- mems: Tensor,
- left_context_key: Tensor,
- left_context_val: Tensor,
- right_context_blocks: Tensor,
- rpe: Optional[Tensor],
- ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]:
-
- results = self.pre_attention_ops(input, right_context_blocks)
- rc_length, input_length, residual_input, query_input, rc_and_input = results
-
- # In online decoding, the summary query size is always 1 or 0
- if self.use_mem:
- summary_query = self.gen_summary_queries(query_input)
- summary_query = summary_query[0:1, :, :]
- rc_qu_su = torch.cat([rc_and_input, summary_query], dim=0)
- else:
- rc_qu_su = rc_and_input
-
- rc_output, next_m, next_k, next_v = self.attention.forward_jit(
- input=rc_qu_su,
- lengths=lengths,
- mems=mems,
- left_context_key=left_context_key,
- left_context_val=left_context_val,
- rpe=rpe,
- )
- rc_output = self.after_attention_ops(rc_output, residual_input)
- results = (
- rc_output[-input_length:, :, :],
- next_m,
- rc_output[0:rc_length, :, :],
- next_k,
- next_v,
- )
- return results
-
- @torch.jit.unused
- def forward(
- self,
- input,
- lengths,
- mems,
- right_context_blocks,
- attention_mask,
- pre_mems,
- left_context_key,
- left_context_val,
- rpe,
- ):
-
- results = self.pre_attention_ops(input, right_context_blocks)
- rc_length, input_length, residual_input, query_input, rc_and_input = results
- if self.use_mem:
- summary_query = self.gen_summary_queries(query_input)
- rc_qu_su = torch.cat([rc_and_input, summary_query], dim=0)
- else:
- rc_qu_su = rc_and_input
-
- rc_output, next_m, next_k, next_v = self.attention(
- input=rc_qu_su,
- lengths=lengths,
- mems=mems,
- attention_mask=attention_mask,
- pre_mems=pre_mems,
- left_context_key=left_context_key,
- left_context_val=left_context_val,
- rpe=rpe,
- )
-
- # [TODO] Note: memory did not go through pos_ff. What happens if we pass
- # memory through the pos_ff as well?
- rc_output = self.after_attention_ops(rc_output, residual_input)
- results = (
- rc_output[-input_length:, :, :],
- next_m,
- rc_output[0:rc_length, :, :],
- next_k,
- next_v,
- )
-
- return results
-
-
-class NoSegAugmentedMemoryTransformerEncoderLayer(FairseqEncoder):
- """
- Whole utterance augmented memory transformer encoder layer. This is a master layer
- where we can define multiple augmented memory transformers. There are two reasons
- to set up the master layer.
- 1. We only need to define the attention mask once. All the layers in the master
- layer share the same mask.
- 2. The pyspeech nn layer has a special input and output format. Defining one master
- layer makes it easier to pass memory between the different layers inside it.
-
- args:
- input_dim: input embedding dimension
- num_heads: number of heads in multihead self-attention
- ffn_dim: ffn dimension in FFN layer
- num_layers: number of augmented memory transformer layers
- dropout_in_attn: dropout used in multi-head self-attention
- dropout_on_attn: dropout used for the output from the multihead self-attention
- dropout_on_fc1: dropout used in FFN layer for the first linear layer
- dropout_on_fc2: dropout used in FFN layer for the second linear layer
- segment_size: segment size for each segment
- context_config: (left_context_size, right_context_size) defines the surrounding
- context size for each segment
- max_memory_size: maximum memory size used for each segment
- scaled_init: whether to use scaled init for weight initialization in the attention layer
- std_scale: if std_scale is not None, the weak attention suppression is
- turned on. For std_scale = 0.5, all the attention smaller than
- mean + 0.5 * std will be suppressed.
- activation_fn: activation function used in the FFN layer. [ReLU, GELU] supported
- tanh_on_mem: whether to use tanh on memory
- mini_batches: use mini-batch training
- negative_inf: the negative infinity value used in attention masking. default is "-inf".
- For some situations, e.g. LM, it is better to use "-1e8" to avoid nan issues.
- summarization_method: method to generate the segment summarization embedding
- max_relative_position: max relative position for relative position embedding
- rpe_old_option: kept to be compatible with the previous model. The previous model
- was trained with attention += attention + rpe. The correct equation
- should be attention = attention + rpe
- [TODO]: remove the rpe_old_option by the end of 2021 Q1.
-
- """
-
- def __init__(
- self,
- input_dim,
- num_heads,
- ffn_dim,
- num_layers=1,
- dropout_in_attn=0.0,
- dropout_on_attn=0.0,
- dropout_on_fc1=0.0,
- dropout_on_fc2=0.0,
- segment_size=128,
- context_config=(0, 0),
- max_memory_size=0,
- scaled_init=True,
- std_scale=None,
- activation_fn="relu",
- tanh_on_mem=False,
- mini_batches=False,
- negative_inf="-inf",
- deep_init=True,
- summarization_method="mean",
- max_relative_position=0,
- rpe_old_option=True,
- ):
- super().__init__(None)
- if input_dim % num_heads:
- raise ValueError(
- "input_dim ({}) must be divisible by num_heads ({})".format(
- input_dim, num_heads
- )
- )
-
- # we used to support growing memory size. However, it will cause
- # cross stream batching failure. Now we need to have exact max memory size
- if max_memory_size < 0:
- raise ValueError("max_memory_size must be >= 0")
-
- # Only assign right_context. In decoding, left context will be cached.
- # No need for the online decoder to re-assign the left context
- self.left_context, self.right_context = context_config
- self.segment_size = segment_size
- self.memory_dim = input_dim
- self.max_memory_size = max_memory_size
- self.mini_batches = mini_batches
- if self.max_memory_size != 0:
- self.use_mem = True
- else:
- self.use_mem = False
-
- self.memory_op = SummarizationLayer(
- summarization_method, segment_size, input_dim
- )
-
- self.layers = torch.nn.ModuleList()
- self.num_layers = num_layers
- self.max_relative_position = max_relative_position
- if self.max_relative_position > 0:
- self.use_rpe = True
- else:
- self.use_rpe = False
- for i in range(self.num_layers):
- if deep_init:
- layer_index = i
- else:
- layer_index = -1
-
- self.layers.append(
- NoSegAugmentedMemoryTransformer(
- num_heads=num_heads,
- input_dim=input_dim,
- ffn_dim=ffn_dim,
- dropout_in_attn=dropout_in_attn,
- dropout_on_attn=dropout_on_attn,
- dropout_on_fc1=dropout_on_fc1,
- dropout_on_fc2=dropout_on_fc2,
- segment_size=segment_size,
- std_scale=std_scale,
- activation_fn=activation_fn,
- tanh_on_mem=tanh_on_mem,
- scaled_init=scaled_init,
- use_mem=self.use_mem,
- mini_batches=mini_batches,
- negative_inf=negative_inf,
- layer_index=layer_index,
- summarization_method=summarization_method,
- max_relative_position=max_relative_position,
- rpe_old_option=rpe_old_option,
- )
- )
-
- def set_mini_batches(self, mini_batches):
- # handy function only used for unit test
- self.mini_batches = mini_batches
- for layer in self.layers:
- layer.set_mini_batches(mini_batches)
-
- def _get_relative_position(
- self,
- input: Tensor,
- max_relative_position: int,
- left_context_length: int,
- past_length: int,
- is_decoding: bool,
- ):
- # For training, we copy the right context to the start of the utterance
- # First dimension in distance is corresponding to query.
- # [right context, utterance, summary vector]
- # Second dimension in distance is corresponding to key.
- # [Memory bank, right context, utterance]
- # For the summary vector in the query part, the distance to
- # all other positions is 2*max_position. For the memory bank in the key,
- # the distance to all other positions is 0.
-
- T, B, D = input.shape
- num_segs = math.ceil((T - self.right_context) / self.segment_size)
-
- # utterance
- u_st = past_length * self.segment_size
- u_ed = u_st + T
- utterance_ranges = torch.arange(u_st, u_ed - self.right_context)
-
- # left context. Only in minibatch or decoding
- left_context_ranges = torch.arange(u_st - left_context_length, u_st)
-
- # Right context block
- # right context + utterance
- right_context_blocks = []
- for i in range(0, num_segs - 1):
- st = (i + 1) * self.segment_size + u_st
- ed = st + self.right_context
- assert ed < u_ed
- temp = torch.arange(st, ed)
- right_context_blocks.append(temp)
- right_context_blocks.append(torch.arange(u_ed - self.right_context, u_ed))
- right_context_ranges = torch.cat(right_context_blocks)
-
- if self.use_mem:
- # Memory bank
- # The position for memory -n, .., -1
- if is_decoding:
- memory_size = min(past_length, self.max_memory_size)
- else:
- memory_size = num_segs + past_length - 1
- memory_bank_ranges = torch.arange(
- -max_relative_position - 1, -max_relative_position - 1 - memory_size, -1
- )
-
- # summary vector
- # The position for summary vector as the T+max_relative_position+1.
- # After the clamping, the relative position is max_relative_position
- summary_pos_st = u_ed + max_relative_position + 1
- summary_vector_ranges = torch.arange(
- summary_pos_st, summary_pos_st + num_segs
- )
-
- key_ranges = torch.cat(
- [
- memory_bank_ranges,
- right_context_ranges,
- left_context_ranges,
- utterance_ranges,
- ]
- )
-
- query_ranges = torch.cat(
- [right_context_ranges, utterance_ranges, summary_vector_ranges]
- )
- else:
- key_ranges = torch.cat(
- [right_context_ranges, left_context_ranges, utterance_ranges]
- )
-
- query_ranges = torch.cat([right_context_ranges, utterance_ranges])
-
- distance = key_ranges[None, :] - query_ranges[:, None]
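- # clamp to [-max_relative_position, max_relative_position] and shift into
- # [0, 2 * max_relative_position] so it can index an embedding table;
- # e.g. with max_relative_position=8, a distance of -13 maps to 0 and +13 to 16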
- distance_clamp = (
- torch.clamp(distance, -max_relative_position, max_relative_position)
- + max_relative_position
- )
- distance_clamp = distance_clamp.to(input.device).long().detach()
- return distance_clamp
-
- def _get_attention_mask(self, input, past_length=0, left_context_cache=0):
- # attention mask for each query contains three parts:
- # 1. memory part
- # 2. left_context + segment
- # 3. right_context_block
- # so for each segment and its corresponding right context block,
- # the attention matrix is formed by 9 parts:
- # [0, m, 0, 0, right_context, 0, 0, seg, 0]
- # [before memory, memory, after memory, before right context, right_context,
- # after right context, before seg, seg, after seg]
- #
- # Query is formed in the way as [right_context_blocks, utterance, summary]
- #
- # Note: putting m and right_context before the segment is convenient
- # for padding_mask operation.
- # Key lengths = m_length + right_context_block_length + lengths
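- # For example (illustrative only), with 3 segments and no carry-over the key
- # axis is laid out as [m_0, m_1, rc_0, rc_1, rc_2, seg_0, seg_1, seg_2].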
- utterance_length, batch_size, _ = input.shape
- summary_length = math.ceil(utterance_length / self.segment_size)
- num_segs = summary_length
- rc_length = self.right_context * num_segs
- rc = self.right_context
- lc = self.left_context
-
- # when using mini-batches, there is a left context cache available for the
- # current sequence.
- lcc = left_context_cache
-
- # if max_memory_size is 0, we have neither memory nor summary
- # past_length is the memory carried over from the previous sequence
- if self.use_mem:
- mem_length = num_segs - 1 + past_length
- else:
- mem_length = 0
- rc_mask = []
- query_mask = []
- summary_mask = []
- for j in range(0, num_segs):
- ssize = min(self.segment_size, utterance_length - j * self.segment_size)
-
- rc_size = rc
- rc_mat = []
- q_mat = []
- s_mat = []
- m_start = max(j + past_length - self.max_memory_size, 0)
-
- # max_memory_size is 0, then we don't use memory
- if self.use_mem:
- # part 0: before memory
- rc_mat.append(input.new_zeros(rc_size, m_start))
- q_mat.append(input.new_zeros(ssize, m_start))
- s_mat.append(input.new_zeros(1, m_start))
-
- # part 1: memory
- col_1 = j + past_length - m_start
- rc_mat.append(torch.ones(rc_size, col_1, device=input.device))
- q_mat.append(torch.ones(ssize, col_1, device=input.device))
- # based on D22875746, disabling summary query attention
- # on memory is better for long-form utterances
- s_mat.append(input.new_zeros(1, col_1))
-
- # part 2: after memory
- col_2 = mem_length - (j + past_length)
- rc_mat.append(input.new_zeros(rc_size, col_2))
- q_mat.append(input.new_zeros(ssize, col_2))
- s_mat.append(input.new_zeros(1, col_2))
-
- # part 3: before right context
- rc_start = j * rc
- rc_mat.append(input.new_zeros(rc_size, rc_start))
- q_mat.append(input.new_zeros(ssize, rc_start))
- s_mat.append(input.new_zeros(1, rc_start))
-
- # part 4: right context
- rc_end = rc_start + rc
- col_4 = rc
- rc_mat.append(torch.ones(rc_size, col_4, device=input.device))
- q_mat.append(torch.ones(ssize, col_4, device=input.device))
- s_mat.append(torch.ones(1, col_4, device=input.device))
-
- # part 5: after right context
- col_5 = rc_length - rc_end
- rc_mat.append(input.new_zeros(rc_size, col_5))
- q_mat.append(input.new_zeros(ssize, col_5))
- s_mat.append(input.new_zeros(1, col_5))
-
- # part 6: before query segment
- seg_start = max(j * self.segment_size + lcc - lc, 0)
- rc_mat.append(input.new_zeros(rc_size, seg_start))
- q_mat.append(input.new_zeros(ssize, seg_start))
- s_mat.append(input.new_zeros(1, seg_start))
-
- # part 7: query segment
- # note: right context is put in right context block
- # here we only need to consider about left context
- seg_end = min((j + 1) * self.segment_size + lcc, utterance_length + lcc)
- col_7 = seg_end - seg_start
- rc_mat.append(torch.ones(rc_size, col_7, device=input.device))
- q_mat.append(torch.ones(ssize, col_7, device=input.device))
- s_mat.append(torch.ones(1, col_7, device=input.device))
-
- # part 8: after query segment
- col_8 = utterance_length + lcc - seg_end
- rc_mat.append(input.new_zeros(rc_size, col_8))
- q_mat.append(input.new_zeros(ssize, col_8))
- s_mat.append(input.new_zeros(1, col_8))
-
- rc_mask.append(torch.cat(rc_mat, dim=1))
- query_mask.append(torch.cat(q_mat, dim=1))
- summary_mask.append(torch.cat(s_mat, dim=1))
-
- # no memory, then we don't need summary either
- if self.use_mem:
- attention_mask = (
- 1
- - torch.cat(
- [
- torch.cat(rc_mask, dim=0),
- torch.cat(query_mask, dim=0),
- torch.cat(summary_mask, dim=0),
- ],
- dim=0,
- )
- ).to(torch.bool)
- else:
- attention_mask = (
- 1
- - torch.cat(
- [torch.cat(rc_mask, dim=0), torch.cat(query_mask, dim=0)], dim=0
- )
- ).to(torch.bool)
-
- return attention_mask
-
- @torch.jit.export
- def init_state(
- self, batch_size: int, device: Optional[Device] = None
- ) -> List[Tensor]:
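- # State layout: [memory, left_context_key, left_context_val, past_length];
- # the first three are indexed by layer along dim 0, past_length is [1, B]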
- empty_memory = torch.zeros(
- self.num_layers,
- self.max_memory_size,
- batch_size,
- self.memory_dim,
- device=device,
- )
- left_context_key = torch.zeros(
- self.num_layers,
- self.left_context,
- batch_size,
- self.memory_dim,
- device=device,
- )
- left_context_val = torch.zeros(
- self.num_layers,
- self.left_context,
- batch_size,
- self.memory_dim,
- device=device,
- )
- past_length = torch.zeros(1, batch_size, dtype=torch.int32, device=device)
-
- return [empty_memory, left_context_key, left_context_val, past_length]
-
- @torch.jit.export
- def batch_state(self, states: List[List[Tensor]]) -> List[Tensor]:
- if len(states) == 0:
- return []
- batched_m = []
- batched_lc_key = []
- batched_lc_val = []
- batched_past_length = []
- for state in states:
- if len(state) == 0:
- continue
- m, lc_key, lc_val, past_length = state
- batched_m.append(m)
- batched_lc_key.append(lc_key)
- batched_lc_val.append(lc_val)
- batched_past_length.append(past_length)
-
- if (
- (len(batched_m) == 0)
- or (len(batched_lc_key) == 0)
- or (len(batched_lc_val) == 0)
- or (len(batched_past_length) == 0)
- ):
- return [
- torch.tensor([]),
- torch.tensor([]),
- torch.tensor([]),
- torch.tensor([]),
- ]
-
- batched_m = torch.cat(batched_m, dim=2)
- batched_lc_key = torch.cat(batched_lc_key, dim=2)
- batched_lc_val = torch.cat(batched_lc_val, dim=2)
- batched_past_length = torch.cat(batched_past_length, dim=1)
- return [batched_m, batched_lc_key, batched_lc_val, batched_past_length]
-
- @torch.jit.export
- def reorder_state(self, state: List[Tensor], indices: Tensor) -> List[Tensor]:
- if len(state) == 0:
- return []
- m, lc_key, lc_val, past_length = state
- indices = indices.to(device=m.device)
- reord_m = torch.index_select(m, 2, indices)
- reord_lc_key = torch.index_select(lc_key, 2, indices)
- reord_lc_val = torch.index_select(lc_val, 2, indices)
- reord_past_length = torch.index_select(past_length, 1, indices)
- return [reord_m, reord_lc_key, reord_lc_val, reord_past_length]
-
- @torch.jit.export
- def reset_state(self, state: List[Tensor], indices: Tensor) -> List[Tensor]:
- m, lc_key, lc_val, past_length = state
- m = m.index_fill(dim=2, index=indices, value=0.0)
- lc_key = lc_key.index_fill(dim=2, index=indices, value=0.0)
- lc_val = lc_val.index_fill(dim=2, index=indices, value=0.0)
- past_length = past_length.index_fill(dim=1, index=indices, value=0)
-
- return [m, lc_key, lc_val, past_length]
-
- @torch.jit.export
- def state_size(self) -> int:
- return 4
-
- @torch.jit.export
- def batch_size_in_state(
- self, state: Optional[List[Tensor]], sloppy: bool = True
- ) -> Optional[int]:
- if state is None:
- return None
- return state[0].size(2)
-
- def gen_summary_queries(self, input):
- sum_input = self.memory_op(input)
- return sum_input
-
- def _gen_right_context_padded_input(self, input):
- # This function deals with input that is already
- # padded with right context (e.g. minibatch training)
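- # Example (illustrative numbers): with T=260, segment_size=128 and
- # right_context=4, the returned blocks are input[128:132] and input[256:260],
- # i.e. the right-context frames that follow each segment.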
- right_context_blocks = []
- T, B, D = input.shape
- num_segs = math.ceil((T - self.right_context) / self.segment_size)
- for i in range(0, num_segs - 1):
- st = (i + 1) * self.segment_size
- ed = st + self.right_context
- assert ed < T
- temp = input[st:ed, :, :]
- right_context_blocks.append(temp)
-
- # last segment right context is already available
- right_context_blocks.append(input[T - self.right_context :, :, :])
- return torch.cat(right_context_blocks, dim=0)
-
- def _gen_segs_right_context(self, input, lengths):
- segments = []
- T, B, D = input.size()
- nT = T - self.right_context
-
- # assume input is right context padded
- num_segs = math.ceil(nT / self.segment_size)
- # pad zeros to the utterance to make sure each
- # segment has the same right context. For the last segment,
- # the right context comes from the padding at the end of the input.
- for i in range(0, num_segs - 1):
- st = i * self.segment_size
- ed = min(T, st + self.segment_size + self.right_context)
- temp = input[st:ed, :, :]
- rest_lengths = torch.clamp(
- lengths - self.segment_size, min=0, max=nT - (i + 1) * self.segment_size
- )
- segments.append((temp, lengths - rest_lengths + self.right_context))
- lengths = rest_lengths
-
- last_seg = input[st + self.segment_size :, :, :]
- segments.append((last_seg, rest_lengths + self.right_context))
-
- return segments
-
- @torch.jit.unused
- def forward(
- self, input: Tensor, padding_masks: Tensor, state: Optional[List[Tensor]] = None
- ) -> Tuple[Tensor, Tensor, List[Tensor], List[Tensor]]:
- # Xutai: originally the second argument is lengths.
- lengths = (~padding_masks).sum(dim=1).long()
- # mini batch training.
- if self.mini_batches:
- return self.forward_mini_batches(input, lengths, state)
-
- # regular full sequence training. Note: we assume the right context is provided
- # in the input.
- T, B, D = input.size()
- right_context_blocks = self._gen_right_context_padded_input(input)
-
- # generate the relative positional embedding
- if self.use_rpe:
- rpe = self._get_relative_position(
- input=input,
- max_relative_position=self.max_relative_position,
- left_context_length=0,
- past_length=0,
- is_decoding=False,
- )
- else:
- rpe = None
- input = input[: T - self.right_context, :, :]
-
- attention_mask = self._get_attention_mask(input)
-
- # the first layer uses each segment mean as memory;
- # the last segment average is ignored
- if self.use_mem:
- mems = self.gen_summary_queries(input)[:-1, :, :]
- else:
- mems = torch.zeros(0, input.size(1), input.size(2), device=input.device)
- mems = mems.type_as(input)
-
- output = input
- all_outputs = []
-
- for layer in self.layers:
- output, mems, right_context_blocks, _, _ = layer(
- input=output,
- lengths=lengths,
- attention_mask=attention_mask,
- mems=mems,
- right_context_blocks=right_context_blocks,
- pre_mems=None,
- left_context_key=None,
- left_context_val=None,
- rpe=rpe,
- )
- all_outputs.append(output)
- return output, padding_masks, [], all_outputs
-
- def forward_jit_mini_batch_init(
- self,
- seg: Tensor,
- state: Optional[List[Tensor]] = None,
- is_decoding: bool = False,
- ):
- # Prepare state. In whole sequence training, state is ignored.
- # For minibatch training, we need to prepare state
- if state is None:
- state = self.init_state(batch_size=seg.size(1), device=seg.device)
- if seg.dtype == torch.half:
- state = [state[0].half(), state[1].half(), state[2].half(), state[3]]
-
- if self.use_mem:
- # note: the input is averaged only over seg, not over the right context
- # the first layer uses each segment mean as memory. the last
- # segment average is kept in the state
- full_mems = self.gen_summary_queries(seg)
- if is_decoding:
- mems = full_mems[0:1, :, :]
- state_mems = torch.cat([state[0][0], mems], dim=0)
- else:
- mems = full_mems[:-1, :, :]
- state_mems = torch.cat([state[0][0], full_mems], dim=0)
- else:
- mems = state[0][0]
- state_mems = mems
-
- # track the number of processed segments (i.e. the number of memories);
- # every sequence in the same batch has the same past length
- past_length = state[3][0][0].item()
- past_left_context = min(past_length * self.segment_size, self.left_context)
- past_length = min(self.max_memory_size, past_length)
-
- return state, mems, state_mems, past_length, past_left_context
-
- def state_update_before(
- self, layer: int, state: List[Tensor], past_length: int, past_left_context: int
- ):
- pre_mems = state[0][layer][self.max_memory_size - past_length :, :, :]
- lc_key = state[1][layer][self.left_context - past_left_context :, :, :]
- lc_val = state[2][layer][self.left_context - past_left_context :, :, :]
- return pre_mems, lc_key, lc_val
-
- def state_update_after(
- self,
- layer: int,
- state: List[Tensor],
- mems: Tensor,
- next_key: Tensor,
- next_val: Tensor,
- mems_list: List[Tensor],
- lc_key_list: List[Tensor],
- lc_val_list: List[Tensor],
- ):
- # mems is used for next layer
- if layer < self.num_layers - 1:
- state_mems = torch.cat([state[0][layer + 1], mems], dim=0)
- mems_list.append(state_mems[-self.max_memory_size :, :, :])
-
- # when mems are passed to the next sequence, we need the last memory. when mems
- # are used for the next layer, we can ignore the last memory
- mems = mems[:-1, :, :]
-
- # note state[1][i] and state[2][i] original length equals to self.left_context
- new_k = torch.cat([state[1][layer], next_key], dim=0)
- new_v = torch.cat([state[2][layer], next_val], dim=0)
- lc_key_list.append(new_k[-self.left_context :, :, :])
- lc_val_list.append(new_v[-self.left_context :, :, :])
- return mems_list, lc_key_list, lc_val_list, mems
-
- def state_update_after_loop(
- self,
- state: List[Tensor],
- mems_list: List[Tensor],
- lc_key_list: List[Tensor],
- lc_val_list: List[Tensor],
- update_length: int,
- ):
- state[0] = torch.stack(mems_list, dim=0)
- state[1] = torch.stack(lc_key_list, dim=0)
- state[2] = torch.stack(lc_val_list, dim=0)
- state[3] = state[3] + update_length
- return state
-
- @torch.jit.unused
- def forward_mini_batches(
- self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
- ) -> Tuple[Tensor, Tensor, List[Tensor], List[Tensor]]:
- T, B, D = input.size()
-
- # input without right context
- seg = input[: T - self.right_context, :, :]
-
- # get right context blocks
- right_context_blocks = self._gen_right_context_padded_input(input)
-
- mems_list = []
- lc_key_list = []
- lc_val_list = []
- results = self.forward_jit_mini_batch_init(seg, state, False)
- state, mems, state_mems, past_length, past_left_context = results
-
- # relative position embedding
- if self.use_rpe:
- rpe = self._get_relative_position(
- input=input,
- max_relative_position=self.max_relative_position,
- left_context_length=past_left_context,
- past_length=past_length,
- is_decoding=False,
- )
- else:
- rpe = None
-
- # get attention mask based on seg (not include right context) and available
- # left context
- attention_mask = self._get_attention_mask(seg, past_length, past_left_context)
- mems_list.append(state_mems[-self.max_memory_size :, :, :])
- output = seg
- i = 0
- all_outputs = []
- for layer in self.layers:
- # In order to make cross stream batching work, mem, left context key
- # and left context value in the state should always be the same shape.
- # We use the past length to track the processed segment number. In this
- # way, we take out the essential memory, left context key and left
- # context val from the state. After finishing the forward pass for the current
- # segment, we add the new memory, left context key and left context value into
- # the state and trim out the oldest part to keep the shape consistent.
- pre_mems, lc_key, lc_val = self.state_update_before(
- i, state, past_length, past_left_context
- )
-
- output, mems, right_context_blocks, next_key, next_val = layer.forward(
- input=output,
- lengths=lengths,
- attention_mask=attention_mask,
- mems=mems,
- right_context_blocks=right_context_blocks,
- pre_mems=pre_mems,
- left_context_key=lc_key,
- left_context_val=lc_val,
- rpe=rpe,
- )
- all_outputs.append(output)
- mems_list, lc_key_list, lc_val_list, mems = self.state_update_after(
- layer=i,
- state=state,
- mems=mems,
- next_key=next_key,
- next_val=next_val,
- mems_list=mems_list,
- lc_key_list=lc_key_list,
- lc_val_list=lc_val_list,
- )
-
- i += 1
-
- # update state
- update_length = math.ceil((T - self.right_context) / self.segment_size)
- state = self.state_update_after_loop(
- state=state,
- mems_list=mems_list,
- lc_key_list=lc_key_list,
- lc_val_list=lc_val_list,
- update_length=update_length,
- )
-
- return output, lengths, state, all_outputs
-
- def forward_jit_test(
- self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
- ) -> Tuple[Tensor, Tensor, List[Tensor]]:
- """
- This simulates the sequence encoder forward_jit. It is for unit test purposes only.
- It is not used in training or decoding. Note, extra_right_context is set in
- the model. In unit test, input = [utterance, right_context], lengths =
- [utterance_length].
- args:
- input: input utterance
- lengths: utterance input length
- state: None here. input is whole utterance
- """
- # [TODO] sequence_to_segment has bug in lengths.
- seg_src_tokens_lengths = self._gen_segs_right_context(input, lengths)
-
- seg_enc_tokens_lengths: List[Tuple[Tensor, Tensor]] = []
- state: Optional[List[Tensor]] = None
- for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths:
- seg_enc_tokens, seg_enc_lengths, state = self.forward_jit(
- input=seg_src_tokens, lengths=seg_src_lengths, state=state
- )
- seg_enc_tokens_lengths.append((seg_enc_tokens, seg_enc_lengths))
-
- enc_tokens, enc_lengths = segments_to_sequence(
- segments=seg_enc_tokens_lengths, time_axis=0
- )
-
- state = [] # returns trivial state
-
- return enc_tokens, enc_lengths, state
-
- @torch.jit.export
- def forward_jit(
- self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
- ) -> Tuple[Tensor, Tensor, List[Tensor]]:
- """
- Forward helper for online decoding.
-
- args:
- input: [seg, right_context]. We assume that in online decoding we
- always pad the right context to the preset right context size.
- The last segment may be shorter, but its right context size
- is the same as for the other segments.
- lengths: input lengths, i.e. the utterance segment length plus the
- right context size
- state: [memory, left_context_key, left_context_val]. To improve throughput,
- in addition to memory, we also cache key and value for left_context in
- multihead self-attention
- """
- # In online decoding, input = [segment, right_context]
- # Lengths = [segment_length, right_context_length]
- # so we need to strip the right context from the output
- T, B, D = input.size()
- rc_str = T - self.right_context
- rc_end = T
- right_context_blocks = input[rc_str:rc_end, :, :]
- seg = input[:rc_str, :, :]
- lengths = torch.clamp(lengths - self.right_context, min=0)
- mems_list = []
- lc_key_list = []
- lc_val_list = []
-
- results = self.forward_jit_mini_batch_init(seg, state, True)
- state, mems, state_mems, past_length, past_left_context = results
-
- # relative position embedding
- if self.use_rpe:
- rpe = self._get_relative_position(
- input=input,
- max_relative_position=self.max_relative_position,
- left_context_length=past_left_context,
- past_length=past_length,
- is_decoding=True,
- )
- else:
- rpe = None
-
- # memory for first layer.
- mems_list.append(state_mems[-self.max_memory_size :, :, :])
- output = seg
- i = 0
- for layer in self.layers:
- # In order to make cross stream batching work, mem, left context key
- # and left context value in the state should always be the same shape.
- # We use the past length to track the processed segment number. In this
- # way, we take out the essential memory, left context key and left
- # context val from the state. After finishing the forward pass for the current
- # segment, we add the new memory, left context key and left context value into
- # the state and trim out the oldest part to keep the shape consistent.
- true_mems, lc_key, lc_val = self.state_update_before(
- layer=i,
- state=state,
- past_length=past_length,
- past_left_context=past_left_context,
- )
-
- output, mems, right_context_blocks, next_key, next_val = layer.forward_jit(
- input=output,
- lengths=lengths,
- mems=true_mems,
- right_context_blocks=right_context_blocks,
- left_context_key=lc_key,
- left_context_val=lc_val,
- rpe=rpe,
- )
- # mems is used for next layer
- mems_list, lc_key_list, lc_val_list, _ = self.state_update_after(
- layer=i,
- state=state,
- mems_list=mems_list,
- mems=mems,
- next_key=next_key,
- next_val=next_val,
- lc_key_list=lc_key_list,
- lc_val_list=lc_val_list,
- )
- i += 1
-
- # update state
- state = self.state_update_after_loop(
- state=state,
- mems_list=mems_list,
- lc_key_list=lc_key_list,
- lc_val_list=lc_val_list,
- update_length=1,
- )
-
- return output, lengths, state
-
- def quantize_(self, params=None):
- if params and "per_channel" in params and params["per_channel"]:
- qconfig = per_channel_dynamic_qconfig
- else:
- qconfig = default_dynamic_qconfig
- torch.quantization.quantize_dynamic(
- self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True
- )
- return self
-
-
-# ------------------------------------------------------------------------------
-# Emformer encoder for seq2seq model
-# This is a wrapper over the original emformer
-# ------------------------------------------------------------------------------
-def emformer_encoder(klass):
- class SpeechEncoder(klass):
- def __init__(self, args):
- super().__init__(args)
- stride = SpeechEncoder.conv_layer_stride(args)
- trf_left_context = args.segment_left_context // stride
- trf_right_context = args.segment_right_context // stride
- context_config = [trf_left_context, trf_right_context]
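- # the convolutional front end downsamples the input by `stride`, so context
- # sizes given in input frames are converted to transformer frames here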
- self.transformer_layers = nn.ModuleList(
- [
- NoSegAugmentedMemoryTransformerEncoderLayer(
- input_dim=args.encoder_embed_dim,
- num_heads=args.encoder_attention_heads,
- ffn_dim=args.encoder_ffn_embed_dim,
- num_layers=args.encoder_layers,
- dropout_in_attn=args.dropout,
- dropout_on_attn=args.dropout,
- dropout_on_fc1=args.dropout,
- dropout_on_fc2=args.dropout,
- activation_fn=args.activation_fn,
- context_config=context_config,
- segment_size=args.segment_length,
- max_memory_size=args.max_memory_size,
- scaled_init=True, # TODO: use constant for now.
- tanh_on_mem=args.amtrf_tanh_on_mem,
- )
- ]
- )
-
- def forward(self, src_tokens, src_lengths):
- encoder_out = super().forward(src_tokens, src_lengths)
- output = encoder_out["encoder_out"][0]
- encoder_padding_masks = encoder_out["encoder_padding_mask"][0]
-
- # This is because in the original implementation
- # the output didn't consider the last segment as right context.
- encoder_padding_masks = encoder_padding_masks[:, : output.size(0)]
-
- return {
- "encoder_out": [output],
- "encoder_padding_mask": [encoder_padding_masks],
- "encoder_embedding": [],
- "encoder_states": [],
- "src_tokens": [],
- "src_lengths": [],
- }
-
- @staticmethod
- def conv_layer_stride(args):
- # TODO: make it configurable from the args
- return 4
-
- SpeechEncoder.__name__ = klass.__name__
- return SpeechEncoder
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_convtbc.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_convtbc.py
deleted file mode 100644
index 3a3c9b91e70f597ab77b9b01459cc429db5d7956..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_convtbc.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import torch
-import torch.nn as nn
-from fairseq.modules import ConvTBC
-
-
-class TestConvTBC(unittest.TestCase):
- def test_convtbc(self):
- # ksz, in_channels, out_channels
- conv_tbc = ConvTBC(4, 5, kernel_size=3, padding=1)
- # out_channels, in_channels, ksz
- conv1d = nn.Conv1d(4, 5, kernel_size=3, padding=1)
-
- conv_tbc.weight.data.copy_(conv1d.weight.data.transpose(0, 2))
- conv_tbc.bias.data.copy_(conv1d.bias.data)
-
- input_tbc = torch.randn(7, 2, 4, requires_grad=True)
- input1d = input_tbc.data.transpose(0, 1).transpose(1, 2)
- input1d.requires_grad = True
-
- output_tbc = conv_tbc(input_tbc)
- output1d = conv1d(input1d)
-
- self.assertAlmostEqual(
- output_tbc.data.transpose(0, 1).transpose(1, 2), output1d.data
- )
-
- grad_tbc = torch.randn(output_tbc.size())
- grad1d = grad_tbc.transpose(0, 1).transpose(1, 2).contiguous()
-
- output_tbc.backward(grad_tbc)
- output1d.backward(grad1d)
-
- self.assertAlmostEqual(
- conv_tbc.weight.grad.data.transpose(0, 2), conv1d.weight.grad.data
- )
- self.assertAlmostEqual(conv_tbc.bias.grad.data, conv1d.bias.grad.data)
- self.assertAlmostEqual(
- input_tbc.grad.data.transpose(0, 1).transpose(1, 2), input1d.grad.data
- )
-
- def assertAlmostEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertLess((t1 - t2).abs().max(), 1e-4)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/meldataset.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/meldataset.py
deleted file mode 100644
index 134f98fbeff6a854e704baf4b9692920631bd946..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/meldataset.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import math
-import os
-import random
-import torch
-import torch.utils.data
-import numpy as np
-from librosa.util import normalize
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def load_wav(full_path):
- sampling_rate, data = read(full_path)
- return data, sampling_rate
-
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
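-# caches for the mel filterbank and Hann window so they are not rebuilt on every call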
-mel_basis = {}
-hann_window = {}
-
-
-def mel_spectrogram(
- y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
-):
- if torch.min(y) < -1.0:
- print("min value is ", torch.min(y))
- if torch.max(y) > 1.0:
- print("max value is ", torch.max(y))
-
- global mel_basis, hann_window
- if str(fmax) + "_" + str(y.device) not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[str(fmax) + "_" + str(y.device)] = (
- torch.from_numpy(mel).float().to(y.device)
- )
- hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device)
-
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[str(y.device)],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- )
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
-
- spec = torch.matmul(mel_basis[str(fmax) + "_" + str(y.device)], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
-
-
-def get_dataset_filelist(a):
- with open(a.input_training_file, "r", encoding="utf-8") as fi:
- training_files = [
- os.path.join(a.input_wavs_dir, os.path.basename(x.split("|")[0]))
- for x in fi.read().split("\n")
- if len(x) > 0
- ]
-
- with open(a.input_validation_file, "r", encoding="utf-8") as fi:
- validation_files = [
- os.path.join(a.input_wavs_dir, os.path.basename(x.split("|")[0]))
- for x in fi.read().split("\n")
- if len(x) > 0
- ]
- return training_files, validation_files
-
-
-class MelDataset(torch.utils.data.Dataset):
- def __init__(
- self,
- training_files,
- segment_size,
- n_fft,
- num_mels,
- hop_size,
- win_size,
- sampling_rate,
- fmin,
- fmax,
- split=True,
- shuffle=True,
- n_cache_reuse=1,
- device=None,
- fmax_loss=None,
- fine_tuning=False,
- base_mels_path=None,
- ):
- self.audio_files = training_files
- random.seed(1234)
- if shuffle:
- random.shuffle(self.audio_files)
- self.segment_size = segment_size
- self.sampling_rate = sampling_rate
- self.split = split
- self.n_fft = n_fft
- self.num_mels = num_mels
- self.hop_size = hop_size
- self.win_size = win_size
- self.fmin = fmin
- self.fmax = fmax
- self.fmax_loss = fmax_loss
- self.cached_wav = None
- self.n_cache_reuse = n_cache_reuse
- self._cache_ref_count = 0
- self.device = device
- self.fine_tuning = fine_tuning
- self.base_mels_path = base_mels_path
-
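- # Each item returns (mel, audio, filename, mel_for_loss); the decoded wav is
- # cached and reused n_cache_reuse times to reduce disk reads.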
- def __getitem__(self, index):
- filename = self.audio_files[index]
- if self._cache_ref_count == 0:
- audio, sampling_rate = load_wav(filename)
- audio = audio / MAX_WAV_VALUE
- if not self.fine_tuning:
- audio = normalize(audio) * 0.95
- self.cached_wav = audio
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- self._cache_ref_count = self.n_cache_reuse
- else:
- audio = self.cached_wav
- self._cache_ref_count -= 1
-
- audio = torch.FloatTensor(audio)
- audio = audio.unsqueeze(0)
-
- if not self.fine_tuning:
- if self.split:
- if audio.size(1) >= self.segment_size:
- max_audio_start = audio.size(1) - self.segment_size
- audio_start = random.randint(0, max_audio_start)
- audio = audio[:, audio_start : audio_start + self.segment_size]
- else:
- audio = torch.nn.functional.pad(
- audio, (0, self.segment_size - audio.size(1)), "constant"
- )
-
- mel = mel_spectrogram(
- audio,
- self.n_fft,
- self.num_mels,
- self.sampling_rate,
- self.hop_size,
- self.win_size,
- self.fmin,
- self.fmax,
- center=False,
- )
- else:
- mel = np.load(
- os.path.join(
- self.base_mels_path,
- os.path.splitext(os.path.split(filename)[-1])[0] + ".npy",
- )
- )
- mel = torch.from_numpy(mel)
-
- if len(mel.shape) < 3:
- mel = mel.unsqueeze(0)
-
- if self.split:
- frames_per_seg = math.ceil(self.segment_size / self.hop_size)
-
- if audio.size(1) >= self.segment_size:
- mel_start = random.randint(0, mel.size(2) - frames_per_seg - 1)
- mel = mel[:, :, mel_start : mel_start + frames_per_seg]
- audio = audio[
- :,
- mel_start
- * self.hop_size : (mel_start + frames_per_seg)
- * self.hop_size,
- ]
- else:
- mel = torch.nn.functional.pad(
- mel, (0, frames_per_seg - mel.size(2)), "constant"
- )
- audio = torch.nn.functional.pad(
- audio, (0, self.segment_size - audio.size(1)), "constant"
- )
-
- mel_loss = mel_spectrogram(
- audio,
- self.n_fft,
- self.num_mels,
- self.sampling_rate,
- self.hop_size,
- self.win_size,
- self.fmin,
- self.fmax_loss,
- center=False,
- )
-
- return (mel.squeeze(), audio.squeeze(0), filename, mel_loss.squeeze())
-
- def __len__(self):
- return len(self.audio_files)
diff --git a/spaces/Hexamind/swarms/setup.sh b/spaces/Hexamind/swarms/setup.sh
deleted file mode 100644
index 702f92fc1ce4479094969d80cf653ec3a9ecf1bd..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/swarms/setup.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-mkdir -p ~/.streamlit/
-echo "\
-[server]\n\
-headless = true\n\
-port = $PORT\n\
-enableCORS = false\n\
-\n\
-" > ~/.streamlit/config.toml
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py b/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py
deleted file mode 100644
index ef618adef7c7d010f8de38fb5ebeb5a35d2d3cac..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py
+++ /dev/null
@@ -1,290 +0,0 @@
-import os, sys
-import glob, itertools
-import pandas as pd
-
-WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None)
-
-if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip():
- print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exiting...')
- sys.exit(-1)
-
-
-def load_langs(path):
- with open(path) as fr:
- langs = [l.strip() for l in fr]
- return langs
-
-
-
-def load_sentences(raw_data, split, direction):
- src, tgt = direction.split('-')
- src_path = f"{raw_data}/{split}.{direction}.{src}"
- tgt_path = f"{raw_data}/{split}.{direction}.{tgt}"
- if os.path.exists(src_path) and os.path.exists(tgt_path):
- return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())]
- else:
- return []
-
-def swap_direction(d):
- src, tgt = d.split('-')
- return f'{tgt}-{src}'
-
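-# Map every test/valid sentence to the set of languages it appears in,
-# looking at both directions of every language pair.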
-def get_all_test_data(raw_data, directions, split='test'):
- test_data = [
- x
- for dd in directions
- for d in [dd, swap_direction(dd)]
- for x in load_sentences(raw_data, split, d)
- ]
- # all_test_data = {s for _, d in test_data for s in d}
- all_test_data = {}
- for lang, d in test_data:
- for s in d:
- s = s.strip()
- lgs = all_test_data.get(s, set())
- lgs.add(lang)
- all_test_data[s] = lgs
- return all_test_data, test_data
-
-def check_train_sentences(raw_data, direction, all_test_data, mess_up_train={}):
- src, tgt = direction.split('-')
- tgt_path = f"{raw_data}/train.{direction}.{tgt}"
- src_path = f"{raw_data}/train.{direction}.{src}"
- print(f'check training data in {raw_data}/train.{direction}')
- size = 0
- if not os.path.exists(tgt_path) or not os.path.exists(src_path):
- return mess_up_train, size
- with open(src_path) as f, open(tgt_path) as g:
- for src_line, tgt_line in zip(f, g):
- s = src_line.strip()
- t = tgt_line.strip()
- size += 1
- if s in all_test_data:
- langs = mess_up_train.get(s, set())
- langs.add(direction)
- mess_up_train[s] = langs
- if t in all_test_data:
- langs = mess_up_train.get(t, set())
- langs.add(direction)
- mess_up_train[t] = langs
- return mess_up_train, size
-
-def check_train_all(raw_data, directions, all_test_data):
- mess_up_train = {}
- data_sizes = {}
- for direction in directions:
- _, size = check_train_sentences(raw_data, direction, all_test_data, mess_up_train)
- data_sizes[direction] = size
- return mess_up_train, data_sizes
-
-def count_train_in_other_set(mess_up_train):
- train_in_others = [(direction, s) for s, directions in mess_up_train.items() for direction in directions]
- counts = {}
- for direction, s in train_in_others:
- counts[direction] = counts.get(direction, 0) + 1
- return counts
-
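-# For each direction, report how many training sentences would remain after
-# removing those that also appear in the valid/test sets, plus the removal percentage.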
-def train_size_if_remove_in_otherset(data_sizes, mess_up_train):
- counts_in_other = count_train_in_other_set(mess_up_train)
- remain_sizes = []
- for direction, count in counts_in_other.items():
- remain_sizes.append((direction, data_sizes[direction] - count, data_sizes[direction], count, 100 * count / data_sizes[direction] ))
- return remain_sizes
-
-
-def remove_messed_up_sentences(raw_data, direction, mess_up_train, mess_up_train_pairs, corrected_langs):
- split = 'train'
- src_lang, tgt_lang = direction.split('-')
-
- tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}"
- src = f"{raw_data}/{split}.{direction}.{src_lang}"
- print(f'working on {direction}: ', src, tgt)
- if not os.path.exists(tgt) or not os.path.exists(src):
- # return zero counts so the caller can unpack (line_num, keep_num) safely
- return 0, 0
-
- corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}"
- corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}"
- line_num = 0
- keep_num = 0
- with open(src, encoding='utf8',) as fsrc, \
- open(tgt, encoding='utf8',) as ftgt, \
- open(corrected_src, 'w', encoding='utf8') as fsrc_corrected, \
- open(corrected_tgt, 'w', encoding='utf8') as ftgt_corrected:
- for s, t in zip(fsrc, ftgt):
- s = s.strip()
- t = t.strip()
- if t not in mess_up_train \
- and s not in mess_up_train \
- and (s, t) not in mess_up_train_pairs \
- and (t, s) not in mess_up_train_pairs:
- corrected_langs.add(direction)
- print(s, file=fsrc_corrected)
- print(t, file=ftgt_corrected)
- keep_num += 1
- line_num += 1
- if line_num % 1000 == 0:
- print(f'completed {line_num} lines', end='\r')
- return line_num, keep_num
-
-##########
-
-
-def merge_valid_test_messup(mess_up_train_valid, mess_up_train_test):
- merged_mess = []
- for s in set(list(mess_up_train_valid.keys()) + list(mess_up_train_test.keys())):
- if not s:
- continue
- valid = mess_up_train_valid.get(s, set())
- test = mess_up_train_test.get(s, set())
- merged_mess.append((s, valid | test))
- return dict(merged_mess)
-
-
-
-#########
-def check_train_pairs(raw_data, direction, all_test_data, mess_up_train={}):
- src, tgt = direction.split('-')
- #a hack; TODO: check the reversed directions
- path1 = f"{raw_data}/train.{src}-{tgt}.{src}"
- path2 = f"{raw_data}/train.{src}-{tgt}.{tgt}"
- if not os.path.exists(path1) or not os.path.exists(path2) :
- return
-
- with open(path1) as f1, open(path2) as f2:
- for src_line, tgt_line in zip(f1, f2):
- s = src_line.strip()
- t = tgt_line.strip()
- if (s, t) in all_test_data or (t, s) in all_test_data:
- langs = mess_up_train.get( (s, t), set())
- langs.add(src)
- langs.add(tgt)
- mess_up_train[(s, t)] = langs
-
-
-def load_pairs(raw_data, split, direction):
- src, tgt = direction.split('-')
- src_f = f"{raw_data}/{split}.{direction}.{src}"
- tgt_f = f"{raw_data}/{split}.{direction}.{tgt}"
- if tgt != 'en_XX':
- src_f, tgt_f = tgt_f, src_f
- if os.path.exists(src_f) and os.path.exists(tgt_f):
- return list(zip(open(src_f).read().splitlines(),
- open(tgt_f).read().splitlines(),
- ))
- else:
- return []
-
-# skip_langs = ['cs_CZ', 'en_XX', 'tl_XX', 'tr_TR']
-def get_messed_up_test_pairs(split, directions):
- test_pairs = [
- (d, load_pairs(raw_data, split, d))
- for d in directions
- ]
- # all_test_data = {s for _, d in test_data for s in d}
- all_test_pairs = {}
- for direction, d in test_pairs:
- src, tgt = direction.split('-')
- for s in d:
- langs = all_test_pairs.get(s, set())
- langs.add(src)
- langs.add(tgt)
- all_test_pairs[s] = langs
- mess_up_train_pairs = {}
- for direction in directions:
- check_train_pairs(raw_data, direction, all_test_pairs, mess_up_train_pairs)
- return all_test_pairs, mess_up_train_pairs
-
-
-
-if __name__ == "__main__":
- #######
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--from-folder',
- required=True,
- type=str)
- parser.add_argument(
- '--to-folder',
- required=True,
- type=str)
- parser.add_argument(
- '--directions',
- default=None,
- type=str)
-
-
- args = parser.parse_args()
- raw_data = args.from_folder
- to_folder = args.to_folder
- os.makedirs(to_folder, exist_ok=True)
-
- if args.directions:
- directions = args.directions.split(',')
- else:
- raw_files = itertools.chain(
- glob.glob(f'{raw_data}/train*'),
- glob.glob(f'{raw_data}/valid*'),
- glob.glob(f'{raw_data}/test*'),
- )
- directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files]
- print('working on directions: ', directions)
-
- ##########
-
-
-
- all_test_data, test_data = get_all_test_data(raw_data, directions, 'test')
- print('==loaded test data==')
- all_valid_data, valid_data = get_all_test_data(raw_data, directions, 'valid')
- print('==loaded valid data==')
- all_valid_test_data = merge_valid_test_messup(all_test_data, all_valid_data)
- mess_up_train, data_sizes = check_train_all(raw_data, directions, all_valid_test_data)
- print('training messing up with valid, test data:', len(mess_up_train))
- data_situation = train_size_if_remove_in_otherset(data_sizes, mess_up_train)
- df = pd.DataFrame(data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent'])
- df.sort_values('remove_percent', ascending=False)
- df.to_csv(f'{raw_data}/clean_summary.tsv', sep='\t')
- print(f'projected data clean summary in: {raw_data}/clean_summary.tsv')
-
- # correct the dataset:
- all_test_pairs, mess_up_test_train_pairs = get_messed_up_test_pairs('test', directions)
- all_valid_pairs, mess_up_valid_train_pairs = get_messed_up_test_pairs('valid', directions)
-
- all_messed_pairs = set(mess_up_test_train_pairs.keys()).union(set(mess_up_valid_train_pairs.keys()))
- corrected_directions = set()
-
- real_data_situation = []
- for direction in directions:
- org_size, new_size = remove_messed_up_sentences(raw_data, direction, mess_up_train, all_messed_pairs, corrected_directions)
- if org_size == 0:
- print(f"{direction} has size 0")
- continue
- real_data_situation.append(
- (direction, new_size, org_size, org_size - new_size, (org_size - new_size) / org_size * 100)
- )
- print('corrected directions: ', corrected_directions)
- df = pd.DataFrame(real_data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent'])
- df = df.sort_values('remove_percent', ascending=False)
- df.to_csv(f'{raw_data}/actual_clean_summary.tsv', sep='\t')
- print(f'actual data clean summary (which can be different from the projected one because of duplications) in: {raw_data}/actual_clean_summary.tsv')
-
- import shutil
- for direction in directions:
- src_lang, tgt_lang = direction.split('-')
- for split in ['train', 'valid', 'test']:
- # copying valid, test and uncorrected train
- if direction in corrected_directions and split == 'train':
- continue
- tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}"
- src = f"{raw_data}/{split}.{direction}.{src_lang}"
- if not (os.path.exists(src) and os.path.exists(tgt)):
- continue
- corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}"
- corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}"
- print(f'copying {src} to {corrected_src}')
- shutil.copyfile(src, corrected_src)
- print(f'copying {tgt} to {corrected_tgt}')
- shutil.copyfile(tgt, corrected_tgt)
-
- print('completed')
\ No newline at end of file
diff --git a/spaces/Illumotion/Koboldcpp/examples/llama-bench/llama-bench.cpp b/spaces/Illumotion/Koboldcpp/examples/llama-bench/llama-bench.cpp
deleted file mode 100644
index a04115c962655ac70a3de0cd721537ecf40d3f58..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/llama-bench/llama-bench.cpp
+++ /dev/null
@@ -1,1078 +0,0 @@
-#include <algorithm>
-#include <array>
-#include <cassert>
-#include <cctype>
-#include <chrono>
-#include <clocale>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <ctime>
-#include <iterator>
-#include <map>
-#include <memory>
-#include <numeric>
-#include <regex>
-#include <sstream>
-#include <string>
-#include <vector>
-#include
-
-#include "ggml.h"
-#include "llama.h"
-#include "common.h"
-#include "build-info.h"
-#include "ggml-cuda.h"
-
-// utils
-static uint64_t get_time_ns() {
- using clock = std::chrono::high_resolution_clock;
- return std::chrono::nanoseconds(clock::now().time_since_epoch()).count();
-}
-
-template <typename T>
-static std::string join(const std::vector<T> & values, const std::string & delim) {
- std::ostringstream str;
- for (size_t i = 0; i < values.size(); i++) {
- str << values[i];
- if (i < values.size() - 1) {
- str << delim;
- }
- }
- return str.str();
-}
-
-template <typename T>
-static std::vector<T> split(const std::string & str, char delim) {
- std::vector<T> values;
- std::istringstream str_stream(str);
- std::string token;
- while (std::getline(str_stream, token, delim)) {
- T value;
- std::istringstream token_stream(token);
- token_stream >> value;
- values.push_back(value);
- }
- return values;
-}
-
-template <typename T>
-static T avg(const std::vector<T> & v) {
- if (v.empty()) {
- return 0;
- }
- T sum = std::accumulate(v.begin(), v.end(), T(0));
- return sum / (T)v.size();
-}
-
-template <typename T>
-static T stdev(const std::vector<T> & v) {
- if (v.size() <= 1) {
- return 0;
- }
- T mean = avg(v);
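- // sample standard deviation computed from the sum of squares: sqrt((sum(x^2) - n*mean^2) / (n - 1))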
- T sq_sum = std::inner_product(v.begin(), v.end(), v.begin(), T(0));
- T stdev = std::sqrt(sq_sum / (T)(v.size() - 1) - mean * mean * (T)v.size() / (T)(v.size() - 1));
- return stdev;
-}
-
-static std::string get_cpu_info() {
- std::string id;
-#ifdef __linux__
- FILE * f = fopen("/proc/cpuinfo", "r");
- if (f) {
- char buf[1024];
- while (fgets(buf, sizeof(buf), f)) {
- if (strncmp(buf, "model name", 10) == 0) {
- char * p = strchr(buf, ':');
- if (p) {
- p++;
- while (std::isspace(*p)) {
- p++;
- }
- while (std::isspace(p[strlen(p) - 1])) {
- p[strlen(p) - 1] = '\0';
- }
- id = p;
- break;
- }
- }
- }
- }
-#endif
- // TODO: other platforms
- return id;
-}
-
-static std::string get_gpu_info() {
- std::string id;
-#ifdef GGML_USE_CUBLAS
- int count = ggml_cuda_get_device_count();
- for (int i = 0; i < count; i++) {
- char buf[128];
- ggml_cuda_get_device_description(i, buf, sizeof(buf));
- id += buf;
- if (i < count - 1) {
- id += "/";
- }
- }
-#endif
- // TODO: other backends
- return id;
-}
-
-// command line params
-enum output_formats {CSV, JSON, MARKDOWN, SQL};
-
-struct cmd_params {
- std::vector<std::string> model;
- std::vector<int> n_prompt;
- std::vector<int> n_gen;
- std::vector<int> n_batch;
- std::vector<bool> f32_kv;
- std::vector<int> n_threads;
- std::vector<int> n_gpu_layers;
- std::vector<int> main_gpu;
- std::vector<bool> mul_mat_q;
- std::vector<std::array<float, LLAMA_MAX_DEVICES>> tensor_split;
- int reps;
- bool verbose;
- output_formats output_format;
-};
-
-static const cmd_params cmd_params_defaults = {
- /* model */ {"models/7B/ggml-model-q4_0.gguf"},
- /* n_prompt */ {512},
- /* n_gen */ {128},
- /* n_batch */ {512},
- /* f32_kv */ {false},
- /* n_threads */ {get_num_physical_cores()},
- /* n_gpu_layers */ {99},
- /* main_gpu */ {0},
- /* mul_mat_q */ {true},
- /* tensor_split */ {{}},
- /* reps */ 5,
- /* verbose */ false,
- /* output_format */ MARKDOWN
-};
-
-static void print_usage(int /* argc */, char ** argv) {
- printf("usage: %s [options]\n", argv[0]);
- printf("\n");
- printf("options:\n");
- printf(" -h, --help\n");
- printf(" -m, --model (default: %s)\n", join(cmd_params_defaults.model, ",").c_str());
- printf(" -p, --n-prompt (default: %s)\n", join(cmd_params_defaults.n_prompt, ",").c_str());
- printf(" -n, --n-gen (default: %s)\n", join(cmd_params_defaults.n_gen, ",").c_str());
- printf(" -b, --batch-size (default: %s)\n", join(cmd_params_defaults.n_batch, ",").c_str());
- printf(" --memory-f32 <0|1> (default: %s)\n", join(cmd_params_defaults.f32_kv, ",").c_str());
- printf(" -t, --threads (default: %s)\n", join(cmd_params_defaults.n_threads, ",").c_str());
- printf(" -ngl, --n-gpu-layers (default: %s)\n", join(cmd_params_defaults.n_gpu_layers, ",").c_str());
- printf(" -mg, --main-gpu (default: %s)\n", join(cmd_params_defaults.main_gpu, ",").c_str());
- printf(" -mmq, --mul-mat-q <0|1> (default: %s)\n", join(cmd_params_defaults.mul_mat_q, ",").c_str());
- printf(" -ts, --tensor_split \n");
- printf(" -r, --repetitions (default: %d)\n", cmd_params_defaults.reps);
- printf(" -o, --output (default: %s)\n", cmd_params_defaults.output_format == CSV ? "csv" : cmd_params_defaults.output_format == JSON ? "json" : cmd_params_defaults.output_format == MARKDOWN ? "md" : "sql");
- printf(" -v, --verbose (default: %s)\n", cmd_params_defaults.verbose ? "1" : "0");
- printf("\n");
- printf("Multiple values can be given for each parameter by separating them with ',' or by specifying the parameter multiple times.\n");
-
-}
-
-static cmd_params parse_cmd_params(int argc, char ** argv) {
- cmd_params params;
- std::string arg;
- bool invalid_param = false;
- const std::string arg_prefix = "--";
- const char split_delim = ',';
-
- params.verbose = cmd_params_defaults.verbose;
- params.output_format = cmd_params_defaults.output_format;
- params.reps = cmd_params_defaults.reps;
-
- for (int i = 1; i < argc; i++) {
- arg = argv[i];
- if (arg.compare(0, arg_prefix.size(), arg_prefix) == 0) {
- std::replace(arg.begin(), arg.end(), '_', '-');
- }
-
- if (arg == "-h" || arg == "--help") {
- print_usage(argc, argv);
- exit(0);
- } else if (arg == "-m" || arg == "--model") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<std::string>(argv[i], split_delim);
- params.model.insert(params.model.end(), p.begin(), p.end());
- } else if (arg == "-p" || arg == "--n-prompt") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<int>(argv[i], split_delim);
- params.n_prompt.insert(params.n_prompt.end(), p.begin(), p.end());
- } else if (arg == "-n" || arg == "--n-gen") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<int>(argv[i], split_delim);
- params.n_gen.insert(params.n_gen.end(), p.begin(), p.end());
- } else if (arg == "-b" || arg == "--batch-size") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<int>(argv[i], split_delim);
- params.n_batch.insert(params.n_batch.end(), p.begin(), p.end());
- } else if (arg == "--memory-f32") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<bool>(argv[i], split_delim);
- params.f32_kv.insert(params.f32_kv.end(), p.begin(), p.end());
- } else if (arg == "-t" || arg == "--threads") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<int>(argv[i], split_delim);
- params.n_threads.insert(params.n_threads.end(), p.begin(), p.end());
- } else if (arg == "-ngl" || arg == "--n-gpu-layers") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<int>(argv[i], split_delim);
- params.n_gpu_layers.insert(params.n_gpu_layers.end(), p.begin(), p.end());
- } else if (arg == "-mg" || arg == "--main-gpu") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- params.main_gpu = split<int>(argv[i], split_delim);
- } else if (arg == "-mmq" || arg == "--mul-mat-q") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- auto p = split<bool>(argv[i], split_delim);
- params.mul_mat_q.insert(params.mul_mat_q.end(), p.begin(), p.end());
- } else if (arg == "-ts" || arg == "--tensor-split") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- for (auto ts : split<std::string>(argv[i], split_delim)) {
- // split string by ; and /
- const std::regex regex{R"([;/]+)"};
- std::sregex_token_iterator it{ts.begin(), ts.end(), regex, -1};
- std::vector<std::string> split_arg{it, {}};
- GGML_ASSERT(split_arg.size() <= LLAMA_MAX_DEVICES);
-
- std::array<float, LLAMA_MAX_DEVICES> tensor_split;
- for (size_t i = 0; i < LLAMA_MAX_DEVICES; ++i) {
- if (i < split_arg.size()) {
- tensor_split[i] = std::stof(split_arg[i]);
- } else {
- tensor_split[i] = 0.0f;
- }
- }
- params.tensor_split.push_back(tensor_split);
- }
- } else if (arg == "-r" || arg == "--repetitions") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- params.reps = std::stoi(argv[i]);
- } else if (arg == "-o" || arg == "--output") {
- if (++i >= argc) {
- invalid_param = true;
- break;
- }
- if (argv[i] == std::string("csv")) {
- params.output_format = CSV;
- } else if (argv[i] == std::string("json")) {
- params.output_format = JSON;
- } else if (argv[i] == std::string("md")) {
- params.output_format = MARKDOWN;
- } else if (argv[i] == std::string("sql")) {
- params.output_format = SQL;
- } else {
- invalid_param = true;
- break;
- }
- } else if (arg == "-v" || arg == "--verbose") {
- params.verbose = true;
- } else {
- invalid_param = true;
- break;
- }
- }
- if (invalid_param) {
- fprintf(stderr, "error: invalid parameter for argument: %s\n", arg.c_str());
- print_usage(argc, argv);
- exit(1);
- }
-
- // set defaults
- if (params.model.empty()) { params.model = cmd_params_defaults.model; }
- if (params.n_prompt.empty()) { params.n_prompt = cmd_params_defaults.n_prompt; }
- if (params.n_gen.empty()) { params.n_gen = cmd_params_defaults.n_gen; }
- if (params.n_batch.empty()) { params.n_batch = cmd_params_defaults.n_batch; }
- if (params.f32_kv.empty()) { params.f32_kv = cmd_params_defaults.f32_kv; }
- if (params.n_gpu_layers.empty()) { params.n_gpu_layers = cmd_params_defaults.n_gpu_layers; }
- if (params.main_gpu.empty()) { params.main_gpu = cmd_params_defaults.main_gpu; }
- if (params.mul_mat_q.empty()) { params.mul_mat_q = cmd_params_defaults.mul_mat_q; }
- if (params.tensor_split.empty()) { params.tensor_split = cmd_params_defaults.tensor_split; }
- if (params.n_threads.empty()) { params.n_threads = cmd_params_defaults.n_threads; }
-
- return params;
-}
-
-struct cmd_params_instance {
- std::string model;
- int n_prompt;
- int n_gen;
- int n_batch;
- bool f32_kv;
- int n_threads;
- int n_gpu_layers;
- int main_gpu;
- bool mul_mat_q;
- std::array<float, LLAMA_MAX_DEVICES> tensor_split;
-
- llama_model_params to_llama_mparams() const {
- llama_model_params mparams = llama_model_default_params();
-
- mparams.n_gpu_layers = n_gpu_layers;
- mparams.main_gpu = main_gpu;
- mparams.tensor_split = tensor_split.data();
-
- return mparams;
- }
-
- bool equal_mparams(const cmd_params_instance & other) const {
- return model == other.model &&
- n_gpu_layers == other.n_gpu_layers &&
- main_gpu == other.main_gpu &&
- tensor_split == other.tensor_split;
- }
-
- llama_context_params to_llama_cparams() const {
- llama_context_params cparams = llama_context_default_params();
-
- cparams.n_ctx = n_prompt + n_gen;
- cparams.n_batch = n_batch;
- cparams.f16_kv = !f32_kv;
- cparams.mul_mat_q = mul_mat_q;
-
- return cparams;
- }
-};
-
-static std::vector<cmd_params_instance> get_cmd_params_instances_int(const cmd_params & params, int n_gen, int n_prompt) {
- std::vector<cmd_params_instance> instances;
-
- for (const auto & m : params.model)
- for (const auto & nl : params.n_gpu_layers)
- for (const auto & mg : params.main_gpu)
- for (const auto & ts : params.tensor_split)
- for (const auto & nb : params.n_batch)
- for (const auto & fk : params.f32_kv)
- for (const auto & mmq : params.mul_mat_q)
- for (const auto & nt : params.n_threads) {
- cmd_params_instance instance = {
- /* .model = */ m,
- /* .n_prompt = */ n_prompt,
- /* .n_gen = */ n_gen,
- /* .n_batch = */ nb,
- /* .f32_kv = */ fk,
- /* .n_threads = */ nt,
- /* .n_gpu_layers = */ nl,
- /* .main_gpu = */ mg,
- /* .mul_mat_q = */ mmq,
- /* .tensor_split = */ ts,
- };
- instances.push_back(instance);
- }
- return instances;
-}
-
-static std::vector<cmd_params_instance> get_cmd_params_instances(const cmd_params & params) {
- std::vector<cmd_params_instance> instances;
-
-#if 1
- // this ordering minimizes the number of times that each model needs to be reloaded
- for (const auto & m : params.model)
- for (const auto & nl : params.n_gpu_layers)
- for (const auto & mg : params.main_gpu)
- for (const auto & ts : params.tensor_split)
- for (const auto & nb : params.n_batch)
- for (const auto & fk : params.f32_kv)
- for (const auto & mmq : params.mul_mat_q)
- for (const auto & nt : params.n_threads) {
- for (const auto & n_prompt : params.n_prompt) {
- if (n_prompt == 0) {
- continue;
- }
- cmd_params_instance instance = {
- /* .model = */ m,
- /* .n_prompt = */ n_prompt,
- /* .n_gen = */ 0,
- /* .n_batch = */ nb,
- /* .f32_kv = */ fk,
- /* .n_threads = */ nt,
- /* .n_gpu_layers = */ nl,
- /* .main_gpu = */ mg,
- /* .mul_mat_q = */ mmq,
- /* .tensor_split = */ ts,
- };
- instances.push_back(instance);
- }
-
- for (const auto & n_gen : params.n_gen) {
- if (n_gen == 0) {
- continue;
- }
- cmd_params_instance instance = {
- /* .model = */ m,
- /* .n_prompt = */ 0,
- /* .n_gen = */ n_gen,
- /* .n_batch = */ nb,
- /* .f32_kv = */ fk,
- /* .n_threads = */ nt,
- /* .n_gpu_layers = */ nl,
- /* .main_gpu = */ mg,
- /* .mul_mat_q = */ mmq,
- /* .tensor_split = */ ts,
- };
- instances.push_back(instance);
- }
- }
-#else
- // this ordering separates the prompt and generation tests
- for (const auto & n_prompt : params.n_prompt) {
- if (n_prompt == 0) {
- continue;
- }
- auto instances_prompt = get_cmd_params_instances_int(params, 0, n_prompt);
- instances.insert(instances.end(), instances_prompt.begin(), instances_prompt.end());
- }
-
- for (const auto & n_gen : params.n_gen) {
- if (n_gen == 0) {
- continue;
- }
- auto instances_gen = get_cmd_params_instances_int(params, n_gen, 0);
- instances.insert(instances.end(), instances_gen.begin(), instances_gen.end());
- }
-#endif
-
- return instances;
-}
-
-struct test {
- static const std::string build_commit;
- static const int build_number;
- static const bool cuda;
- static const bool opencl;
- static const bool metal;
- static const bool gpu_blas;
- static const bool blas;
- static const std::string cpu_info;
- static const std::string gpu_info;
- std::string model_filename;
- std::string model_type;
- uint64_t model_size;
- uint64_t model_n_params;
- int n_batch;
- int n_threads;
- bool f32_kv;
- int n_gpu_layers;
- int main_gpu;
- bool mul_mat_q;
- std::array<float, LLAMA_MAX_DEVICES> tensor_split;
- int n_prompt;
- int n_gen;
- std::string test_time;
- std::vector<uint64_t> samples_ns;
-
- test(const cmd_params_instance & inst, const llama_model * lmodel, const llama_context * ctx) {
- model_filename = inst.model;
- char buf[128];
- llama_model_desc(lmodel, buf, sizeof(buf));
- model_type = buf;
- model_size = llama_model_size(lmodel);
- model_n_params = llama_model_n_params(lmodel);
- n_batch = inst.n_batch;
- n_threads = inst.n_threads;
- f32_kv = inst.f32_kv;
- n_gpu_layers = inst.n_gpu_layers;
- main_gpu = inst.main_gpu;
- mul_mat_q = inst.mul_mat_q;
- tensor_split = inst.tensor_split;
- n_prompt = inst.n_prompt;
- n_gen = inst.n_gen;
- // RFC 3339 date-time format
- time_t t = time(NULL);
- std::strftime(buf, sizeof(buf), "%FT%TZ", gmtime(&t));
- test_time = buf;
-
- (void) ctx;
- }
-
- uint64_t avg_ns() const {
- return ::avg(samples_ns);
- }
-
- uint64_t stdev_ns() const {
- return ::stdev(samples_ns);
- }
-
- std::vector<double> get_ts() const {
- int n_tokens = n_prompt + n_gen;
- std::vector<double> ts;
- std::transform(samples_ns.begin(), samples_ns.end(), std::back_inserter(ts), [n_tokens](uint64_t t) { return 1e9 * n_tokens / t; });
- return ts;
- }
-
- double avg_ts() const {
- return ::avg(get_ts());
- }
-
- double stdev_ts() const {
- return ::stdev(get_ts());
- }
-
- static std::string get_backend() {
- if (cuda) {
- return GGML_CUDA_NAME;
- }
- if (opencl) {
- return "OpenCL";
- }
- if (metal) {
- return "Metal";
- }
- if (gpu_blas) {
- return "GPU BLAS";
- }
- if (blas) {
- return "BLAS";
- }
- return "CPU";
- }
-
- static const std::vector<std::string> & get_fields() {
- static const std::vector<std::string> fields = {
- "build_commit", "build_number",
- "cuda", "opencl", "metal", "gpu_blas", "blas",
- "cpu_info", "gpu_info",
- "model_filename", "model_type", "model_size", "model_n_params",
- "n_batch", "n_threads", "f16_kv",
- "n_gpu_layers", "main_gpu", "mul_mat_q", "tensor_split",
- "n_prompt", "n_gen", "test_time",
- "avg_ns", "stddev_ns",
- "avg_ts", "stddev_ts"
- };
- return fields;
- }
-
- enum field_type {STRING, BOOL, INT, FLOAT};
-
- static field_type get_field_type(const std::string & field) {
- if (field == "build_number" || field == "n_batch" || field == "n_threads" ||
- field == "model_size" || field == "model_n_params" ||
- field == "n_gpu_layers" || field == "main_gpu" ||
- field == "n_prompt" || field == "n_gen" ||
- field == "avg_ns" || field == "stddev_ns") {
- return INT;
- }
- if (field == "cuda" || field == "opencl" || field == "metal" || field == "gpu_blas" || field == "blas" ||
- field == "f16_kv" || field == "mul_mat_q") {
- return BOOL;
- }
- if (field == "avg_ts" || field == "stddev_ts") {
- return FLOAT;
- }
- return STRING;
- }
-
- std::vector<std::string> get_values() const {
- std::string tensor_split_str;
- int max_nonzero = 0;
- for (int i = 0; i < LLAMA_MAX_DEVICES; i++) {
- if (tensor_split[i] > 0) {
- max_nonzero = i;
- }
- }
- for (int i = 0; i <= max_nonzero; i++) {
- char buf[32];
- snprintf(buf, sizeof(buf), "%.2f", tensor_split[i]);
- tensor_split_str += buf;
- if (i < max_nonzero) {
- tensor_split_str += "/";
- }
- }
- std::vector<std::string> values = {
- build_commit, std::to_string(build_number),
- std::to_string(cuda), std::to_string(opencl), std::to_string(metal), std::to_string(gpu_blas), std::to_string(blas),
- cpu_info, gpu_info,
- model_filename, model_type, std::to_string(model_size), std::to_string(model_n_params),
- std::to_string(n_batch), std::to_string(n_threads), std::to_string(!f32_kv),
- std::to_string(n_gpu_layers), std::to_string(main_gpu), std::to_string(mul_mat_q), tensor_split_str,
- std::to_string(n_prompt), std::to_string(n_gen), test_time,
- std::to_string(avg_ns()), std::to_string(stdev_ns()),
- std::to_string(avg_ts()), std::to_string(stdev_ts())
- };
- return values;
- }
-
- std::map<std::string, std::string> get_map() const {
- std::map<std::string, std::string> map;
- auto fields = get_fields();
- auto values = get_values();
- std::transform(fields.begin(), fields.end(), values.begin(),
- std::inserter(map, map.end()), std::make_pair<const std::string &, const std::string &>);
- return map;
- }
-};
-
-const std::string test::build_commit = BUILD_COMMIT;
-const int test::build_number = BUILD_NUMBER;
-const bool test::cuda = !!ggml_cpu_has_cublas();
-const bool test::opencl = !!ggml_cpu_has_clblast();
-const bool test::metal = !!ggml_cpu_has_metal();
-const bool test::gpu_blas = !!ggml_cpu_has_gpublas();
-const bool test::blas = !!ggml_cpu_has_blas();
-const std::string test::cpu_info = get_cpu_info();
-const std::string test::gpu_info = get_gpu_info();
-
-struct printer {
- virtual ~printer() {}
-
- FILE * fout;
- virtual void print_header(const cmd_params & params) { (void) params; }
- virtual void print_test(const test & t) = 0;
- virtual void print_footer() { }
-};
-
-struct csv_printer : public printer {
- static std::string escape_csv(const std::string & field) {
- std::string escaped = "\"";
- for (auto c : field) {
- if (c == '"') {
- escaped += "\"";
- }
- escaped += c;
- }
- escaped += "\"";
- return escaped;
- }
-
- void print_header(const cmd_params & params) override {
- std::vector<std::string> fields = test::get_fields();
- fprintf(fout, "%s\n", join(fields, ",").c_str());
- (void) params;
- }
-
- void print_test(const test & t) override {
- std::vector<std::string> values = t.get_values();
- std::transform(values.begin(), values.end(), values.begin(), escape_csv);
- fprintf(fout, "%s\n", join(values, ",").c_str());
- }
-};
-
-struct json_printer : public printer {
- bool first = true;
-
- static std::string escape_json(const std::string & value) {
- std::string escaped;
- for (auto c : value) {
- if (c == '"') {
- escaped += "\\\"";
- } else if (c == '\\') {
- escaped += "\\\\";
- } else if (c <= 0x1f) {
- char buf[8];
- snprintf(buf, sizeof(buf), "\\u%04x", c);
- escaped += buf;
- } else {
- escaped += c;
- }
- }
- return escaped;
- }
-
- static std::string format_value(const std::string & field, const std::string & value) {
- switch (test::get_field_type(field)) {
- case test::STRING:
- return "\"" + escape_json(value) + "\"";
- case test::BOOL:
- return value == "0" ? "false" : "true";
- default:
- return value;
- }
- }
-
- void print_header(const cmd_params & params) override {
- fprintf(fout, "[\n");
- (void) params;
- }
-
- void print_fields(const std::vector<std::string> & fields, const std::vector<std::string> & values) {
- assert(fields.size() == values.size());
- for (size_t i = 0; i < fields.size(); i++) {
- fprintf(fout, " \"%s\": %s,\n", fields.at(i).c_str(), format_value(fields.at(i), values.at(i)).c_str());
- }
- }
-
- void print_test(const test & t) override {
- if (first) {
- first = false;
- } else {
- fprintf(fout, ",\n");
- }
- fprintf(fout, " {\n");
- print_fields(test::get_fields(), t.get_values());
- fprintf(fout, " \"samples_ns\": [ %s ],\n", join(t.samples_ns, ", ").c_str());
- fprintf(fout, " \"samples_ts\": [ %s ]\n", join(t.get_ts(), ", ").c_str());
- fprintf(fout, " }");
- fflush(fout);
- }
-
- void print_footer() override {
- fprintf(fout, "\n]\n");
- }
-};
-
-struct markdown_printer : public printer {
- std::vector<std::string> fields;
-
- static int get_field_width(const std::string & field) {
- if (field == "model") {
- return -30;
- }
- if (field == "t/s") {
- return 16;
- }
- if (field == "size" || field == "params") {
- return 10;
- }
- if (field == "n_gpu_layers") {
- return 3;
- }
-
- int width = std::max((int)field.length(), 10);
-
- if (test::get_field_type(field) == test::STRING) {
- return -width;
- }
- return width;
- }
-
- static std::string get_field_display_name(const std::string & field) {
- if (field == "n_gpu_layers") {
- return "ngl";
- }
- if (field == "n_threads") {
- return "threads";
- }
- if (field == "mul_mat_q") {
- return "mmq";
- }
- if (field == "tensor_split") {
- return "ts";
- }
- return field;
- }
-
- void print_header(const cmd_params & params) override {
- // select fields to print
- fields.push_back("model");
- fields.push_back("size");
- fields.push_back("params");
- fields.push_back("backend");
- bool is_cpu_backend = test::get_backend() == "CPU" || test::get_backend() == "BLAS";
- if (!is_cpu_backend) {
- fields.push_back("n_gpu_layers");
- }
- if (params.n_threads.size() > 1 || params.n_threads != cmd_params_defaults.n_threads || is_cpu_backend) {
- fields.push_back("n_threads");
- }
- if (params.n_batch.size() > 1 || params.n_batch != cmd_params_defaults.n_batch) {
- fields.push_back("n_batch");
- }
- if (params.f32_kv.size() > 1 || params.f32_kv != cmd_params_defaults.f32_kv) {
- fields.push_back("f16_kv");
- }
- if (params.main_gpu.size() > 1 || params.main_gpu != cmd_params_defaults.main_gpu) {
- fields.push_back("main_gpu");
- }
- if (params.mul_mat_q.size() > 1 || params.mul_mat_q != cmd_params_defaults.mul_mat_q) {
- fields.push_back("mul_mat_q");
- }
- if (params.tensor_split.size() > 1 || params.tensor_split != cmd_params_defaults.tensor_split) {
- fields.push_back("tensor_split");
- }
- fields.push_back("test");
- fields.push_back("t/s");
-
- fprintf(fout, "|");
- for (const auto & field : fields) {
- fprintf(fout, " %*s |", get_field_width(field), get_field_display_name(field).c_str());
- }
- fprintf(fout, "\n");
- fprintf(fout, "|");
- for (const auto & field : fields) {
- int width = get_field_width(field);
- fprintf(fout, " %s%s |", std::string(std::abs(width) - 1, '-').c_str(), width > 0 ? ":" : "-");
- }
- fprintf(fout, "\n");
- }
-
- void print_test(const test & t) override {
- std::map<std::string, std::string> vmap = t.get_map();
-
- fprintf(fout, "|");
- for (const auto & field : fields) {
- std::string value;
- char buf[128];
- if (field == "model") {
- value = t.model_type;
- } else if (field == "size") {
- if (t.model_size < 1024*1024*1024) {
- snprintf(buf, sizeof(buf), "%.2f MiB", t.model_size / 1024.0 / 1024.0);
- } else {
- snprintf(buf, sizeof(buf), "%.2f GiB", t.model_size / 1024.0 / 1024.0 / 1024.0);
- }
- value = buf;
- } else if (field == "params") {
- if (t.model_n_params < 1000*1000*1000) {
- snprintf(buf, sizeof(buf), "%.2f M", t.model_n_params / 1e6);
- } else {
- snprintf(buf, sizeof(buf), "%.2f B", t.model_n_params / 1e9);
- }
- value = buf;
- } else if (field == "backend") {
- value = test::get_backend();
- } else if (field == "test") {
- if (t.n_prompt > 0 && t.n_gen == 0) {
- snprintf(buf, sizeof(buf), "pp %d", t.n_prompt);
- } else if (t.n_gen > 0 && t.n_prompt == 0) {
- snprintf(buf, sizeof(buf), "tg %d", t.n_gen);
- } else {
- assert(false);
- exit(1);
- }
- value = buf;
- } else if (field == "t/s") {
- snprintf(buf, sizeof(buf), "%.2f ± %.2f", t.avg_ts(), t.stdev_ts());
- value = buf;
- } else if (vmap.find(field) != vmap.end()) {
- value = vmap.at(field);
- } else {
- assert(false);
- exit(1);
- }
-
- int width = get_field_width(field);
- if (field == "t/s") {
- // HACK: the utf-8 character is 2 bytes
- width += 1;
- }
- fprintf(fout, " %*s |", width, value.c_str());
- }
- fprintf(fout, "\n");
- }
-
- void print_footer() override {
- fprintf(fout, "\nbuild: %s (%d)\n", test::build_commit.c_str(), test::build_number);
- }
-};
-
-struct sql_printer : public printer {
- static std::string get_sql_field_type(const std::string & field) {
- switch (test::get_field_type(field)) {
- case test::STRING:
- return "TEXT";
- case test::BOOL:
- case test::INT:
- return "INTEGER";
- case test::FLOAT:
- return "REAL";
- default:
- assert(false);
- exit(1);
- }
- }
-
- void print_header(const cmd_params & params) override {
- std::vector<std::string> fields = test::get_fields();
- fprintf(fout, "CREATE TABLE IF NOT EXISTS test (\n");
- for (size_t i = 0; i < fields.size(); i++) {
- fprintf(fout, " %s %s%s\n", fields.at(i).c_str(), get_sql_field_type(fields.at(i)).c_str(), i < fields.size() - 1 ? "," : "");
- }
- fprintf(fout, ");\n");
- fprintf(fout, "\n");
- (void) params;
- }
-
- void print_test(const test & t) override {
- fprintf(fout, "INSERT INTO test (%s) ", join(test::get_fields(), ", ").c_str());
- fprintf(fout, "VALUES (");
- std::vector<std::string> values = t.get_values();
- for (size_t i = 0; i < values.size(); i++) {
- fprintf(fout, "'%s'%s", values.at(i).c_str(), i < values.size() - 1 ? ", " : "");
- }
- fprintf(fout, ");\n");
- }
-};
-
-static void test_prompt(llama_context * ctx, int n_prompt, int n_past, int n_batch, int n_threads) {
- std::vector<llama_token> tokens(n_batch, llama_token_bos(ctx));
- int n_processed = 0;
-
- llama_set_n_threads(ctx, n_threads, n_threads);
-
- while (n_processed < n_prompt) {
- int n_tokens = std::min(n_prompt - n_processed, n_batch);
- llama_decode(ctx, llama_batch_get_one(tokens.data(), n_tokens, n_past + n_processed, 0));
- n_processed += n_tokens;
- }
-}
-
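-// benchmark token generation: decode a single token at a time for n_gen steps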
-static void test_gen(llama_context * ctx, int n_gen, int n_past, int n_threads) {
- llama_token token = llama_token_bos(ctx);
-
- llama_set_n_threads(ctx, n_threads, n_threads);
-
- for (int i = 0; i < n_gen; i++) {
- llama_decode(ctx, llama_batch_get_one(&token, 1, n_past + i, 0));
- }
-}
-
-static void llama_null_log_callback(enum ggml_log_level level, const char * text, void * user_data) {
- (void) level;
- (void) text;
- (void) user_data;
-}
-
-int main(int argc, char ** argv) {
- // try to set locale for unicode characters in markdown
- setlocale(LC_CTYPE, ".UTF-8");
-
-#if !defined(NDEBUG)
- fprintf(stderr, "warning: asserts enabled, performance may be affected\n");
-#endif
-
-#if (defined(_MSC_VER) && defined(_DEBUG)) || (!defined(_MSC_VER) && !defined(__OPTIMIZE__))
- fprintf(stderr, "warning: debug build, performance may be affected\n");
-#endif
-
-#if defined(__SANITIZE_ADDRESS__) || defined(__SANITIZE_THREAD__)
- fprintf(stderr, "warning: sanitizer enabled, performance may be affected\n");
-#endif
-
- cmd_params params = parse_cmd_params(argc, argv);
-
- // initialize llama.cpp
- if (!params.verbose) {
- llama_log_set(llama_null_log_callback, NULL);
- }
- bool numa = false;
- llama_backend_init(numa);
-
- // initialize printer
- std::unique_ptr<printer> p;
- switch (params.output_format) {
- case CSV:
- p.reset(new csv_printer());
- break;
- case JSON:
- p.reset(new json_printer());
- break;
- case MARKDOWN:
- p.reset(new markdown_printer());
- break;
- case SQL:
- p.reset(new sql_printer());
- break;
- default:
- assert(false);
- exit(1);
- }
- p->fout = stdout;
- p->print_header(params);
-
- std::vector<cmd_params_instance> params_instances = get_cmd_params_instances(params);
-
- llama_model * lmodel = nullptr;
- const cmd_params_instance * prev_inst = nullptr;
-
- for (const auto & inst : params_instances) {
- // keep the same model between tests when possible
- if (!lmodel || !prev_inst || !inst.equal_mparams(*prev_inst)) {
- if (lmodel) {
- llama_free_model(lmodel);
- }
-
- lmodel = llama_load_model_from_file(inst.model.c_str(), inst.to_llama_mparams());
- if (lmodel == NULL) {
- fprintf(stderr, "%s: error: failed to load model '%s'\n", __func__, inst.model.c_str());
- return 1;
- }
- prev_inst = &inst;
- }
-
- llama_context * ctx = llama_new_context_with_model(lmodel, inst.to_llama_cparams());
- if (ctx == NULL) {
- fprintf(stderr, "%s: error: failed to create context with model '%s'\n", __func__, inst.model.c_str());
- llama_free_model(lmodel);
- return 1;
- }
-
- test t(inst, lmodel, ctx);
-
- llama_kv_cache_tokens_rm(ctx, -1, -1);
-
- // warmup run
- if (t.n_prompt > 0) {
- test_prompt(ctx, std::min(2, t.n_batch), 0, t.n_batch, t.n_threads);
- }
- if (t.n_gen > 0) {
- test_gen(ctx, 1, 0, t.n_threads);
- }
-
- for (int i = 0; i < params.reps; i++) {
- llama_kv_cache_tokens_rm(ctx, -1, -1);
-
- uint64_t t_start = get_time_ns();
- if (t.n_prompt > 0) {
- test_prompt(ctx, t.n_prompt, 0, t.n_batch, t.n_threads);
- }
- if (t.n_gen > 0) {
- test_gen(ctx, t.n_gen, t.n_prompt, t.n_threads);
- }
- uint64_t t_ns = get_time_ns() - t_start;
- t.samples_ns.push_back(t_ns);
- }
-
- p->print_test(t);
-
- llama_print_timings(ctx);
-
- llama_free(ctx);
- }
-
- llama_free_model(lmodel);
-
- p->print_footer();
-
- llama_backend_free();
-
- return 0;
-}
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/chat.py b/spaces/Jamkonams/AutoGPT/autogpt/chat.py
deleted file mode 100644
index 1f6bca96eb216c667656b50f131006b83c681065..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/chat.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import time
-
-from openai.error import RateLimitError
-
-from autogpt import token_counter
-from autogpt.config import Config
-from autogpt.llm_utils import create_chat_completion
-from autogpt.logs import logger
-
-cfg = Config()
-
-
-def create_chat_message(role, content):
- """
- Create a chat message with the given role and content.
-
- Args:
- role (str): The role of the message sender, e.g., "system", "user", or "assistant".
- content (str): The content of the message.
-
- Returns:
- dict: A dictionary containing the role and content of the message.
- """
- return {"role": role, "content": content}
-
-
-def generate_context(prompt, relevant_memory, full_message_history, model):
- current_context = [
- create_chat_message("system", prompt),
- create_chat_message(
- "system", f"The current time and date is {time.strftime('%c')}"
- ),
- create_chat_message(
- "system",
- f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
- ),
- ]
-
- # Add messages from the full message history until we reach the token limit
- next_message_to_add_index = len(full_message_history) - 1
- insertion_index = len(current_context)
- # Count the currently used tokens
- current_tokens_used = token_counter.count_message_tokens(current_context, model)
- return (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- )
-
-
-# TODO: Change debug from hardcode to argument
-def chat_with_ai(
- prompt, user_input, full_message_history, permanent_memory, token_limit
-):
- """Interact with the OpenAI API, sending the prompt, user input, message history,
- and permanent memory."""
- while True:
- try:
- """
- Interact with the OpenAI API, sending the prompt, user input,
- message history, and permanent memory.
-
- Args:
- prompt (str): The prompt explaining the rules to the AI.
- user_input (str): The input from the user.
- full_message_history (list): The list of all messages sent between the
- user and the AI.
- permanent_memory (Obj): The memory object containing the permanent
- memory.
- token_limit (int): The maximum number of tokens allowed in the API call.
-
- Returns:
- str: The AI's response.
- """
- model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
- # Reserve 1000 tokens for the response
-
- logger.debug(f"Token limit: {token_limit}")
- send_token_limit = token_limit - 1000
-
- relevant_memory = (
- ""
- if len(full_message_history) == 0
- else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
- )
-
- logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
-
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(prompt, relevant_memory, full_message_history, model)
-
- while current_tokens_used > 2500:
- # remove memories until we are under 2500 tokens
- relevant_memory = relevant_memory[:-1]
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(
- prompt, relevant_memory, full_message_history, model
- )
-
- current_tokens_used += token_counter.count_message_tokens(
- [create_chat_message("user", user_input)], model
- ) # Account for user input (appended later)
-
- while next_message_to_add_index >= 0:
- # print (f"CURRENT TOKENS USED: {current_tokens_used}")
- message_to_add = full_message_history[next_message_to_add_index]
-
- tokens_to_add = token_counter.count_message_tokens(
- [message_to_add], model
- )
- if current_tokens_used + tokens_to_add > send_token_limit:
- break
-
- # Add the most recent message to the start of the current context,
- # after the two system prompts.
- current_context.insert(
- insertion_index, full_message_history[next_message_to_add_index]
- )
-
- # Count the currently used tokens
- current_tokens_used += tokens_to_add
-
- # Move to the next most recent message in the full message history
- next_message_to_add_index -= 1
-
- # Append user input, the length of this is accounted for above
- current_context.extend([create_chat_message("user", user_input)])
-
- # Calculate remaining tokens
- tokens_remaining = token_limit - current_tokens_used
- # assert tokens_remaining >= 0, "Tokens remaining is negative.
- # This should never happen, please submit a bug report at
- # https://www.github.com/Torantulino/Auto-GPT"
-
- # Debug print the current context
- logger.debug(f"Token limit: {token_limit}")
- logger.debug(f"Send Token Count: {current_tokens_used}")
- logger.debug(f"Tokens remaining for response: {tokens_remaining}")
- logger.debug("------------ CONTEXT SENT TO AI ---------------")
- for message in current_context:
- # Skip printing the prompt
- if message["role"] == "system" and message["content"] == prompt:
- continue
- logger.debug(f"{message['role'].capitalize()}: {message['content']}")
- logger.debug("")
- logger.debug("----------- END OF CONTEXT ----------------")
-
- # TODO: use a model defined elsewhere, so that model can contain
- # temperature and other settings we care about
- assistant_reply = create_chat_completion(
- model=model,
- messages=current_context,
- max_tokens=tokens_remaining,
- )
-
- # Update full message history
- full_message_history.append(create_chat_message("user", user_input))
- full_message_history.append(
- create_chat_message("assistant", assistant_reply)
- )
-
- return assistant_reply
- except RateLimitError:
- # TODO: When we switch to langchain, this is built in
- print("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
- time.sleep(10)
diff --git a/spaces/Jimmie/similar-books/app.py b/spaces/Jimmie/similar-books/app.py
deleted file mode 100644
index d25868ec3fd258410545d220ea69dd5c63eb3632..0000000000000000000000000000000000000000
--- a/spaces/Jimmie/similar-books/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# imports
-import streamlit as st
-from fastai.tabular.all import *
-from PIL import Image
-
-# Preprocessing of App
-path = Path()
-learn_inf = load_learner(path/'final_model.pkl', cpu=True)
-book_factors = learn_inf.model.i_weight.weight
-img = Image.open('header.png')
-books = pd.read_csv('books.csv')
-
-
-def selectbox_with_default(text, values, default, sidebar=False):
- func = st.sidebar.selectbox if sidebar else st.selectbox
- return func(text, np.insert(np.array(values, object), 0, default))
-
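-# rank books by cosine similarity between the selected title's embedding and every other book's embedding from the collaborative-filtering model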
-def get_similar_books(title, number):
- idx = learn_inf.dls.classes['original_title'].o2i[title]
- distances = nn.CosineSimilarity(dim=1)(book_factors, book_factors[idx][None])
- idx = distances.argsort(descending=True)[1:number+1]
- similar = [learn_inf.dls.classes['original_title'][i] for i in idx]
- ids = [int(books.loc[books['original_title']==str(i)]['goodreads_book_id'].values[0]) for i in similar]
- urls = [f'https://www.goodreads.com/book/show/{id}' for id in ids]
- return similar, urls
-
-
-# APP
-st.image(img, width=200)
-st.title('SIMILAR BOOKS')
-st.subheader('A Book Recommendation System')
-"Here's the [GitHub](https://github.com/jimmiemunyi/SimilarBooks) repo."
-
-st.info("Start typing and you will get suggestions of Books we currently have. We Currently have support for 10, 000 Books!")
-title = selectbox_with_default("Which Book Do you want Recommendations From:",
- books['original_title'], default='Select A Book')
-number = st.slider("How many Similar Books do you want?", 1, 10, value=5)
-
-if(st.button("Suggest Similar Books")):
- similar, urls = get_similar_books(title, number)
- st.subheader('Here are your Book Recommendations. Enjoy!')
- for book, url in zip(similar, urls):
- st.write(f'{book}: {url}')
-
-st.title('Developer Details')
-'''
-
-My name is Jimmie Munyi. You can connect with me on [Twitter](https://twitter.com/jimmie_munyi). You can check out other projects I have done from [My GitHub](https://github.com/jimmiemunyi) and from [My Blog](https://jimmiemunyi.github.io/blog/).
-
-If you wish to see how Similar Books was created, read this [blog post](https://jimmiemunyi.github.io/blog/projects/tutorial/2021/02/15/Book-Recommendation-Model-Training.html).
-
-'''
diff --git a/spaces/Jishnnu/Emotion-Detection/app.py b/spaces/Jishnnu/Emotion-Detection/app.py
deleted file mode 100644
index 602c913183065ca0c21840c74fde995d1c44a55e..0000000000000000000000000000000000000000
--- a/spaces/Jishnnu/Emotion-Detection/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import gradio as gr
-import cv2
-import tensorflow as tf
-import numpy as np
-from tensorflow.keras.preprocessing.image import ImageDataGenerator
-from tensorflow.keras.models import load_model
-
-# Load the pre-trained model
-model = tf.keras.models.load_model('Trained_Model.h5')
-
-# Define the emotion labels
-emotion_labels = {
- 0: 'Angry',
- 1: 'Disgust',
- 2: 'Fear',
- 3: 'Happy',
- 4: 'Neutral',
- 5: 'Sad',
- 6: 'Surprise'
-}
-
-# Create the image generator for preprocessing
-img_gen = ImageDataGenerator(rescale=1./255)
-
-# Define the function to predict emotions
-def predict_emotion(file):
- # Load the image or video
- cap = cv2.VideoCapture(file.name)
- if cap.isOpened():
- ret, frame = cap.read()
- # Check if it's an image or video
- if frame is not None:
- # Preprocess the image
- img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
- img = cv2.resize(img, (48, 48))
- img = np.expand_dims(img, axis=-1)
- img = np.expand_dims(img, axis=0)
- img = img.astype('float32')
- img = img_gen.standardize(img)
- # Predict the emotion
- prediction = model.predict(img)
- label = emotion_labels[np.argmax(prediction)]
- else:
- label = "No frames found in the video"
- else:
- label = "Could not open the file"
- return label
-
-# Create the Gradio interface
-input_type = gr.inputs.File(label="Upload an image or video to predict emotions")
-output_type = gr.outputs.Textbox(label="Predicted emotion")
-title = "Emotion Detection"
-description = "Upload an image or video to predict the corresponding emotion"
-iface = gr.Interface(fn=predict_emotion, inputs=input_type, outputs=output_type, title=title, description=description)
-if __name__ == '__main__':
- iface.launch(inline=False)
-
\ No newline at end of file
diff --git a/spaces/Jokerkid/porntech-sex-position/app.py b/spaces/Jokerkid/porntech-sex-position/app.py
deleted file mode 100644
index a32de17c2ac797b0ed3667a085fb8c0a9c397260..0000000000000000000000000000000000000000
--- a/spaces/Jokerkid/porntech-sex-position/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/porntech/sex-position").launch()
\ No newline at end of file
diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/fresnelvis.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/fresnelvis.py
deleted file mode 100644
index 532d95d54df19a2f6a052bba6957742ebadf5bcc..0000000000000000000000000000000000000000
--- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/fresnelvis.py
+++ /dev/null
@@ -1,593 +0,0 @@
-# Borrowed from https://github.com/QhelDIV/xgutils/blob/main/vis/fresnelvis.py
-
-import numpy as np
-import fresnel
-import matplotlib.pyplot as plt
-import math
-import copy
-from scipy.spatial.transform import Rotation as R
-from skimage.color import rgba2rgb, rgb2gray
-
-dflt_camera = dict(
- camPos=np.array([2, 2, 2]),
- camLookat=np.array([0.0, 0.0, 0.0]),
- camUp=np.array([0, 1, 0]),
- camHeight=2,
- fit_camera=False,
- light_samples=32,
- samples=32,
- resolution=(256, 256),
-)
-gold_color = np.array([253, 204, 134]) / 256
-gray_color = np.array([0.9, 0.9, 0.9])
-white_color = np.array([1, 1, 1.0])
-black_color = np.array([0, 0, 0.0])
-red_color = np.array([1.0, 0.0, 0.0])
-
-voxel_mat = dict(specular=0.5, roughness=0.5, metal=1.0, spec_trans=0.0)
-# default_mat = dict(specular=.5, roughness=.5, metal=1., spec_trans=0.)
-
-light_preset = ["lightbox", "Cloudy", "Rembrandt", "loop", "butterfly", "ring"]
-
-
-def addAxes(scene, radius=[0.01, 0.01, 0.01]):
- axs = fresnel.geometry.Cylinder(scene, N=3)
- axs.material = fresnel.material.Material(solid=1.0)
- axs.material.primitive_color_mix = 1.0
- axs.points[:] = [
- [[0, 0, 0], [1, 0, 0]],
- [[0, 0, 0], [0, 1, 0]],
- [[0, 0, 0], [0, 0, 1]],
- ]
- axs.radius[:] = radius
- axs.color[:] = [
- [[1, 0, 0], [1, 0, 0]],
- [[0, 1, 0], [0, 1, 0]],
- [[0, 0, 1], [0, 0, 1]],
- ]
-
-
-def addBBox(
- scene,
- bb_min=np.array([-1, -1, -1.0]),
- bb_max=np.array([1, 1, 1.0]),
- color=red_color,
- radius=0.005,
- solid=1.0,
-):
- axs = fresnel.geometry.Cylinder(scene, N=12)
- axs.material = fresnel.material.Material(
- color=fresnel.color.linear(color), solid=solid, spec_trans=0.4
- )
- # axs.material.primitive_color_mix = 1.0
- pts = []
- xi, yi, zi = bb_min
- xa, ya, za = bb_max
- axs.points[:] = [
- [[xi, yi, zi], [xa, yi, zi]],
- [[xi, yi, zi], [xi, ya, zi]],
- [[xi, yi, zi], [xi, yi, za]], #
- [[xi, ya, za], [xa, ya, za]],
- [[xi, ya, za], [xi, yi, za]],
- [[xi, ya, za], [xi, ya, zi]], #
- [[xa, ya, zi], [xi, ya, zi]],
- [[xa, ya, zi], [xa, yi, zi]],
- [[xa, ya, zi], [xa, ya, za]], #
- [[xa, yi, za], [xi, yi, za]],
- [[xa, yi, za], [xa, ya, za]],
- [[xa, yi, za], [xa, yi, zi]], #
- ]
- axs.radius[:] = radius
- axs.color[:] = [[[0.5, 0, 0], [0.5, 0, 0]]] * 12
-
-
-def addBox(
- scene,
- center,
- spec=(1, 1, 1),
- color=gray_color,
- solid=0.0,
- outline_width=0.0,
- metal=0.0,
- specular=0.0,
- roughness=1.0,
- **kwargs
-):
- X, Y, Z = spec[0], spec[1], spec[2]
- poly_info = fresnel.util.convex_polyhedron_from_vertices(
- [
- [-X, -Y, -Z],
- [-X, -Y, Z],
- [-X, Y, -Z],
- [-X, Y, Z],
- [X, -Y, -Z],
- [X, -Y, Z],
- [X, Y, -Z],
- [X, Y, Z],
- ]
- )
- geometry = fresnel.geometry.ConvexPolyhedron(
- scene, poly_info, position=center, outline_width=outline_width
- ) # 0.015)
- geometry.material = fresnel.material.Material(
- roughness=roughness, solid=solid, specular=specular, metal=metal, **kwargs
- )
- # if len(color)!=3:
- geometry.material.primitive_color_mix = 1.0
- geometry.material.color = fresnel.color.linear([1, 1, 1])
- geometry.color[:] = color
- geometry.outline_material = fresnel.material.Material(
- color=fresnel.color.linear([0, 0, 0]), roughness=0.3, metal=0.0
- )
- # geometry.color[:] = color
- geometry.outline_material.primitive_color_mix = 0.7
- geometry.outline_material.solid = 0.0
-
-
-def addPlane(
- scene, center, up=(0, 1, 0), spec=(1, 1), color=white_color, solid=0.0, **kwargs
-):
- X, Z = spec[0], spec[1]
- poly_info = np.array([[-X, 0, -Z], [X, 0, -Z], [X, 0, Z], [-X, 0, Z]])
- vertices = poly_info[[0, 1, 3, 3, 1, 2]]
- geometry = fresnel.geometry.Mesh(
- scene, N=1, vertices=vertices, position=center, outline_width=0
- )
- geometry.material = fresnel.material.Material(
- roughness=1.0, specular=0.0, color=color, solid=solid
- )
- geometry.material.primitive_color_mix = (
- 0.0 # Set 0 to use the color specified in the Material,
- )
-
-
-def get_cam2world(camera, lookat=np.array([0, 0, 0]), up=np.array([0, 1, 0])):
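- # look-at frame: the columns of rot are the camera's x/y/z axes in world coordinates; shift translates the camera to the origin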
- shift = -camera
- z_axis = -lookat + camera # +z
- x_axis = np.cross(up, z_axis)
- y_axis = np.cross(z_axis, x_axis)
- x_axis = x_axis / np.sqrt(np.sum(x_axis**2))
- y_axis = y_axis / np.sqrt(np.sum(y_axis**2))
- z_axis = z_axis / np.sqrt(np.sum(z_axis**2))
- rot = np.array([x_axis, y_axis, z_axis]).transpose()
- return shift, rot
-
-
-def world2camera(point, camera):
- point, camera = np.array(point), np.array(camera)
- shift, rot = get_cam2world(camera)
- rot = R.from_matrix(rot.transpose())
- return rot.apply(point)
-
-
-def add_world_light(scene, direction, camera_pos, color, theta=1.0):
- world_dir = direction
- cam_dir = world2camera(world_dir, camera_pos)
- new_light = fresnel.light.Light(direction=cam_dir, color=color, theta=theta)
- scene.lights.append(new_light)
- return new_light
-
-
-def get_world_lights(directions, colors, thetas, camera_pos):
- lights = []
- for i, direction in enumerate(directions):
- world_dir = direction
- cam_dir = world2camera(world_dir, camera_pos)
- new_light = fresnel.light.Light(
- direction=cam_dir, color=colors[i], theta=thetas[i]
- )
- lights.append(new_light)
- return lights
-
-
-def old_renderMeshCloud(
- mesh=None,
- meshC=gray_color,
- mesh_outline_width=None,
- meshflat=False, # mesh settings
- cloud=None,
- cloudR=0.006,
- cloudC=None, # pc settings
- camPos=None,
- camLookat=None,
- camUp=np.array([0, 0, 1]),
- camHeight=1.0, # camera settings
- samples=32,
- axes=False,
- bbox=False,
- resolution=(1024, 1024), # render settings
- lights="rembrandt",
- **kwargs
-):
- device = fresnel.Device()
- scene = fresnel.Scene(device)
- if mesh is not None and mesh["vert"].shape[0] > 0:
- mesh = fresnel.geometry.Mesh(
- scene, vertices=mesh["vert"][mesh["face"]].reshape(-1, 3), N=1
- )
- mesh.material = fresnel.material.Material(
- color=fresnel.color.linear(meshC),
- roughness=0.3,
- specular=1.0,
- spec_trans=0.0,
- )
- if mesh_outline_width is not None:
- mesh.outline_width = mesh_outline_width
- if cloud is not None and cloud.shape[0] > 0:
- cloud = fresnel.geometry.Sphere(scene, position=cloud, radius=cloudR)
- solid = 0.7 if mesh is not None else 0.0
- cloud_flat_color = gold_color
- if cloudC is not None and len(cloudC) == 3:
- cloud_flat_color = cloudC
- cloud.material = fresnel.material.Material(
- solid=solid,
- color=fresnel.color.linear(cloud_flat_color),
- roughness=1.0,
- specular=1.0,
- )
- if cloudC is not None and len(cloudC) != 3:
- cloud.material.primitive_color_mix = 1.0
- cloud.color[:] = fresnel.color.linear(plt.cm.plasma(cloudC)[:, :3])
- if axes == True:
- addAxes(scene)
- if bbox == True:
- addBBox(scene)
- if camPos is None or camLookat is None:
- print("Fitting")
- scene.camera = fresnel.camera.fit(scene, margin=0)
- else:
- scene.camera = fresnel.camera.Orthographic(camPos, camLookat, camUp, camHeight)
- if lights == "cloudy":
- scene.lights = fresnel.light.cloudy()
- if lights == "rembrandt":
- scene.lights = fresnel.light.rembrandt()
- if lights == "lightbox":
- scene.lights = fresnel.light.lightbox()
- if lights == "loop":
- scene.lights = fresnel.light.loop()
- if lights == "butterfly":
- scene.lights = fresnel.light.butterfly()
- # scene.lights[0].theta = 3
-
- tracer = fresnel.tracer.Path(device=device, w=resolution[0], h=resolution[1])
- tracer.sample(scene, samples=samples, light_samples=32)
- # tracer.resize(w=450, h=450)
- # tracer.aa_level = 3
- image = tracer.render(scene)[:]
- return image
-
-
-def renderMeshCloud(
- mesh=None,
- meshC=gray_color,
- mesh_outline_width=None,
- meshflat=False, # mesh settings
- cloud=None,
- cloudR=0.006,
- cloudC=None, # pc settings
- camPos=None,
- camLookat=None,
- camUp=np.array([0, 0, 1]),
- camHeight=1.0, # camera settings
- samples=32,
- axes=False,
- bbox=False,
- resolution=(1024, 1024), # render settings
- lights="rembrandt",
- **kwargs
-):
- camera_opt = dict(
- resolution=resolution,
- samples=samples,
- camPos=camPos,
- camLookat=camLookat,
- camUp=camUp,
- camHeight=camHeight,
- )
- renderer = FresnelRenderer(lights=lights, camera_kwargs=camera_opt)
- if axes == True:
- renderer.addAxes()
- if bbox == True:
- renderer.add_bbox()
- if mesh is not None and mesh["vert"].shape[0] > 0:
- renderer.add_mesh(
- mesh["vert"], mesh["face"], color=meshC, outline_width=mesh_outline_width
- )
- if cloud is not None and cloud.shape[0] > 0:
- renderer.add_cloud(cloud, radius=cloudR, color=cloudC)
- image = renderer.render()
- return image
-
-
-def renderMeshCloud2(
- mesh=None,
- meshC=gray_color,
- mesh_outline_width=None,
- meshflat=False, # mesh settings
- cloud=None,
- cloudR=0.006,
- cloudC=None, # pc settings
- camHeight=1.0, # camera settings
- axes=False,
- bbox=False, # render settings
- camera_kwargs={},
- **kwargs
-):
- camera_opt = dflt_camera
- camera_opt.update(camera_kwargs)
-
- renderer = FresnelRenderer(camera_kwargs=camera_opt)
- if axes == True:
- renderer.addAxes()
- if bbox == True:
- renderer.addBBox()
- if mesh is not None and mesh["vert"].shape[0] > 0:
- renderer.add_mesh(mesh, color=meshC, outline_width=mesh_outline_width)
- if cloud is not None and cloud.shape[0] > 0:
- renderer.add_cloud(cloud, radius=cloudR, color=cloudC)
- image = renderer.render()
- return image
-
-
-def render_mesh(
- vert, face, camera_kwargs={}, render_kwargs={}, shadow_catcher=False, **kwargs
-):
- renderer = FresnelRenderer(camera_kwargs=camera_kwargs)
- renderer.add_mesh(vert, face, **kwargs)
-
- if shadow_catcher == True:
- img = renderer.render(
- shadow_catcher=True, min_y=vert.min(axis=0)[1], **render_kwargs
- )
- else:
- img = renderer.render(**render_kwargs)
- return img
-
-
-def render_cloud(cloud, camera_kwargs={}, render_kwargs={}, **kwargs):
- renderer = FresnelRenderer(camera_kwargs=camera_kwargs)
- renderer.add_cloud(cloud=cloud, **kwargs)
- img = renderer.render(**render_kwargs)
- return img
-
-
-class FresnelRenderer:
- def __init__(self, camera_kwargs={}, lights="rembrandt", **kwargs):
- self.setup_scene(camera_kwargs=camera_kwargs, lights=lights)
-
- def setup_scene(self, camera_kwargs={}, lights="rembrandt"):
- device = fresnel.Device()
- scene = fresnel.Scene(device)
-
- self.camera_opt = camera_opt = copy.deepcopy(dflt_camera)
- camera_opt.update(camera_kwargs)
- self.camera_kwargs = camera_opt
-
- if camera_opt["fit_camera"] == True:
- print("Camera is not setup, now auto-fit camera")
- scene.camera = fresnel.camera.fit(scene, margin=0)
- else:
- camPos = camera_opt["camPos"]
- camLookat = camera_opt["camLookat"]
- camUp = camera_opt["camUp"]
- camHeight = camera_opt["camHeight"]
- scene.camera = fresnel.camera.Orthographic(
- camPos, camLookat, camUp, camHeight
- )
- # setup lightings
- if "lights" in camera_kwargs:
- lights = camera_kwargs["lights"]
- if type(lights) is not str:
- scene.lights = camera_kwargs["lights"]
- elif lights == "cloudy":
- scene.lights = fresnel.light.cloudy()
- elif lights == "rembrandt":
- scene.lights = fresnel.light.rembrandt()
- elif lights == "lightbox":
- scene.lights = fresnel.light.lightbox()
- elif lights == "loop":
- scene.lights = fresnel.light.loop()
- elif lights == "butterfly":
- scene.lights = fresnel.light.butterfly()
- elif lights == "up":
- scene.lights = get_world_lights(
- [np.array([0, 1, 0])],
- colors=[np.array([1, 1, 1])],
- thetas=[1.0],
- camera_pos=camPos,
- )
- # addAxes(scene)
- # addBBox(scene)
- self.scene, self.device = scene, device
-
- def add_error_cloud(self, cloud, radius=0.006, color=None, solid=0.0, name=None):
- scene = self.scene
- cloud = fresnel.geometry.Sphere(scene, position=cloud, radius=radius)
- cloud_flat_color = gold_color
- if color is not None and len(color) == 3:
- cloud_flat_color = color
- cloud.material = fresnel.material.Material(
- solid=solid,
- color=fresnel.color.linear(cloud_flat_color),
- roughness=1.0,
- specular=0.0,
- )
- if color is not None and len(color) != 3:
- cloud.material.primitive_color_mix = 1.0
- cloud.color[:] = fresnel.color.linear(plt.cm.plasma(color)[:, :3])
-
- def add_cloud(
- self,
- cloud,
- radius=0.006,
- color=None,
- solid=0.0,
- primitive_color_mix=1.0,
- cloud_flat_color=gold_color,
- roughness=0.2,
- specular=0.8,
- spec_trans=0.0,
- metal=0.0,
- name=None,
- ):
- scene = self.scene
- cloud = fresnel.geometry.Sphere(scene, position=cloud, radius=radius)
-
- if color is not None and len(color) == 3:
- cloud_flat_color = color
- cloud.material = fresnel.material.Material(
- solid=solid,
- color=fresnel.color.linear(cloud_flat_color),
- roughness=roughness,
- specular=specular,
- metal=metal,
- spec_trans=spec_trans,
- )
- if color is not None and len(color) != 3:
- cloud.material.primitive_color_mix = primitive_color_mix
- cloud.color[:] = fresnel.color.linear(color)
-
- def add_mesh(
- self,
- vert,
- face,
- outline_width=None,
- name=None,
- color=gray_color,
- vert_color=None,
- solid=0.0,
- roughness=0.2,
- specular=0.8,
- spec_trans=0.0,
- metal=0.0,
- ):
- """vert_color: (Vn, 4)"""
- scene = self.scene
- mesh = fresnel.geometry.Mesh(scene, vertices=vert[face].reshape(-1, 3), N=1)
- mesh.material = fresnel.material.Material(
- color=fresnel.color.linear(color),
- solid=solid,
- roughness=roughness,
- specular=specular,
- spec_trans=spec_trans,
- metal=metal,
- )
- if vert_color is not None:
- mesh.color[:] = fresnel.color.linear(vert_color)
- mesh.material.primitive_color_mix = 1.0
- if outline_width is not None:
- mesh.outline_width = outline_width
- return self
-
- def add_light(self, direction=(0, 1, 0), color=(1, 1, 1), theta=3.14):
- self.scene.lights.append(
- fresnel.light.Light(direction=direction, color=color, theta=theta)
- )
-
- def add_bbox(self, *args, **kwargs):
- addBBox(self.scene, *args, **kwargs)
- return self
-
- def add_box(self, *args, **kwargs):
- addBox(self.scene, *args, **kwargs)
- return self
-
- def add_plane(self, *args, **kwargs):
- addPlane(self.scene, *args, **kwargs)
- return self
-
- def compute_mask(self, min_y=None):
- scene = self.scene
- if min_y is None:
- min_y = scene.get_extents()[0, 1]
- # self.add_box(center=np.array([0,min_y-0.04,0]), spec=(100, 0.01, 100), color=black_color*0, solid=1.)
- # temp_lights = [light for light in scene.lights]
- # scene.lights.append( fresnel.light.Light(direction= np.array([0,1,0]), color=np.array([1,1,1])*10, theta=3.14) )
-
- preview_tracer = fresnel.tracer.Preview(
- device=self.device,
- w=self.camera_kwargs["resolution"][0],
- h=self.camera_kwargs["resolution"][1],
- )
- preview_img = np.array(preview_tracer.render(scene)[:])
- mask = preview_img[..., 3] / 255
- # del scene.geometry[-1] #.material.color = white_color
- # scene.lights = temp_lights
- # mask = (preview_img[...,:3].sum(axis=-1) != preview_img.min())
- # mask = rgb2gray( rgba2rgb(preview_img) )
- return mask
-
- def render(
- self,
- preview=False,
- shadow_catcher=False,
- invisible_catcher=False,
- min_y=None,
- shadow_percentile=80,
- shadow_strength=1.0,
- lights=None,
- ):
- scene = self.scene
- resolution = self.camera_opt["resolution"]
- samples = self.camera_opt["samples"]
- light_samples = self.camera_opt["light_samples"]
- # scene.lights[0].direction = np.array([.2,1,0.2])
- if lights is not None:
- scene.lights = lights
- tracer = fresnel.tracer.Path(
- device=self.device, w=resolution[0], h=resolution[1]
- )
-
- if preview == True:
- preview_tracer = fresnel.tracer.Preview(
- device=self.device,
- w=self.camera_kwargs["resolution"][0],
- h=self.camera_kwargs["resolution"][1],
- )
- image = np.array(preview_tracer.render(scene)[:])
- else:
-
- if shadow_catcher == True:
- mask = self.compute_mask(min_y)
- self.add_plane(
- center=np.array([0, min_y - 0.04, 0]),
- spec=(400, 400),
- color=white_color * 1.0,
- solid=0.0,
- )
-
- # geos = scene.geometry
- # scene.geometry = [scene.geometry[-1]]
- # preview_tracer = fresnel.tracer.Preview(device=self.device, w=self.camera_kwargs["resolution"][0], h=self.camera_kwargs["resolution"][1])
- # plane_img = np.array(preview_tracer.render(scene)[:])
- # visutil.showImg(plane_img)
- # scene.geometry = geos
-
- tracer.sample(scene, samples=samples, light_samples=light_samples)
- image = tracer.render(scene)[:]
-
- if shadow_catcher == True:
- if invisible_catcher == True:
- del scene.geometry[-1] # .material.color = white_color
- self.add_box(
- center=np.array([0, min_y - 0.04, 0]),
- spec=(100, 0.01, 100),
- color=black_color * 0,
- solid=1.0,
- )
- true_img = tracer.render(scene)[:]
- image[mask] = true_img[mask]
- grayscale = rgb2gray(rgba2rgb(image))
- shadow_map = (1 - grayscale) * 255 # 255: opaque
- all_mask = image[..., 3] / 255
- catcher_mask = np.maximum(mask, all_mask) - np.minimum(mask, all_mask)
- shadow_map = shadow_map / 255 * catcher_mask
- thresh = np.percentile(shadow_map.reshape(-1), shadow_percentile)
- shadow_map[shadow_map < thresh] = 0.0
- shadow_map[shadow_map >= thresh] = (
- (shadow_map[shadow_map >= thresh] - thresh) * 1 / (1 - thresh)
- ) ** shadow_strength
- image[..., 3] = (
- image[..., 3] * (1 - catcher_mask) + shadow_map * 255 * catcher_mask
- )
- return image
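The shadow-catcher branch of `render` above thresholds the computed shadow map at a percentile and rescales the surviving values before writing them into the alpha channel. As a standalone illustration of that step only (not the original pipeline; the array below is random stand-in data and the parameter values are the function defaults):

```python
import numpy as np

# Hedged sketch of the shadow-map thresholding used in render():
# values below the chosen percentile are dropped, the rest are stretched
# back to [0, 1] and shaped by shadow_strength.
rng = np.random.default_rng(0)
shadow_map = rng.random((8, 8))          # stand-in for the computed shadow map
shadow_percentile, shadow_strength = 80, 1.0

thresh = np.percentile(shadow_map.reshape(-1), shadow_percentile)
keep = shadow_map >= thresh
shadow_map[~keep] = 0.0
shadow_map[keep] = ((shadow_map[keep] - thresh) / (1 - thresh)) ** shadow_strength
```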
diff --git a/spaces/KenjieDec/RemBG/rembg/session_factory.py b/spaces/KenjieDec/RemBG/rembg/session_factory.py
deleted file mode 100644
index ab25b8acbd4fa8c0ea087c31f3905ca321a30dd4..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/RemBG/rembg/session_factory.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-from typing import Type
-
-import onnxruntime as ort
-
-from .sessions import sessions_class
-from .sessions.base import BaseSession
-from .sessions.u2net import U2netSession
-
-
-def new_session(
- model_name: str = "u2net", providers=None, *args, **kwargs
-) -> BaseSession:
- session_class: Type[BaseSession] = U2netSession
-
- for sc in sessions_class:
- if sc.name() == model_name:
- session_class = sc
- break
-
- sess_opts = ort.SessionOptions()
-
- if "OMP_NUM_THREADS" in os.environ:
- sess_opts.inter_op_num_threads = int(os.environ["OMP_NUM_THREADS"])
-
- return session_class(model_name, sess_opts, providers, *args, **kwargs)
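The factory above selects a session class by `model_name`, falls back to `U2netSession` when the name is unknown, and honors `OMP_NUM_THREADS` when building the ONNX Runtime options. A minimal usage sketch, assuming the published `rembg` package API (`new_session` and `remove`) and a hypothetical `input.png`:

```python
import os
from rembg import new_session, remove  # assumes the published rembg package layout

# The factory reads OMP_NUM_THREADS when constructing the ONNX Runtime session.
os.environ.setdefault("OMP_NUM_THREADS", "4")

session = new_session("u2net")  # unknown names fall back to U2netSession
with open("input.png", "rb") as f:   # hypothetical input file
    result = remove(f.read(), session=session)
with open("output.png", "wb") as f:
    f.write(result)
```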
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/ui/streamlit_ui.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/ui/streamlit_ui.py
deleted file mode 100644
index 479fe1c3e3ec6cd9f2c785c777ea9fe892853d8b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/ui/streamlit_ui.py
+++ /dev/null
@@ -1,888 +0,0 @@
-import datetime
-import inspect
-import mimetypes
-import sys
-from os import getcwd, unlink
-from platform import system
-from tempfile import NamedTemporaryFile
-from typing import Any, Callable, Dict, List, Type
-from PIL import Image
-
-import pandas as pd
-import streamlit as st
-from fastapi.encoders import jsonable_encoder
-from loguru import logger
-from pydantic import BaseModel, ValidationError, parse_obj_as
-
-from mkgui.base import Opyrator
-from mkgui.base.core import name_to_title
-from mkgui.base.ui import schema_utils
-from mkgui.base.ui.streamlit_utils import CUSTOM_STREAMLIT_CSS
-
-STREAMLIT_RUNNER_SNIPPET = """
-from mkgui.base.ui import render_streamlit_ui
-from mkgui.base import Opyrator
-
-import streamlit as st
-
-# TODO: Make it configurable
-# Page config can only be setup once
-st.set_page_config(
- page_title="MockingBird",
- page_icon="🧊",
- layout="wide")
-
-render_streamlit_ui()
-"""
-
-# with st.spinner("Loading MockingBird GUI. Please wait..."):
-# opyrator = Opyrator("{opyrator_path}")
-
-
-def launch_ui(port: int = 8501) -> None:
- with NamedTemporaryFile(
- suffix=".py", mode="w", encoding="utf-8", delete=False
- ) as f:
- f.write(STREAMLIT_RUNNER_SNIPPET)
- f.seek(0)
-
- import subprocess
-
- python_path = f'PYTHONPATH="$PYTHONPATH:{getcwd()}"'
- if system() == "Windows":
- python_path = f"set PYTHONPATH=%PYTHONPATH%;{getcwd()} &&"
- subprocess.run(
- f"""set STREAMLIT_GLOBAL_SHOW_WARNING_ON_DIRECT_EXECUTION=false""",
- shell=True,
- )
-
- subprocess.run(
- f"""{python_path} "{sys.executable}" -m streamlit run --server.port={port} --server.headless=True --runner.magicEnabled=False --server.maxUploadSize=50 --browser.gatherUsageStats=False {f.name}""",
- shell=True,
- )
-
- f.close()
- unlink(f.name)
-
-
-def function_has_named_arg(func: Callable, parameter: str) -> bool:
- try:
- sig = inspect.signature(func)
- for param in sig.parameters.values():
- if param.name == "input":
-            if param.name == parameter:
- except Exception:
- return False
- return False
-
-
-def has_output_ui_renderer(data_item: BaseModel) -> bool:
- return hasattr(data_item, "render_output_ui")
-
-
-def has_input_ui_renderer(input_class: Type[BaseModel]) -> bool:
- return hasattr(input_class, "render_input_ui")
-
-
-def is_compatible_audio(mime_type: str) -> bool:
- return mime_type in ["audio/mpeg", "audio/ogg", "audio/wav"]
-
-
-def is_compatible_image(mime_type: str) -> bool:
- return mime_type in ["image/png", "image/jpeg"]
-
-
-def is_compatible_video(mime_type: str) -> bool:
- return mime_type in ["video/mp4"]
-
-
-class InputUI:
- def __init__(self, session_state, input_class: Type[BaseModel]):
- self._session_state = session_state
- self._input_class = input_class
-
- self._schema_properties = input_class.schema(by_alias=True).get(
- "properties", {}
- )
- self._schema_references = input_class.schema(by_alias=True).get(
- "definitions", {}
- )
-
- def render_ui(self, streamlit_app_root) -> None:
- if has_input_ui_renderer(self._input_class):
- # The input model has a rendering function
- # The rendering also returns the current state of input data
- self._session_state.input_data = self._input_class.render_input_ui( # type: ignore
- st, self._session_state.input_data
- )
- return
-
- # print(self._schema_properties)
- for property_key in self._schema_properties.keys():
- property = self._schema_properties[property_key]
-
- if not property.get("title"):
- # Set property key as fallback title
- property["title"] = name_to_title(property_key)
-
- try:
- if "input_data" in self._session_state:
- self._store_value(
- property_key,
- self._render_property(streamlit_app_root, property_key, property),
- )
- except Exception as e:
- print("Exception!", e)
- pass
-
- def _get_default_streamlit_input_kwargs(self, key: str, property: Dict) -> Dict:
- streamlit_kwargs = {
- "label": property.get("title"),
- "key": key,
- }
-
- if property.get("description"):
- streamlit_kwargs["help"] = property.get("description")
- return streamlit_kwargs
-
- def _store_value(self, key: str, value: Any) -> None:
- data_element = self._session_state.input_data
- key_elements = key.split(".")
- for i, key_element in enumerate(key_elements):
- if i == len(key_elements) - 1:
- # add value to this element
- data_element[key_element] = value
- return
- if key_element not in data_element:
- data_element[key_element] = {}
- data_element = data_element[key_element]
-
- def _get_value(self, key: str) -> Any:
- data_element = self._session_state.input_data
- key_elements = key.split(".")
- for i, key_element in enumerate(key_elements):
- if i == len(key_elements) - 1:
- # add value to this element
- if key_element not in data_element:
- return None
- return data_element[key_element]
- if key_element not in data_element:
- data_element[key_element] = {}
- data_element = data_element[key_element]
- return None
-
- def _render_single_datetime_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
-
- if property.get("format") == "time":
- if property.get("default"):
- try:
- streamlit_kwargs["value"] = datetime.time.fromisoformat( # type: ignore
- property.get("default")
- )
- except Exception:
- pass
- return streamlit_app.time_input(**streamlit_kwargs)
- elif property.get("format") == "date":
- if property.get("default"):
- try:
- streamlit_kwargs["value"] = datetime.date.fromisoformat( # type: ignore
- property.get("default")
- )
- except Exception:
- pass
- return streamlit_app.date_input(**streamlit_kwargs)
- elif property.get("format") == "date-time":
- if property.get("default"):
- try:
- streamlit_kwargs["value"] = datetime.datetime.fromisoformat( # type: ignore
- property.get("default")
- )
- except Exception:
- pass
- with streamlit_app.container():
- streamlit_app.subheader(streamlit_kwargs.get("label"))
- if streamlit_kwargs.get("description"):
- streamlit_app.text(streamlit_kwargs.get("description"))
- selected_date = None
- selected_time = None
- date_col, time_col = streamlit_app.columns(2)
- with date_col:
- date_kwargs = {"label": "Date", "key": key + "-date-input"}
- if streamlit_kwargs.get("value"):
- try:
- date_kwargs["value"] = streamlit_kwargs.get( # type: ignore
- "value"
- ).date()
- except Exception:
- pass
- selected_date = streamlit_app.date_input(**date_kwargs)
-
- with time_col:
- time_kwargs = {"label": "Time", "key": key + "-time-input"}
- if streamlit_kwargs.get("value"):
- try:
- time_kwargs["value"] = streamlit_kwargs.get( # type: ignore
- "value"
- ).time()
- except Exception:
- pass
- selected_time = streamlit_app.time_input(**time_kwargs)
- return datetime.datetime.combine(selected_date, selected_time)
- else:
- streamlit_app.warning(
- "Date format is not supported: " + str(property.get("format"))
- )
-
- def _render_single_file_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
- file_extension = None
- if "mime_type" in property:
- file_extension = mimetypes.guess_extension(property["mime_type"])
-
- uploaded_file = streamlit_app.file_uploader(
- **streamlit_kwargs, accept_multiple_files=False, type=file_extension
- )
- if uploaded_file is None:
- return None
-
- bytes = uploaded_file.getvalue()
- if property.get("mime_type"):
- if is_compatible_audio(property["mime_type"]):
- # Show audio
- streamlit_app.audio(bytes, format=property.get("mime_type"))
- if is_compatible_image(property["mime_type"]):
- # Show image
- streamlit_app.image(bytes)
- if is_compatible_video(property["mime_type"]):
- # Show video
- streamlit_app.video(bytes, format=property.get("mime_type"))
- return bytes
-
- def _render_single_string_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
-
- if property.get("default"):
- streamlit_kwargs["value"] = property.get("default")
- elif property.get("example"):
- # TODO: also use example for other property types
- # Use example as value if it is provided
- streamlit_kwargs["value"] = property.get("example")
-
- if property.get("maxLength") is not None:
- streamlit_kwargs["max_chars"] = property.get("maxLength")
-
- if (
- property.get("format")
- or (
- property.get("maxLength") is not None
- and int(property.get("maxLength")) < 140 # type: ignore
- )
- or property.get("writeOnly")
- ):
- # If any format is set, use single text input
- # If max chars is set to less than 140, use single text input
- # If write only -> password field
- if property.get("writeOnly"):
- streamlit_kwargs["type"] = "password"
- return streamlit_app.text_input(**streamlit_kwargs)
- else:
- # Otherwise use multiline text area
- return streamlit_app.text_area(**streamlit_kwargs)
-
- def _render_multi_enum_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
- reference_item = schema_utils.resolve_reference(
- property["items"]["$ref"], self._schema_references
- )
- # TODO: how to select defaults
- return streamlit_app.multiselect(
- **streamlit_kwargs, options=reference_item["enum"]
- )
-
- def _render_single_enum_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
-
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
- reference_item = schema_utils.get_single_reference_item(
- property, self._schema_references
- )
-
- if property.get("default") is not None:
- try:
- streamlit_kwargs["index"] = reference_item["enum"].index(
- property.get("default")
- )
- except Exception:
- # Use default selection
- pass
-
- return streamlit_app.selectbox(
- **streamlit_kwargs, options=reference_item["enum"]
- )
-
- def _render_single_dict_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
-
- # Add title and subheader
- streamlit_app.subheader(property.get("title"))
- if property.get("description"):
- streamlit_app.markdown(property.get("description"))
-
- streamlit_app.markdown("---")
-
- current_dict = self._get_value(key)
- if not current_dict:
- current_dict = {}
-
- key_col, value_col = streamlit_app.columns(2)
-
- with key_col:
- updated_key = streamlit_app.text_input(
- "Key", value="", key=key + "-new-key"
- )
-
- with value_col:
- # TODO: also add boolean?
- value_kwargs = {"label": "Value", "key": key + "-new-value"}
- if property["additionalProperties"].get("type") == "integer":
- value_kwargs["value"] = 0 # type: ignore
- updated_value = streamlit_app.number_input(**value_kwargs)
- elif property["additionalProperties"].get("type") == "number":
- value_kwargs["value"] = 0.0 # type: ignore
- value_kwargs["format"] = "%f"
- updated_value = streamlit_app.number_input(**value_kwargs)
- else:
- value_kwargs["value"] = ""
- updated_value = streamlit_app.text_input(**value_kwargs)
-
- streamlit_app.markdown("---")
-
- with streamlit_app.container():
- clear_col, add_col = streamlit_app.columns([1, 2])
-
- with clear_col:
- if streamlit_app.button("Clear Items", key=key + "-clear-items"):
- current_dict = {}
-
- with add_col:
- if (
- streamlit_app.button("Add Item", key=key + "-add-item")
- and updated_key
- ):
- current_dict[updated_key] = updated_value
-
- streamlit_app.write(current_dict)
-
- return current_dict
-
- def _render_single_reference(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- reference_item = schema_utils.get_single_reference_item(
- property, self._schema_references
- )
- return self._render_property(streamlit_app, key, reference_item)
-
- def _render_multi_file_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
-
- file_extension = None
- if "mime_type" in property:
- file_extension = mimetypes.guess_extension(property["mime_type"])
-
- uploaded_files = streamlit_app.file_uploader(
- **streamlit_kwargs, accept_multiple_files=True, type=file_extension
- )
- uploaded_files_bytes = []
- if uploaded_files:
- for uploaded_file in uploaded_files:
- uploaded_files_bytes.append(uploaded_file.read())
- return uploaded_files_bytes
-
- def _render_single_boolean_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
-
- if property.get("default"):
- streamlit_kwargs["value"] = property.get("default")
- return streamlit_app.checkbox(**streamlit_kwargs)
-
- def _render_single_number_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- streamlit_kwargs = self._get_default_streamlit_input_kwargs(key, property)
-
- number_transform = int
- if property.get("type") == "number":
- number_transform = float # type: ignore
- streamlit_kwargs["format"] = "%f"
-
- if "multipleOf" in property:
- # Set stepcount based on multiple of parameter
- streamlit_kwargs["step"] = number_transform(property["multipleOf"])
- elif number_transform == int:
- # Set step size to 1 as default
- streamlit_kwargs["step"] = 1
- elif number_transform == float:
- # Set step size to 0.01 as default
- # TODO: adapt to default value
- streamlit_kwargs["step"] = 0.01
-
- if "minimum" in property:
- streamlit_kwargs["min_value"] = number_transform(property["minimum"])
- if "exclusiveMinimum" in property:
- streamlit_kwargs["min_value"] = number_transform(
- property["exclusiveMinimum"] + streamlit_kwargs["step"]
- )
- if "maximum" in property:
- streamlit_kwargs["max_value"] = number_transform(property["maximum"])
-
- if "exclusiveMaximum" in property:
- streamlit_kwargs["max_value"] = number_transform(
- property["exclusiveMaximum"] - streamlit_kwargs["step"]
- )
-
- if property.get("default") is not None:
- streamlit_kwargs["value"] = number_transform(property.get("default")) # type: ignore
- else:
- if "min_value" in streamlit_kwargs:
- streamlit_kwargs["value"] = streamlit_kwargs["min_value"]
- elif number_transform == int:
- streamlit_kwargs["value"] = 0
- else:
- # Set default value to step
- streamlit_kwargs["value"] = number_transform(streamlit_kwargs["step"])
-
- if "min_value" in streamlit_kwargs and "max_value" in streamlit_kwargs:
- # TODO: Only if less than X steps
- return streamlit_app.slider(**streamlit_kwargs)
- else:
- return streamlit_app.number_input(**streamlit_kwargs)
-
- def _render_object_input(self, streamlit_app: st, key: str, property: Dict) -> Any:
- properties = property["properties"]
- object_inputs = {}
- for property_key in properties:
- property = properties[property_key]
- if not property.get("title"):
- # Set property key as fallback title
- property["title"] = name_to_title(property_key)
- # construct full key based on key parts -> required later to get the value
- full_key = key + "." + property_key
- object_inputs[property_key] = self._render_property(
- streamlit_app, full_key, property
- )
- return object_inputs
-
- def _render_single_object_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
- # Add title and subheader
- title = property.get("title")
- streamlit_app.subheader(title)
- if property.get("description"):
- streamlit_app.markdown(property.get("description"))
-
- object_reference = schema_utils.get_single_reference_item(
- property, self._schema_references
- )
- return self._render_object_input(streamlit_app, key, object_reference)
-
- def _render_property_list_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
-
- # Add title and subheader
- streamlit_app.subheader(property.get("title"))
- if property.get("description"):
- streamlit_app.markdown(property.get("description"))
-
- streamlit_app.markdown("---")
-
- current_list = self._get_value(key)
- if not current_list:
- current_list = []
-
- value_kwargs = {"label": "Value", "key": key + "-new-value"}
- if property["items"]["type"] == "integer":
- value_kwargs["value"] = 0 # type: ignore
- new_value = streamlit_app.number_input(**value_kwargs)
- elif property["items"]["type"] == "number":
- value_kwargs["value"] = 0.0 # type: ignore
- value_kwargs["format"] = "%f"
- new_value = streamlit_app.number_input(**value_kwargs)
- else:
- value_kwargs["value"] = ""
- new_value = streamlit_app.text_input(**value_kwargs)
-
- streamlit_app.markdown("---")
-
- with streamlit_app.container():
- clear_col, add_col = streamlit_app.columns([1, 2])
-
- with clear_col:
- if streamlit_app.button("Clear Items", key=key + "-clear-items"):
- current_list = []
-
- with add_col:
- if (
- streamlit_app.button("Add Item", key=key + "-add-item")
- and new_value is not None
- ):
- current_list.append(new_value)
-
- streamlit_app.write(current_list)
-
- return current_list
-
- def _render_object_list_input(
- self, streamlit_app: st, key: str, property: Dict
- ) -> Any:
-
- # TODO: support max_items, and min_items properties
-
- # Add title and subheader
- streamlit_app.subheader(property.get("title"))
- if property.get("description"):
- streamlit_app.markdown(property.get("description"))
-
- streamlit_app.markdown("---")
-
- current_list = self._get_value(key)
- if not current_list:
- current_list = []
-
- object_reference = schema_utils.resolve_reference(
- property["items"]["$ref"], self._schema_references
- )
- input_data = self._render_object_input(streamlit_app, key, object_reference)
-
- streamlit_app.markdown("---")
-
- with streamlit_app.container():
- clear_col, add_col = streamlit_app.columns([1, 2])
-
- with clear_col:
- if streamlit_app.button("Clear Items", key=key + "-clear-items"):
- current_list = []
-
- with add_col:
- if (
- streamlit_app.button("Add Item", key=key + "-add-item")
- and input_data
- ):
- current_list.append(input_data)
-
- streamlit_app.write(current_list)
- return current_list
-
- def _render_property(self, streamlit_app: st, key: str, property: Dict) -> Any:
- if schema_utils.is_single_enum_property(property, self._schema_references):
- return self._render_single_enum_input(streamlit_app, key, property)
-
- if schema_utils.is_multi_enum_property(property, self._schema_references):
- return self._render_multi_enum_input(streamlit_app, key, property)
-
- if schema_utils.is_single_file_property(property):
- return self._render_single_file_input(streamlit_app, key, property)
-
- if schema_utils.is_multi_file_property(property):
- return self._render_multi_file_input(streamlit_app, key, property)
-
- if schema_utils.is_single_datetime_property(property):
- return self._render_single_datetime_input(streamlit_app, key, property)
-
- if schema_utils.is_single_boolean_property(property):
- return self._render_single_boolean_input(streamlit_app, key, property)
-
- if schema_utils.is_single_dict_property(property):
- return self._render_single_dict_input(streamlit_app, key, property)
-
- if schema_utils.is_single_number_property(property):
- return self._render_single_number_input(streamlit_app, key, property)
-
- if schema_utils.is_single_string_property(property):
- return self._render_single_string_input(streamlit_app, key, property)
-
- if schema_utils.is_single_object(property, self._schema_references):
- return self._render_single_object_input(streamlit_app, key, property)
-
- if schema_utils.is_object_list_property(property, self._schema_references):
- return self._render_object_list_input(streamlit_app, key, property)
-
- if schema_utils.is_property_list(property):
- return self._render_property_list_input(streamlit_app, key, property)
-
- if schema_utils.is_single_reference(property):
- return self._render_single_reference(streamlit_app, key, property)
-
- streamlit_app.warning(
- "The type of the following property is currently not supported: "
- + str(property.get("title"))
- )
- raise Exception("Unsupported property")
-
-
-class OutputUI:
- def __init__(self, output_data: Any, input_data: Any):
- self._output_data = output_data
- self._input_data = input_data
-
- def render_ui(self, streamlit_app) -> None:
- try:
- if isinstance(self._output_data, BaseModel):
- self._render_single_output(streamlit_app, self._output_data)
- return
- if type(self._output_data) == list:
- self._render_list_output(streamlit_app, self._output_data)
- return
- except Exception as ex:
- streamlit_app.exception(ex)
-            # Fall back to rendering the raw output as JSON
- streamlit_app.json(jsonable_encoder(self._output_data))
-
- def _render_single_text_property(
- self, streamlit: st, property_schema: Dict, value: Any
- ) -> None:
- # Add title and subheader
- streamlit.subheader(property_schema.get("title"))
- if property_schema.get("description"):
- streamlit.markdown(property_schema.get("description"))
- if value is None or value == "":
- streamlit.info("No value returned!")
- else:
- streamlit.code(str(value), language="plain")
-
- def _render_single_file_property(
- self, streamlit: st, property_schema: Dict, value: Any
- ) -> None:
- # Add title and subheader
- streamlit.subheader(property_schema.get("title"))
- if property_schema.get("description"):
- streamlit.markdown(property_schema.get("description"))
- if value is None or value == "":
- streamlit.info("No value returned!")
- else:
- # TODO: Detect if it is a FileContent instance
- # TODO: detect if it is base64
- file_extension = ""
- if "mime_type" in property_schema:
- mime_type = property_schema["mime_type"]
- file_extension = mimetypes.guess_extension(mime_type) or ""
-
- if is_compatible_audio(mime_type):
- streamlit.audio(value.as_bytes(), format=mime_type)
- return
-
- if is_compatible_image(mime_type):
- streamlit.image(value.as_bytes())
- return
-
- if is_compatible_video(mime_type):
- streamlit.video(value.as_bytes(), format=mime_type)
- return
-
- filename = (
- (property_schema["title"] + file_extension)
- .lower()
- .strip()
- .replace(" ", "-")
- )
- streamlit.markdown(
-                f'<a href="data:application/octet-stream;base64,{value}" download="{filename}"><input type="button" value="Download File"></a>',
- unsafe_allow_html=True,
- )
-
- def _render_single_complex_property(
- self, streamlit: st, property_schema: Dict, value: Any
- ) -> None:
- # Add title and subheader
- streamlit.subheader(property_schema.get("title"))
- if property_schema.get("description"):
- streamlit.markdown(property_schema.get("description"))
-
- streamlit.json(jsonable_encoder(value))
-
- def _render_single_output(self, streamlit: st, output_data: BaseModel) -> None:
- try:
- if has_output_ui_renderer(output_data):
- if function_has_named_arg(output_data.render_output_ui, "input"): # type: ignore
- # render method also requests the input data
- output_data.render_output_ui(streamlit, input=self._input_data) # type: ignore
- else:
- output_data.render_output_ui(streamlit) # type: ignore
- return
- except Exception:
- # Use default auto-generation methods if the custom rendering throws an exception
- logger.exception(
- "Failed to execute custom render_output_ui function. Using auto-generation instead"
- )
-
- model_schema = output_data.schema(by_alias=False)
- model_properties = model_schema.get("properties")
- definitions = model_schema.get("definitions")
-
- if model_properties:
- for property_key in output_data.__dict__:
- property_schema = model_properties.get(property_key)
- if not property_schema.get("title"):
- # Set property key as fallback title
- property_schema["title"] = property_key
-
- output_property_value = output_data.__dict__[property_key]
-
- if has_output_ui_renderer(output_property_value):
- output_property_value.render_output_ui(streamlit) # type: ignore
- continue
-
- if isinstance(output_property_value, BaseModel):
-                    # Render output recursively
- streamlit.subheader(property_schema.get("title"))
- if property_schema.get("description"):
- streamlit.markdown(property_schema.get("description"))
- self._render_single_output(streamlit, output_property_value)
- continue
-
- if property_schema:
- if schema_utils.is_single_file_property(property_schema):
- self._render_single_file_property(
- streamlit, property_schema, output_property_value
- )
- continue
-
- if (
- schema_utils.is_single_string_property(property_schema)
- or schema_utils.is_single_number_property(property_schema)
- or schema_utils.is_single_datetime_property(property_schema)
- or schema_utils.is_single_boolean_property(property_schema)
- ):
- self._render_single_text_property(
- streamlit, property_schema, output_property_value
- )
- continue
- if definitions and schema_utils.is_single_enum_property(
- property_schema, definitions
- ):
- self._render_single_text_property(
- streamlit, property_schema, output_property_value.value
- )
- continue
-
- # TODO: render dict as table
-
- self._render_single_complex_property(
- streamlit, property_schema, output_property_value
- )
- return
-
- def _render_list_output(self, streamlit: st, output_data: List) -> None:
- try:
- data_items: List = []
- for data_item in output_data:
- if has_output_ui_renderer(data_item):
- # Render using the render function
- data_item.render_output_ui(streamlit) # type: ignore
- continue
- data_items.append(data_item.dict())
- # Try to show as dataframe
- streamlit.table(pd.DataFrame(data_items))
- except Exception:
-            # Fall back to rendering the list as JSON
- streamlit.json(jsonable_encoder(output_data))
-
-
-def getOpyrator(mode: str) -> Opyrator:
-    if mode is None or mode.startswith('VC'):
-        from mkgui.app_vc import convert
-        return Opyrator(convert)
-    if mode.startswith('预处理'):
-        from mkgui.preprocess import preprocess
-        return Opyrator(preprocess)
-    # Check the more specific "模型训练(VC)" prefix before the plain "模型训练" prefix,
-    # otherwise the VC training mode can never be selected.
-    if mode.startswith('模型训练(VC)'):
-        from mkgui.train_vc import train_vc
-        return Opyrator(train_vc)
-    if mode.startswith('模型训练'):
-        from mkgui.train import train
-        return Opyrator(train)
-    from mkgui.app import synthesize
-    return Opyrator(synthesize)
-
-
-def render_streamlit_ui() -> None:
- # init
- session_state = st.session_state
- session_state.input_data = {}
- # Add custom css settings
-    st.markdown(f"<style>{CUSTOM_STREAMLIT_CSS}</style>", unsafe_allow_html=True)
-
- with st.spinner("Loading MockingBird GUI. Please wait..."):
- session_state.mode = st.sidebar.selectbox(
- '模式选择',
- ( "AI拟音", "VC拟音", "预处理", "模型训练", "模型训练(VC)")
- )
- if "mode" in session_state:
- mode = session_state.mode
- else:
- mode = ""
- opyrator = getOpyrator(mode)
- title = opyrator.name + mode
-
- col1, col2, _ = st.columns(3)
- col2.title(title)
- col2.markdown("欢迎使用MockingBird Web 2")
-
- image = Image.open('.\\mkgui\\static\\mb.png')
- col1.image(image)
-
- st.markdown("---")
- left, right = st.columns([0.4, 0.6])
-
- with left:
- st.header("Control 控制")
- InputUI(session_state=session_state, input_class=opyrator.input_type).render_ui(st)
- execute_selected = st.button(opyrator.action)
- if execute_selected:
- with st.spinner("Executing operation. Please wait..."):
- try:
- input_data_obj = parse_obj_as(
- opyrator.input_type, session_state.input_data
- )
- session_state.output_data = opyrator(input=input_data_obj)
- session_state.latest_operation_input = input_data_obj # should this really be saved as additional session object?
- except ValidationError as ex:
- st.error(ex)
- else:
- # st.success("Operation executed successfully.")
- pass
-
- with right:
- st.header("Result 结果")
- if 'output_data' in session_state:
- OutputUI(
- session_state.output_data, session_state.latest_operation_input
- ).render_ui(st)
- if st.button("Clear"):
- # Clear all state
- for key in st.session_state.keys():
- del st.session_state[key]
- session_state.input_data = {}
- st.experimental_rerun()
- else:
- # placeholder
- st.caption("请使用左侧控制板进行输入并运行获得结果")
-
-
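`OutputUI._render_single_output` above first looks for a `render_output_ui` method on the output model and only falls back to schema-driven auto-rendering if that hook is missing or raises. A minimal sketch of such a hook on a hypothetical pydantic output model (not part of the original file):

```python
from pydantic import BaseModel


class CaptionOutput(BaseModel):
    """Hypothetical output model; OutputUI calls render_output_ui when present."""

    text: str

    def render_output_ui(self, streamlit, input=None) -> None:
        # Custom rendering takes precedence over the auto-generated JSON view.
        # Because the method declares an `input` parameter, the dispatcher also
        # passes the latest operation input.
        streamlit.subheader("Caption")
        streamlit.code(self.text, language="plain")
```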
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/deformable_detr_layers.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/deformable_detr_layers.py
deleted file mode 100644
index f337e7fd01ba05ace0a74441192d4e58299bbd93..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/deformable_detr_layers.py
+++ /dev/null
@@ -1,250 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional, Tuple, Union
-
-import torch
-from mmcv.cnn import build_norm_layer
-from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention
-from mmcv.ops import MultiScaleDeformableAttention
-from mmengine.model import ModuleList
-from torch import Tensor, nn
-
-from .detr_layers import (DetrTransformerDecoder, DetrTransformerDecoderLayer,
- DetrTransformerEncoder, DetrTransformerEncoderLayer)
-from .utils import inverse_sigmoid
-
-
-class DeformableDetrTransformerEncoder(DetrTransformerEncoder):
- """Transformer encoder of Deformable DETR."""
-
- def _init_layers(self) -> None:
- """Initialize encoder layers."""
- self.layers = ModuleList([
- DeformableDetrTransformerEncoderLayer(**self.layer_cfg)
- for _ in range(self.num_layers)
- ])
- self.embed_dims = self.layers[0].embed_dims
-
- def forward(self, query: Tensor, query_pos: Tensor,
- key_padding_mask: Tensor, spatial_shapes: Tensor,
- level_start_index: Tensor, valid_ratios: Tensor,
- **kwargs) -> Tensor:
- """Forward function of Transformer encoder.
-
- Args:
- query (Tensor): The input query, has shape (bs, num_queries, dim).
- query_pos (Tensor): The positional encoding for query, has shape
- (bs, num_queries, dim).
- key_padding_mask (Tensor): The `key_padding_mask` of `self_attn`
- input. ByteTensor, has shape (bs, num_queries).
- spatial_shapes (Tensor): Spatial shapes of features in all levels,
- has shape (num_levels, 2), last dimension represents (h, w).
- level_start_index (Tensor): The start index of each level.
- A tensor has shape (num_levels, ) and can be represented
- as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...].
- valid_ratios (Tensor): The ratios of the valid width and the valid
- height relative to the width and the height of features in all
- levels, has shape (bs, num_levels, 2).
-
- Returns:
- Tensor: Output queries of Transformer encoder, which is also
- called 'encoder output embeddings' or 'memory', has shape
- (bs, num_queries, dim)
- """
- reference_points = self.get_encoder_reference_points(
- spatial_shapes, valid_ratios, device=query.device)
- for layer in self.layers:
- query = layer(
- query=query,
- query_pos=query_pos,
- key_padding_mask=key_padding_mask,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- valid_ratios=valid_ratios,
- reference_points=reference_points,
- **kwargs)
- return query
-
- @staticmethod
- def get_encoder_reference_points(
- spatial_shapes: Tensor, valid_ratios: Tensor,
- device: Union[torch.device, str]) -> Tensor:
- """Get the reference points used in encoder.
-
- Args:
- spatial_shapes (Tensor): Spatial shapes of features in all levels,
- has shape (num_levels, 2), last dimension represents (h, w).
- valid_ratios (Tensor): The ratios of the valid width and the valid
- height relative to the width and the height of features in all
- levels, has shape (bs, num_levels, 2).
- device (obj:`device` or str): The device acquired by the
- `reference_points`.
-
- Returns:
- Tensor: Reference points used in decoder, has shape (bs, length,
- num_levels, 2).
- """
-
- reference_points_list = []
- for lvl, (H, W) in enumerate(spatial_shapes):
- ref_y, ref_x = torch.meshgrid(
- torch.linspace(
- 0.5, H - 0.5, H, dtype=torch.float32, device=device),
- torch.linspace(
- 0.5, W - 0.5, W, dtype=torch.float32, device=device))
- ref_y = ref_y.reshape(-1)[None] / (
- valid_ratios[:, None, lvl, 1] * H)
- ref_x = ref_x.reshape(-1)[None] / (
- valid_ratios[:, None, lvl, 0] * W)
- ref = torch.stack((ref_x, ref_y), -1)
- reference_points_list.append(ref)
- reference_points = torch.cat(reference_points_list, 1)
- # [bs, sum(hw), num_level, 2]
- reference_points = reference_points[:, :, None] * valid_ratios[:, None]
- return reference_points
-
-
-class DeformableDetrTransformerDecoder(DetrTransformerDecoder):
- """Transformer Decoder of Deformable DETR."""
-
- def _init_layers(self) -> None:
- """Initialize decoder layers."""
- self.layers = ModuleList([
- DeformableDetrTransformerDecoderLayer(**self.layer_cfg)
- for _ in range(self.num_layers)
- ])
- self.embed_dims = self.layers[0].embed_dims
- if self.post_norm_cfg is not None:
- raise ValueError('There is not post_norm in '
- f'{self._get_name()}')
-
- def forward(self,
- query: Tensor,
- query_pos: Tensor,
- value: Tensor,
- key_padding_mask: Tensor,
- reference_points: Tensor,
- spatial_shapes: Tensor,
- level_start_index: Tensor,
- valid_ratios: Tensor,
- reg_branches: Optional[nn.Module] = None,
- **kwargs) -> Tuple[Tensor]:
- """Forward function of Transformer decoder.
-
- Args:
- query (Tensor): The input queries, has shape (bs, num_queries,
- dim).
- query_pos (Tensor): The input positional query, has shape
- (bs, num_queries, dim). It will be added to `query` before
- forward function.
- value (Tensor): The input values, has shape (bs, num_value, dim).
- key_padding_mask (Tensor): The `key_padding_mask` of `cross_attn`
- input. ByteTensor, has shape (bs, num_value).
- reference_points (Tensor): The initial reference, has shape
- (bs, num_queries, 4) with the last dimension arranged as
- (cx, cy, w, h) when `as_two_stage` is `True`, otherwise has
- shape (bs, num_queries, 2) with the last dimension arranged
- as (cx, cy).
- spatial_shapes (Tensor): Spatial shapes of features in all levels,
- has shape (num_levels, 2), last dimension represents (h, w).
- level_start_index (Tensor): The start index of each level.
- A tensor has shape (num_levels, ) and can be represented
- as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...].
- valid_ratios (Tensor): The ratios of the valid width and the valid
- height relative to the width and the height of features in all
- levels, has shape (bs, num_levels, 2).
- reg_branches: (obj:`nn.ModuleList`, optional): Used for refining
- the regression results. Only would be passed when
- `with_box_refine` is `True`, otherwise would be `None`.
-
- Returns:
- tuple[Tensor]: Outputs of Deformable Transformer Decoder.
-
- - output (Tensor): Output embeddings of the last decoder, has
- shape (num_queries, bs, embed_dims) when `return_intermediate`
- is `False`. Otherwise, Intermediate output embeddings of all
- decoder layers, has shape (num_decoder_layers, num_queries, bs,
- embed_dims).
- - reference_points (Tensor): The reference of the last decoder
- layer, has shape (bs, num_queries, 4) when `return_intermediate`
- is `False`. Otherwise, Intermediate references of all decoder
- layers, has shape (num_decoder_layers, bs, num_queries, 4). The
- coordinates are arranged as (cx, cy, w, h)
- """
- output = query
- intermediate = []
- intermediate_reference_points = []
- for layer_id, layer in enumerate(self.layers):
- if reference_points.shape[-1] == 4:
- reference_points_input = \
- reference_points[:, :, None] * \
- torch.cat([valid_ratios, valid_ratios], -1)[:, None]
- else:
- assert reference_points.shape[-1] == 2
- reference_points_input = \
- reference_points[:, :, None] * \
- valid_ratios[:, None]
- output = layer(
- output,
- query_pos=query_pos,
- value=value,
- key_padding_mask=key_padding_mask,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- valid_ratios=valid_ratios,
- reference_points=reference_points_input,
- **kwargs)
-
- if reg_branches is not None:
- tmp_reg_preds = reg_branches[layer_id](output)
- if reference_points.shape[-1] == 4:
- new_reference_points = tmp_reg_preds + inverse_sigmoid(
- reference_points)
- new_reference_points = new_reference_points.sigmoid()
- else:
- assert reference_points.shape[-1] == 2
- new_reference_points = tmp_reg_preds
- new_reference_points[..., :2] = tmp_reg_preds[
- ..., :2] + inverse_sigmoid(reference_points)
- new_reference_points = new_reference_points.sigmoid()
- reference_points = new_reference_points.detach()
-
- if self.return_intermediate:
- intermediate.append(output)
- intermediate_reference_points.append(reference_points)
-
- if self.return_intermediate:
- return torch.stack(intermediate), torch.stack(
- intermediate_reference_points)
-
- return output, reference_points
-
-
-class DeformableDetrTransformerEncoderLayer(DetrTransformerEncoderLayer):
- """Encoder layer of Deformable DETR."""
-
- def _init_layers(self) -> None:
- """Initialize self_attn, ffn, and norms."""
- self.self_attn = MultiScaleDeformableAttention(**self.self_attn_cfg)
- self.embed_dims = self.self_attn.embed_dims
- self.ffn = FFN(**self.ffn_cfg)
- norms_list = [
- build_norm_layer(self.norm_cfg, self.embed_dims)[1]
- for _ in range(2)
- ]
- self.norms = ModuleList(norms_list)
-
-
-class DeformableDetrTransformerDecoderLayer(DetrTransformerDecoderLayer):
- """Decoder layer of Deformable DETR."""
-
- def _init_layers(self) -> None:
- """Initialize self_attn, cross-attn, ffn, and norms."""
- self.self_attn = MultiheadAttention(**self.self_attn_cfg)
- self.cross_attn = MultiScaleDeformableAttention(**self.cross_attn_cfg)
- self.embed_dims = self.self_attn.embed_dims
- self.ffn = FFN(**self.ffn_cfg)
- norms_list = [
- build_norm_layer(self.norm_cfg, self.embed_dims)[1]
- for _ in range(3)
- ]
- self.norms = ModuleList(norms_list)
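`get_encoder_reference_points` above builds, for every feature level, a grid of pixel-center coordinates normalized by the (valid-ratio-scaled) feature size. A simplified single-level sketch with fully valid feature maps (valid ratio = 1) and hypothetical shapes:

```python
import torch

# Batch of 2, one 4x6 feature level, fully valid images (valid ratio = 1).
bs, H, W = 2, 4, 6
ref_y, ref_x = torch.meshgrid(
    torch.linspace(0.5, H - 0.5, H) / H,   # normalized row centers
    torch.linspace(0.5, W - 0.5, W) / W,   # normalized column centers
    indexing='ij')
ref = torch.stack((ref_x.reshape(-1), ref_y.reshape(-1)), -1)  # (H*W, 2) in [0, 1]
ref = ref[None].expand(bs, -1, -1)  # (bs, H*W, 2), matching the docstring layout
print(ref.shape)  # torch.Size([2, 24, 2])
```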
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/apis/image_caption.py b/spaces/KyanChen/RSPrompter/mmpretrain/apis/image_caption.py
deleted file mode 100644
index aef21878112763bf1ae12d2373e9645b73049665..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/apis/image_caption.py
+++ /dev/null
@@ -1,164 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from pathlib import Path
-from typing import Callable, List, Optional
-
-import numpy as np
-from mmcv.image import imread
-from mmengine.config import Config
-from mmengine.dataset import Compose, default_collate
-
-from mmpretrain.registry import TRANSFORMS
-from mmpretrain.structures import DataSample
-from .base import BaseInferencer, InputType
-from .model import list_models
-
-
-class ImageCaptionInferencer(BaseInferencer):
- """The inferencer for image caption.
-
- Args:
- model (BaseModel | str | Config): A model name or a path to the config
- file, or a :obj:`BaseModel` object. The model name can be found
- by ``ImageCaptionInferencer.list_models()`` and you can also
- query it in :doc:`/modelzoo_statistics`.
- pretrained (str, optional): Path to the checkpoint. If None, it will
- try to find a pre-defined weight from the model you specified
- (only work if the ``model`` is a model name). Defaults to None.
- device (str, optional): Device to run inference. If None, the available
- device will be automatically used. Defaults to None.
- **kwargs: Other keyword arguments to initialize the model (only work if
- the ``model`` is a model name).
-
- Example:
- >>> from mmpretrain import ImageCaptionInferencer
- >>> inferencer = ImageCaptionInferencer('blip-base_3rdparty_caption')
- >>> inferencer('demo/cat-dog.png')[0]
- {'pred_caption': 'a puppy and a cat sitting on a blanket'}
- """ # noqa: E501
-
- visualize_kwargs: set = {'resize', 'show', 'show_dir', 'wait_time'}
-
- def __call__(self,
- images: InputType,
- return_datasamples: bool = False,
- batch_size: int = 1,
- **kwargs) -> dict:
- """Call the inferencer.
-
- Args:
- images (str | array | list): The image path or array, or a list of
- images.
- return_datasamples (bool): Whether to return results as
- :obj:`DataSample`. Defaults to False.
- batch_size (int): Batch size. Defaults to 1.
- resize (int, optional): Resize the short edge of the image to the
- specified length before visualization. Defaults to None.
- draw_score (bool): Whether to draw the prediction scores
- of prediction categories. Defaults to True.
- show (bool): Whether to display the visualization result in a
- window. Defaults to False.
- wait_time (float): The display time (s). Defaults to 0, which means
- "forever".
- show_dir (str, optional): If not None, save the visualization
- results in the specified directory. Defaults to None.
-
- Returns:
- list: The inference results.
- """
- return super().__call__(images, return_datasamples, batch_size,
- **kwargs)
-
- def _init_pipeline(self, cfg: Config) -> Callable:
- test_pipeline_cfg = cfg.test_dataloader.dataset.pipeline
- if test_pipeline_cfg[0]['type'] == 'LoadImageFromFile':
- # Image loading is finished in `self.preprocess`.
- test_pipeline_cfg = test_pipeline_cfg[1:]
- test_pipeline = Compose(
- [TRANSFORMS.build(t) for t in test_pipeline_cfg])
- return test_pipeline
-
- def preprocess(self, inputs: List[InputType], batch_size: int = 1):
-
- def load_image(input_):
- img = imread(input_)
- if img is None:
- raise ValueError(f'Failed to read image {input_}.')
- return dict(
- img=img,
- img_shape=img.shape[:2],
- ori_shape=img.shape[:2],
- )
-
- pipeline = Compose([load_image, self.pipeline])
-
- chunked_data = self._get_chunk_data(map(pipeline, inputs), batch_size)
- yield from map(default_collate, chunked_data)
-
- def visualize(self,
- ori_inputs: List[InputType],
- preds: List[DataSample],
- show: bool = False,
- wait_time: int = 0,
- resize: Optional[int] = None,
- show_dir=None):
- if not show and show_dir is None:
- return None
-
- if self.visualizer is None:
- from mmpretrain.visualization import UniversalVisualizer
- self.visualizer = UniversalVisualizer()
-
- visualization = []
- for i, (input_, data_sample) in enumerate(zip(ori_inputs, preds)):
- image = imread(input_)
- if isinstance(input_, str):
- # The image loaded from path is BGR format.
- image = image[..., ::-1]
- name = Path(input_).stem
- else:
- name = str(i)
-
- if show_dir is not None:
- show_dir = Path(show_dir)
- show_dir.mkdir(exist_ok=True)
- out_file = str((show_dir / name).with_suffix('.png'))
- else:
- out_file = None
-
- self.visualizer.visualize_image_caption(
- image,
- data_sample,
- resize=resize,
- show=show,
- wait_time=wait_time,
- name=name,
- out_file=out_file)
- visualization.append(self.visualizer.get_image())
- if show:
- self.visualizer.close()
- return visualization
-
- def postprocess(self,
- preds: List[DataSample],
- visualization: List[np.ndarray],
- return_datasamples=False) -> dict:
- if return_datasamples:
- return preds
-
- results = []
- for data_sample in preds:
- results.append({'pred_caption': data_sample.get('pred_caption')})
-
- return results
-
- @staticmethod
- def list_models(pattern: Optional[str] = None):
- """List all available model names.
-
- Args:
- pattern (str | None): A wildcard pattern to match model names.
-
- Returns:
- List[str]: a list of model names.
- """
- return list_models(pattern=pattern, task='Image Caption')
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/engine/__init__.py b/spaces/KyanChen/RSPrompter/mmpretrain/engine/__init__.py
deleted file mode 100644
index 7785da7b25950b7f13770e30ba5a5082dd5f8655..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/engine/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .hooks import * # noqa: F401, F403
-from .optimizers import * # noqa: F401, F403
-from .runners import * # noqa: F401, F403
diff --git a/spaces/LightChen2333/OpenSLU/config/examples/README.md b/spaces/LightChen2333/OpenSLU/config/examples/README.md
deleted file mode 100644
index aec8ce8006690c161333a3100dde4c1b7dab2cb5..0000000000000000000000000000000000000000
--- a/spaces/LightChen2333/OpenSLU/config/examples/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Examples
-
-Here we introduce some usage of our framework through configuration files.
-
-## Reload to train
-
-Firstly, you can run this script to train a `joint-bert` model:
-```shell
-python run.py -cp config/examples/normal.yaml
-```
-
-and you can use `kill` or `Ctrl+C` to stop the training process.
-
-Then, to reload the model and continue training, you can run with `reload_to_train.yaml` to restore the checkpoint and training state.
-```shell
-python run.py -cp config/examples/reload_to_train.yaml
-```
-
-The main difference in `reload_to_train.yaml` is the `model_manager` configuration item:
-```yaml
-...
-model_manager:
- load_train_state: True # set to True
- load_dir: save/joint_bert # not null
- ...
-...
-```
-
-## Load from a pre-finetuned model
-We upload all models to [LightChen2333](https://huggingface.co/LightChen2333). You can load those models with a simple configuration.
-In `from_pretrained.yaml` and `from_pretrained_multi.yaml`, we show two example scripts that load single- and multi-intent models from Hugging Face, respectively. The key configuration items are as below:
-```yaml
-tokenizer:
- _from_pretrained_: "'LightChen2333/agif-slu-' + '{dataset.dataset_name}'" # Support simple calculation script
-
-model:
- _from_pretrained_: "'LightChen2333/agif-slu-' + '{dataset.dataset_name}'"
-```
diff --git a/spaces/MackDX/Neptunia/README.md b/spaces/MackDX/Neptunia/README.md
deleted file mode 100644
index d43fc6feb9947a9a987a09d28e276aeeebd158fa..0000000000000000000000000000000000000000
--- a/spaces/MackDX/Neptunia/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Neptunia
-emoji: 🏆
-colorFrom: pink
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/core.c b/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/core.c
deleted file mode 100644
index 4628a9e1febbc34f868cce06748b52c533ad25c7..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/core.c
+++ /dev/null
@@ -1,21299 +0,0 @@
-/* Generated by Cython 0.29.21 */
-
-/* BEGIN: Cython Metadata
-{
- "distutils": {
- "name": "monotonic_align.core",
- "sources": [
- "core.pyx"
- ]
- },
- "module_name": "monotonic_align.core"
-}
-END: Cython Metadata */
-
-#define PY_SSIZE_T_CLEAN
-#include "Python.h"
-#ifndef Py_PYTHON_H
- #error Python headers needed to compile C extensions, please install development version of Python.
-#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
- #error Cython requires Python 2.6+ or Python 3.3+.
-#else
-#define CYTHON_ABI "0_29_21"
-#define CYTHON_HEX_VERSION 0x001D15F0
-#define CYTHON_FUTURE_DIVISION 1
-#include <stddef.h>
-#ifndef offsetof
- #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
-#endif
-#if !defined(WIN32) && !defined(MS_WINDOWS)
- #ifndef __stdcall
- #define __stdcall
- #endif
- #ifndef __cdecl
- #define __cdecl
- #endif
- #ifndef __fastcall
- #define __fastcall
- #endif
-#endif
-#ifndef DL_IMPORT
- #define DL_IMPORT(t) t
-#endif
-#ifndef DL_EXPORT
- #define DL_EXPORT(t) t
-#endif
-#define __PYX_COMMA ,
-#ifndef HAVE_LONG_LONG
- #if PY_VERSION_HEX >= 0x02070000
- #define HAVE_LONG_LONG
- #endif
-#endif
-#ifndef PY_LONG_LONG
- #define PY_LONG_LONG LONG_LONG
-#endif
-#ifndef Py_HUGE_VAL
- #define Py_HUGE_VAL HUGE_VAL
-#endif
-#ifdef PYPY_VERSION
- #define CYTHON_COMPILING_IN_PYPY 1
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #undef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 0
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #if PY_VERSION_HEX < 0x03050000
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #undef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #undef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 1
- #undef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 0
- #undef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 0
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#elif defined(PYSTON_VERSION)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 1
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#else
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 1
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
- #define CYTHON_USE_PYTYPE_LOOKUP 1
- #endif
- #if PY_MAJOR_VERSION < 3
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
- #define CYTHON_USE_PYLONG_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #if PY_VERSION_HEX < 0x030300F0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #elif !defined(CYTHON_USE_UNICODE_WRITER)
- #define CYTHON_USE_UNICODE_WRITER 1
- #endif
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #ifndef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 1
- #endif
- #ifndef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 1
- #endif
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
- #endif
- #ifndef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
- #endif
- #ifndef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
- #endif
-#endif
-#if !defined(CYTHON_FAST_PYCCALL)
-#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
-#endif
-#if CYTHON_USE_PYLONG_INTERNALS
- #include "longintrepr.h"
- #undef SHIFT
- #undef BASE
- #undef MASK
- #ifdef SIZEOF_VOID_P
- enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
- #endif
-#endif
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
- template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
-#endif
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
-#endif
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
-#else
- #include <stdint.h>
-#endif
-#ifndef CYTHON_FALLTHROUGH
- #if defined(__cplusplus) && __cplusplus >= 201103L
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #elif __has_cpp_attribute(gnu::fallthrough)
- #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
- #endif
- #endif
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
- #if defined(__clang__ ) && defined(__apple_build_version__)
- #if __apple_build_version__ < 7000000
- #undef CYTHON_FALLTHROUGH
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-#endif
-
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
- #define Py_OptimizeFlag 0
-#endif
-#define __PYX_BUILD_PY_SSIZE_T "n"
-#define CYTHON_FORMAT_SSIZE_T "z"
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
- #define __Pyx_DefaultClassType PyClass_Type
-#else
- #define __Pyx_BUILTIN_MODULE_NAME "builtins"
-#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#else
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#endif
- #define __Pyx_DefaultClassType PyType_Type
-#endif
-#ifndef Py_TPFLAGS_CHECKTYPES
- #define Py_TPFLAGS_CHECKTYPES 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_INDEX
- #define Py_TPFLAGS_HAVE_INDEX 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
- #define Py_TPFLAGS_HAVE_NEWBUFFER 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_FINALIZE
- #define Py_TPFLAGS_HAVE_FINALIZE 0
-#endif
-#ifndef METH_STACKLESS
- #define METH_STACKLESS 0
-#endif
-#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
- #ifndef METH_FASTCALL
- #define METH_FASTCALL 0x80
- #endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
- Py_ssize_t nargs, PyObject *kwnames);
-#else
- #define __Pyx_PyCFunctionFast _PyCFunctionFast
- #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
-#endif
-#if CYTHON_FAST_PYCCALL
-#define __Pyx_PyFastCFunction_Check(func)\
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
-#else
-#define __Pyx_PyFastCFunction_Check(func) 0
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
- #define PyMem_RawMalloc(n) PyMem_Malloc(n)
- #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
- #define PyMem_RawFree(p) PyMem_Free(p)
-#endif
-#if CYTHON_COMPILING_IN_PYSTON
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#elif PY_VERSION_HEX >= 0x03060000
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
-#endif
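-/* Fallback for CPython < 3.7: the block below emulates the PEP 539 thread-specific storage API (PyThread_tss_*) on top of the older PyThread_*_key functions. */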
-#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
-#include "pythread.h"
-#define Py_tss_NEEDS_INIT 0
-typedef int Py_tss_t;
-static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
- *key = PyThread_create_key();
- return 0;
-}
-static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
- Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
- *key = Py_tss_NEEDS_INIT;
- return key;
-}
-static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
- PyObject_Free(key);
-}
-static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
- return *key != Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
- PyThread_delete_key(*key);
- *key = Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
- return PyThread_set_key_value(*key, value);
-}
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
- return PyThread_get_key_value(*key);
-}
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
-#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
-#else
-#define __Pyx_PyDict_NewPresized(n) PyDict_New()
-#endif
-#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
-#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
-#else
-#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
-#endif
-#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
- #define CYTHON_PEP393_ENABLED 1
- #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
- 0 : _PyUnicode_Ready((PyObject *)(op)))
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
- #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
- #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
- #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
- #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u))
- #endif
-#else
- #define CYTHON_PEP393_ENABLED 0
- #define PyUnicode_1BYTE_KIND 1
- #define PyUnicode_2BYTE_KIND 2
- #define PyUnicode_4BYTE_KIND 4
- #define __Pyx_PyUnicode_READY(op) (0)
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
- #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
- #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
- #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
-#endif
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
-#else
- #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
- PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
- #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
- #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
- #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
-#endif
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
-#else
- #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
-#endif
-#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
- #define PyObject_ASCII(o) PyObject_Repr(o)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBaseString_Type PyUnicode_Type
- #define PyStringObject PyUnicodeObject
- #define PyString_Type PyUnicode_Type
- #define PyString_Check PyUnicode_Check
- #define PyString_CheckExact PyUnicode_CheckExact
-#ifndef PyObject_Unicode
- #define PyObject_Unicode PyObject_Str
-#endif
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
- #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
-#else
- #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
- #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
-#endif
-#ifndef PySet_CheckExact
- #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
-#endif
-#if PY_VERSION_HEX >= 0x030900A4
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size)
-#else
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size)
-#endif
-#if CYTHON_ASSUME_SAFE_MACROS
- #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
-#else
- #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyIntObject PyLongObject
- #define PyInt_Type PyLong_Type
- #define PyInt_Check(op) PyLong_Check(op)
- #define PyInt_CheckExact(op) PyLong_CheckExact(op)
- #define PyInt_FromString PyLong_FromString
- #define PyInt_FromUnicode PyLong_FromUnicode
- #define PyInt_FromLong PyLong_FromLong
- #define PyInt_FromSize_t PyLong_FromSize_t
- #define PyInt_FromSsize_t PyLong_FromSsize_t
- #define PyInt_AsLong PyLong_AsLong
- #define PyInt_AS_LONG PyLong_AS_LONG
- #define PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t PyInt_AsLong
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-#if defined(WIN32) || defined(MS_WINDOWS)
- #define _USE_MATH_DEFINES
-#endif
-#include <math.h>
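-/* If <math.h> does not provide NAN, __PYX_NAN() below synthesizes a NaN by filling a float with 0xFF bytes. */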
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-#define __PYX_MARK_ERR_POS(f_index, lineno) \
- { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; }
-#define __PYX_ERR(f_index, lineno, Ln_error) \
- { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; }
-
-#ifndef __PYX_EXTERN_C
- #ifdef __cplusplus
- #define __PYX_EXTERN_C extern "C"
- #else
- #define __PYX_EXTERN_C extern
- #endif
-#endif
-
-#define __PYX_HAVE__monotonic_align__core
-#define __PYX_HAVE_API__monotonic_align__core
-/* Early includes */
-#include "pythread.h"
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include "pystate.h"
-#ifdef _OPENMP
-#include <omp.h>
-#endif /* _OPENMP */
-
-#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
-#define CYTHON_WITHOUT_ASSERTIONS
-#endif
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
-
-#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
-#define __PYX_DEFAULT_STRING_ENCODING ""
-#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
-#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#define __Pyx_uchar_cast(c) ((unsigned char)c)
-#define __Pyx_long_cast(x) ((long)x)
-#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
- (sizeof(type) < sizeof(Py_ssize_t)) ||\
- (sizeof(type) > sizeof(Py_ssize_t) &&\
- likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX) &&\
- (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
- v == (type)PY_SSIZE_T_MIN))) ||\
- (sizeof(type) == sizeof(Py_ssize_t) &&\
- (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX))) )
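-/* Casting to size_t lets a single unsigned comparison check both 0 <= i and i < limit. */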
-static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
- return (size_t) i < (size_t) limit;
-}
-#if defined (__cplusplus) && __cplusplus >= 201103L
- #include <cstdlib>
- #define __Pyx_sst_abs(value) std::abs(value)
-#elif SIZEOF_INT >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) abs(value)
-#elif SIZEOF_LONG >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER)
- #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
-#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define __Pyx_sst_abs(value) llabs(value)
-#elif defined (__GNUC__)
- #define __Pyx_sst_abs(value) __builtin_llabs(value)
-#else
- #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
-#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
-#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
-#define __Pyx_PyBytes_FromString PyBytes_FromString
-#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#else
- #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
-#endif
-#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
-#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
-#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
-#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
-#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
- const Py_UNICODE *u_end = u;
- while (*u_end++) ;
- return (size_t)(u_end - u - 1);
-}
-#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
-#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
-#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
-#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
-#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
-#define __Pyx_PySequence_Tuple(obj)\
- (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
-#if CYTHON_ASSUME_SAFE_MACROS
-#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
-#else
-#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
-#endif
-#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
-#else
-#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
-#endif
-#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
-static int __Pyx_sys_getdefaultencoding_not_ascii;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- PyObject* ascii_chars_u = NULL;
- PyObject* ascii_chars_b = NULL;
- const char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- if (strcmp(default_encoding_c, "ascii") == 0) {
- __Pyx_sys_getdefaultencoding_not_ascii = 0;
- } else {
- char ascii_chars[128];
- int c;
- for (c = 0; c < 128; c++) {
- ascii_chars[c] = c;
- }
- __Pyx_sys_getdefaultencoding_not_ascii = 1;
- ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
- if (!ascii_chars_u) goto bad;
- ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
- if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
- PyErr_Format(
- PyExc_ValueError,
- "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
- default_encoding_c);
- goto bad;
- }
- Py_DECREF(ascii_chars_u);
- Py_DECREF(ascii_chars_b);
- }
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- Py_XDECREF(ascii_chars_u);
- Py_XDECREF(ascii_chars_b);
- return -1;
-}
-#endif
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
-#else
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-static char* __PYX_DEFAULT_STRING_ENCODING;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
- if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
- strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- return -1;
-}
-#endif
-#endif
-
-
-/* Test for GCC > 2.95 */
-#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
-#else /* !__GNUC__ or GCC < 2.95 */
- #define likely(x) (x)
- #define unlikely(x) (x)
-#endif /* __GNUC__ */
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
-
-static PyObject *__pyx_m = NULL;
-static PyObject *__pyx_d;
-static PyObject *__pyx_b;
-static PyObject *__pyx_cython_runtime = NULL;
-static PyObject *__pyx_empty_tuple;
-static PyObject *__pyx_empty_bytes;
-static PyObject *__pyx_empty_unicode;
-static int __pyx_lineno;
-static int __pyx_clineno = 0;
-static const char * __pyx_cfilenm= __FILE__;
-static const char *__pyx_filename;
-
-
-static const char *__pyx_f[] = {
- "core.pyx",
- "stringsource",
-};
-/* NoFastGil.proto */
-#define __Pyx_PyGILState_Ensure PyGILState_Ensure
-#define __Pyx_PyGILState_Release PyGILState_Release
-#define __Pyx_FastGIL_Remember()
-#define __Pyx_FastGIL_Forget()
-#define __Pyx_FastGilFuncInit()
-
-/* MemviewSliceStruct.proto */
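-/* A __Pyx_memviewslice is the C-side view of a typed memoryview: a pointer to the owning memoryview object plus per-dimension shape, strides and suboffsets (up to 8 dimensions), so slices can be passed around in nogil code. */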
-struct __pyx_memoryview_obj;
-typedef struct {
- struct __pyx_memoryview_obj *memview;
- char *data;
- Py_ssize_t shape[8];
- Py_ssize_t strides[8];
- Py_ssize_t suboffsets[8];
-} __Pyx_memviewslice;
-#define __Pyx_MemoryView_Len(m) (m.shape[0])
-
-/* Atomics.proto */
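-/* Acquisition counts on memoryview slices are adjusted with GCC __sync builtins where available; the MSVC and Intel branches are compiled out (&& 0), and when no atomics are usable the *_locked fallbacks take the per-memoryview lock instead. */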
-#include <pythread.h>
-#ifndef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 1
-#endif
-#define __pyx_atomic_int_type int
-#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\
- (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\
- !defined(__i386__)
- #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1)
- #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1)
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Using GNU atomics"
- #endif
-#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0
- #include <Windows.h>
- #undef __pyx_atomic_int_type
- #define __pyx_atomic_int_type LONG
- #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value)
- #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value)
- #ifdef __PYX_DEBUG_ATOMICS
- #pragma message ("Using MSVC atomics")
- #endif
-#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0
- #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value)
- #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value)
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Using Intel atomics"
- #endif
-#else
- #undef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 0
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Not using atomics"
- #endif
-#endif
-typedef volatile __pyx_atomic_int_type __pyx_atomic_int;
-#if CYTHON_ATOMICS
- #define __pyx_add_acquisition_count(memview)\
- __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock)
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock)
-#else
- #define __pyx_add_acquisition_count(memview)\
- __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
-#endif
-
-/* ForceInitThreads.proto */
-#ifndef __PYX_FORCE_INIT_THREADS
- #define __PYX_FORCE_INIT_THREADS 0
-#endif
-
-/* BufferFormatStructs.proto */
-#define IS_UNSIGNED(type) (((type) -1) > 0)
-struct __Pyx_StructField_;
-#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)
-typedef struct {
- const char* name;
- struct __Pyx_StructField_* fields;
- size_t size;
- size_t arraysize[8];
- int ndim;
- char typegroup;
- char is_unsigned;
- int flags;
-} __Pyx_TypeInfo;
-typedef struct __Pyx_StructField_ {
- __Pyx_TypeInfo* type;
- const char* name;
- size_t offset;
-} __Pyx_StructField;
-typedef struct {
- __Pyx_StructField* field;
- size_t parent_offset;
-} __Pyx_BufFmt_StackElem;
-typedef struct {
- __Pyx_StructField root;
- __Pyx_BufFmt_StackElem* head;
- size_t fmt_offset;
- size_t new_count, enc_count;
- size_t struct_alignment;
- int is_complex;
- char enc_type;
- char new_packmode;
- char enc_packmode;
- char is_valid_array;
-} __Pyx_BufFmt_Context;
-
-
-/*--- Type declarations ---*/
-struct __pyx_array_obj;
-struct __pyx_MemviewEnum_obj;
-struct __pyx_memoryview_obj;
-struct __pyx_memoryviewslice_obj;
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each;
-
-/* "monotonic_align/core.pyx":9
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
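-/* Optional-argument record for maximum_path_each: __pyx_n holds how many optional arguments were passed, and max_neg_val carries its value when supplied (default -1e9 per the signature quoted above). */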
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each {
- int __pyx_n;
- float max_neg_val;
-};
-
-/* "View.MemoryView":105
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-struct __pyx_array_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_array *__pyx_vtab;
- char *data;
- Py_ssize_t len;
- char *format;
- int ndim;
- Py_ssize_t *_shape;
- Py_ssize_t *_strides;
- Py_ssize_t itemsize;
- PyObject *mode;
- PyObject *_format;
- void (*callback_free_data)(void *);
- int free_data;
- int dtype_is_object;
-};
-
-
-/* "View.MemoryView":279
- *
- * @cname('__pyx_MemviewEnum')
- * cdef class Enum(object): # <<<<<<<<<<<<<<
- * cdef object name
- * def __init__(self, name):
- */
-struct __pyx_MemviewEnum_obj {
- PyObject_HEAD
- PyObject *name;
-};
-
-
-/* "View.MemoryView":330
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-struct __pyx_memoryview_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_memoryview *__pyx_vtab;
- PyObject *obj;
- PyObject *_size;
- PyObject *_array_interface;
- PyThread_type_lock lock;
- __pyx_atomic_int acquisition_count[2];
- __pyx_atomic_int *acquisition_count_aligned_p;
- Py_buffer view;
- int flags;
- int dtype_is_object;
- __Pyx_TypeInfo *typeinfo;
-};
-
-
-/* "View.MemoryView":965
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-struct __pyx_memoryviewslice_obj {
- struct __pyx_memoryview_obj __pyx_base;
- __Pyx_memviewslice from_slice;
- PyObject *from_object;
- PyObject *(*to_object_func)(char *);
- int (*to_dtype_func)(char *, PyObject *);
-};
-
-
-
-/* "View.MemoryView":105
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-
-struct __pyx_vtabstruct_array {
- PyObject *(*get_memview)(struct __pyx_array_obj *);
-};
-static struct __pyx_vtabstruct_array *__pyx_vtabptr_array;
-
-
-/* "View.MemoryView":330
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-
-struct __pyx_vtabstruct_memoryview {
- char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *);
- PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *);
-};
-static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview;
-
-
-/* "View.MemoryView":965
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-
-struct __pyx_vtabstruct__memoryviewslice {
- struct __pyx_vtabstruct_memoryview __pyx_base;
-};
-static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice;
-
-/* --- Runtime support code (head) --- */
-/* Refnanny.proto */
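-/* Reference-count debugging hooks; when CYTHON_REFNANNY is off they reduce to plain Py_INCREF/Py_DECREF and no-ops. */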
-#ifndef CYTHON_REFNANNY
- #define CYTHON_REFNANNY 0
-#endif
-#if CYTHON_REFNANNY
- typedef struct {
- void (*INCREF)(void*, PyObject*, int);
- void (*DECREF)(void*, PyObject*, int);
- void (*GOTREF)(void*, PyObject*, int);
- void (*GIVEREF)(void*, PyObject*, int);
- void* (*SetupContext)(const char*, int, const char*);
- void (*FinishContext)(void**);
- } __Pyx_RefNannyAPIStruct;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
- #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
-#ifdef WITH_THREAD
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- if (acquire_gil) {\
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- PyGILState_Release(__pyx_gilstate_save);\
- } else {\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- }
-#else
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
-#endif
- #define __Pyx_RefNannyFinishContext()\
- __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
- #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
- #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
- #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
- #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
-#else
- #define __Pyx_RefNannyDeclarations
- #define __Pyx_RefNannySetupContext(name, acquire_gil)
- #define __Pyx_RefNannyFinishContext()
- #define __Pyx_INCREF(r) Py_INCREF(r)
- #define __Pyx_DECREF(r) Py_DECREF(r)
- #define __Pyx_GOTREF(r)
- #define __Pyx_GIVEREF(r)
- #define __Pyx_XINCREF(r) Py_XINCREF(r)
- #define __Pyx_XDECREF(r) Py_XDECREF(r)
- #define __Pyx_XGOTREF(r)
- #define __Pyx_XGIVEREF(r)
-#endif
-#define __Pyx_XDECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_XDECREF(tmp);\
- } while (0)
-#define __Pyx_DECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_DECREF(tmp);\
- } while (0)
-#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
-
-/* PyObjectGetAttrStr.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
-#endif
-
-/* GetBuiltinName.proto */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name);
-
-/* MemviewSliceInit.proto */
-#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d
-#define __Pyx_MEMVIEW_DIRECT 1
-#define __Pyx_MEMVIEW_PTR 2
-#define __Pyx_MEMVIEW_FULL 4
-#define __Pyx_MEMVIEW_CONTIG 8
-#define __Pyx_MEMVIEW_STRIDED 16
-#define __Pyx_MEMVIEW_FOLLOW 32
-#define __Pyx_IS_C_CONTIG 1
-#define __Pyx_IS_F_CONTIG 2
-static int __Pyx_init_memviewslice(
- struct __pyx_memoryview_obj *memview,
- int ndim,
- __Pyx_memviewslice *memviewslice,
- int memview_is_new_reference);
-static CYTHON_INLINE int __pyx_add_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-static CYTHON_INLINE int __pyx_sub_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p)
-#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview))
-#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__)
-#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__)
-static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int);
-static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int);
-
-/* RaiseArgTupleInvalid.proto */
-static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
- Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
-
-/* RaiseDoubleKeywords.proto */
-static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
-
-/* ParseKeywords.proto */
-static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
- PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
- const char* function_name);
-
-/* None.proto */
-static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname);
-
-/* ArgTypeTest.proto */
-#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\
- ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\
- __Pyx__ArgTypeTest(obj, type, name, exact))
-static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);
-
-/* PyObjectCall.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
-#else
-#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
-#endif
-
-/* PyThreadStateGet.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
-#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
-#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
-#else
-#define __Pyx_PyThreadState_declare
-#define __Pyx_PyThreadState_assign
-#define __Pyx_PyErr_Occurred() PyErr_Occurred()
-#endif
-
-/* PyErrFetchRestore.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
-#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
-#else
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#endif
-#else
-#define __Pyx_PyErr_Clear() PyErr_Clear()
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
-#endif
-
-/* RaiseException.proto */
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
-
-/* PyCFunctionFastCall.proto */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
-#else
-#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
-#endif
-
-/* PyFunctionFastCall.proto */
-#if CYTHON_FAST_PYCALL
-#define __Pyx_PyFunction_FastCall(func, args, nargs)\
- __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
-#else
-#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
-#endif
-#define __Pyx_BUILD_ASSERT_EXPR(cond)\
- (sizeof(char [1 - 2*!(cond)]) - 1)
-#ifndef Py_MEMBER_SIZE
-#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
-#endif
- static size_t __pyx_pyframe_localsplus_offset = 0;
- #include "frameobject.h"
- #define __Pxy_PyFrame_Initialize_Offsets()\
- ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\
- (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
- #define __Pyx_PyFrame_GetLocalsplus(frame)\
- (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
-#endif
-
-/* PyObjectCall2Args.proto */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
-
-/* PyObjectCallMethO.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
-#endif
-
-/* PyObjectCallOneArg.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* BytesEquals.proto */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* UnicodeEquals.proto */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* StrEquals.proto */
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
-#else
-#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
-#endif
-
-/* None.proto */
-static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);
-
-/* UnaryNegOverflows.proto */
-#define UNARY_NEG_WOULD_OVERFLOW(x)\
- (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x)))
-
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/
-/* GetAttr.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *);
-
-/* GetItemInt.proto */
-#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
- (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
- __Pyx_GetItemInt_Generic(o, to_py_func(i))))
-#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
- int is_list, int wraparound, int boundscheck);
-
-/* ObjectGetItem.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);
-#else
-#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key)
-#endif
-
-/* decode_c_string_utf16.proto */
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 0;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = -1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-
-/* decode_c_string.proto */
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
-
-/* PyErrExceptionMatches.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
-#else
-#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
-#endif
-
-/* GetAttr3.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *);
-
-/* PyDictVersioning.proto */
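-/* Caches lookups keyed on CPython's dict ma_version_tag, so repeated module-global and attribute lookups can reuse the cached value until the dict actually changes. */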
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
-#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\
- (version_var) = __PYX_GET_DICT_VERSION(dict);\
- (cache_var) = (value);
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
- (VAR) = __pyx_dict_cached_value;\
- } else {\
- (VAR) = __pyx_dict_cached_value = (LOOKUP);\
- __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
- }\
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);
-#else
-#define __PYX_GET_DICT_VERSION(dict) (0)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
-#endif
-
-/* GetModuleGlobalName.proto */
-#if CYTHON_USE_DICT_VERSIONS
-#define __Pyx_GetModuleGlobalName(var, name) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\
- (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\
- __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-}
-#define __Pyx_GetModuleGlobalNameUncached(var, name) {\
- PY_UINT64_T __pyx_dict_version;\
- PyObject *__pyx_dict_cached_value;\
- (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-}
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);
-#else
-#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);
-#endif
-
-/* RaiseTooManyValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
-
-/* RaiseNeedMoreValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
-
-/* RaiseNoneIterError.proto */
-static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
-
-/* ExtTypeTest.proto */
-static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
-
-/* GetTopmostException.proto */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
-#endif
-
-/* SaveResetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-#else
-#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
-#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
-#endif
-
-/* GetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* SwapException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* Import.proto */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
-
-/* FastTypeChecks.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
-#else
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
-#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
-#endif
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-/* ListCompAppend.proto */
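-/* Fast list append used by comprehensions: when the list still has spare capacity the item is written directly into its preallocated slots and the size bumped, otherwise it falls back to PyList_Append. */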
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len)) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* PyIntBinop.proto */
-#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check);
-#else
-#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\
- (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
-#endif
-
-/* ListExtend.proto */
-static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
-#if CYTHON_COMPILING_IN_CPYTHON
- PyObject* none = _PyList_Extend((PyListObject*)L, v);
- if (unlikely(!none))
- return -1;
- Py_DECREF(none);
- return 0;
-#else
- return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v);
-#endif
-}
-
-/* ListAppend.proto */
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* None.proto */
-static CYTHON_INLINE long __Pyx_div_long(long, long);
-
-/* ImportFrom.proto */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
-
-/* HasAttr.proto */
-static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *);
-
-/* PyObject_GenericGetAttrNoDict.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
-#endif
-
-/* PyObject_GenericGetAttr.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
-#endif
-
-/* SetVTable.proto */
-static int __Pyx_SetVtable(PyObject *dict, void *vtable);
-
-/* PyObjectGetAttrStrNoError.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name);
-
-/* SetupReduce.proto */
-static int __Pyx_setup_reduce(PyObject* type_obj);
-
-/* CLineInTraceback.proto */
-#ifdef CYTHON_CLINE_IN_TRACEBACK
-#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
-#else
-static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
-#endif
-
-/* CodeObjectCache.proto */
-typedef struct {
- PyCodeObject* code_object;
- int code_line;
-} __Pyx_CodeObjectCacheEntry;
-struct __Pyx_CodeObjectCache {
- int count;
- int max_count;
- __Pyx_CodeObjectCacheEntry* entries;
-};
-static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
-static PyCodeObject *__pyx_find_code_object(int code_line);
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
-
-/* AddTraceback.proto */
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename);
-
-#if PY_MAJOR_VERSION < 3
- static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
- static void __Pyx_ReleaseBuffer(Py_buffer *view);
-#else
- #define __Pyx_GetBuffer PyObject_GetBuffer
- #define __Pyx_ReleaseBuffer PyBuffer_Release
-#endif
-
-
-/* BufferStructDeclare.proto */
-typedef struct {
- Py_ssize_t shape, strides, suboffsets;
-} __Pyx_Buf_DimInfo;
-typedef struct {
- size_t refcount;
- Py_buffer pybuffer;
-} __Pyx_Buffer;
-typedef struct {
- __Pyx_Buffer *rcbuffer;
- char *data;
- __Pyx_Buf_DimInfo diminfo[8];
-} __Pyx_LocalBuf_ND;
-
-/* MemviewSliceIsContig.proto */
-static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim);
-
-/* OverlappingSlices.proto */
-static int __pyx_slices_overlap(__Pyx_memviewslice *slice1,
- __Pyx_memviewslice *slice2,
- int ndim, size_t itemsize);
-
-/* Capsule.proto */
-static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig);
-
-/* IsLittleEndian.proto */
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
-
-/* BufferFormatCheck.proto */
-static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
-static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
- __Pyx_BufFmt_StackElem* stack,
- __Pyx_TypeInfo* type);
-
-/* TypeInfoCompare.proto */
-static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b);
-
-/* MemviewSliceValidateAndInit.proto */
-static int __Pyx_ValidateAndInit_memviewslice(
- int *axes_specs,
- int c_or_f_flag,
- int buf_flags,
- int ndim,
- __Pyx_TypeInfo *dtype,
- __Pyx_BufFmt_StackElem stack[],
- __Pyx_memviewslice *memviewslice,
- PyObject *original_obj);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
-
-/* MemviewSliceCopyTemplate.proto */
-static __Pyx_memviewslice
-__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs,
- const char *mode, int ndim,
- size_t sizeof_dtype, int contig_flag,
- int dtype_is_object);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *);
-
-/* CheckBinaryVersion.proto */
-static int __Pyx_check_binary_version(void);
-
-/* InitStrings.proto */
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-
-/* Module declarations from 'cython.view' */
-
-/* Module declarations from 'cython' */
-
-/* Module declarations from 'monotonic_align.core' */
-static PyTypeObject *__pyx_array_type = 0;
-static PyTypeObject *__pyx_MemviewEnum_type = 0;
-static PyTypeObject *__pyx_memoryview_type = 0;
-static PyTypeObject *__pyx_memoryviewslice_type = 0;
-static PyObject *generic = 0;
-static PyObject *strided = 0;
-static PyObject *indirect = 0;
-static PyObject *contiguous = 0;
-static PyObject *indirect_contiguous = 0;
-static int __pyx_memoryview_thread_locks_used;
-static PyThread_type_lock __pyx_memoryview_thread_locks[8];
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/
-static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/
-static void *__pyx_align_pointer(void *, size_t); /*proto*/
-static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/
-static PyObject *_unellipsify(PyObject *, int); /*proto*/
-static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/
-static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/
-static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/
-static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/
-static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/
-static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/
-static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/
-static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/
-static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/
-static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/
-static int __pyx_memoryview_err(PyObject *, char *); /*proto*/
-static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/
-static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/
-static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/
-static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 };
-static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 };
-#define __Pyx_MODULE_NAME "monotonic_align.core"
-extern int __pyx_module_is_main_monotonic_align__core;
-int __pyx_module_is_main_monotonic_align__core = 0;
-
-/* Implementation of 'monotonic_align.core' */
-static PyObject *__pyx_builtin_range;
-static PyObject *__pyx_builtin_ValueError;
-static PyObject *__pyx_builtin_MemoryError;
-static PyObject *__pyx_builtin_enumerate;
-static PyObject *__pyx_builtin_TypeError;
-static PyObject *__pyx_builtin_Ellipsis;
-static PyObject *__pyx_builtin_id;
-static PyObject *__pyx_builtin_IndexError;
-static const char __pyx_k_O[] = "O";
-static const char __pyx_k_c[] = "c";
-static const char __pyx_k_id[] = "id";
-static const char __pyx_k_new[] = "__new__";
-static const char __pyx_k_obj[] = "obj";
-static const char __pyx_k_base[] = "base";
-static const char __pyx_k_dict[] = "__dict__";
-static const char __pyx_k_main[] = "__main__";
-static const char __pyx_k_mode[] = "mode";
-static const char __pyx_k_name[] = "name";
-static const char __pyx_k_ndim[] = "ndim";
-static const char __pyx_k_pack[] = "pack";
-static const char __pyx_k_size[] = "size";
-static const char __pyx_k_step[] = "step";
-static const char __pyx_k_stop[] = "stop";
-static const char __pyx_k_t_xs[] = "t_xs";
-static const char __pyx_k_t_ys[] = "t_ys";
-static const char __pyx_k_test[] = "__test__";
-static const char __pyx_k_ASCII[] = "ASCII";
-static const char __pyx_k_class[] = "__class__";
-static const char __pyx_k_error[] = "error";
-static const char __pyx_k_flags[] = "flags";
-static const char __pyx_k_paths[] = "paths";
-static const char __pyx_k_range[] = "range";
-static const char __pyx_k_shape[] = "shape";
-static const char __pyx_k_start[] = "start";
-static const char __pyx_k_encode[] = "encode";
-static const char __pyx_k_format[] = "format";
-static const char __pyx_k_import[] = "__import__";
-static const char __pyx_k_name_2[] = "__name__";
-static const char __pyx_k_pickle[] = "pickle";
-static const char __pyx_k_reduce[] = "__reduce__";
-static const char __pyx_k_struct[] = "struct";
-static const char __pyx_k_unpack[] = "unpack";
-static const char __pyx_k_update[] = "update";
-static const char __pyx_k_values[] = "values";
-static const char __pyx_k_fortran[] = "fortran";
-static const char __pyx_k_memview[] = "memview";
-static const char __pyx_k_Ellipsis[] = "Ellipsis";
-static const char __pyx_k_getstate[] = "__getstate__";
-static const char __pyx_k_itemsize[] = "itemsize";
-static const char __pyx_k_pyx_type[] = "__pyx_type";
-static const char __pyx_k_setstate[] = "__setstate__";
-static const char __pyx_k_TypeError[] = "TypeError";
-static const char __pyx_k_enumerate[] = "enumerate";
-static const char __pyx_k_pyx_state[] = "__pyx_state";
-static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
-static const char __pyx_k_IndexError[] = "IndexError";
-static const char __pyx_k_ValueError[] = "ValueError";
-static const char __pyx_k_pyx_result[] = "__pyx_result";
-static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__";
-static const char __pyx_k_MemoryError[] = "MemoryError";
-static const char __pyx_k_PickleError[] = "PickleError";
-static const char __pyx_k_pyx_checksum[] = "__pyx_checksum";
-static const char __pyx_k_stringsource[] = "stringsource";
-static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer";
-static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
-static const char __pyx_k_View_MemoryView[] = "View.MemoryView";
-static const char __pyx_k_allocate_buffer[] = "allocate_buffer";
-static const char __pyx_k_dtype_is_object[] = "dtype_is_object";
-static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError";
-static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
-static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum";
-static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
-static const char __pyx_k_strided_and_direct[] = "<strided and direct>";
-static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>";
-static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>";
-static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>";
-static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>";
-static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>";
-static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'";
-static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d.";
-static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array";
-static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data.";
-static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>";
-static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides";
-static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory.";
-static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview";
-static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview";
-static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array";
-static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))";
-static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported";
-static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s";
-static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)";
-static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object";
-static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)";
-static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
-static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides.";
-static PyObject *__pyx_n_s_ASCII;
-static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri;
-static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is;
-static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor;
-static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi;
-static PyObject *__pyx_kp_s_Cannot_index_with_type_s;
-static PyObject *__pyx_n_s_Ellipsis;
-static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr;
-static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0;
-static PyObject *__pyx_n_s_IndexError;
-static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte;
-static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr;
-static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d;
-static PyObject *__pyx_n_s_MemoryError;
-static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x;
-static PyObject *__pyx_kp_s_MemoryView_of_r_object;
-static PyObject *__pyx_n_b_O;
-static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a;
-static PyObject *__pyx_n_s_PickleError;
-static PyObject *__pyx_n_s_TypeError;
-static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object;
-static PyObject *__pyx_n_s_ValueError;
-static PyObject *__pyx_n_s_View_MemoryView;
-static PyObject *__pyx_n_s_allocate_buffer;
-static PyObject *__pyx_n_s_base;
-static PyObject *__pyx_n_s_c;
-static PyObject *__pyx_n_u_c;
-static PyObject *__pyx_n_s_class;
-static PyObject *__pyx_n_s_cline_in_traceback;
-static PyObject *__pyx_kp_s_contiguous_and_direct;
-static PyObject *__pyx_kp_s_contiguous_and_indirect;
-static PyObject *__pyx_n_s_dict;
-static PyObject *__pyx_n_s_dtype_is_object;
-static PyObject *__pyx_n_s_encode;
-static PyObject *__pyx_n_s_enumerate;
-static PyObject *__pyx_n_s_error;
-static PyObject *__pyx_n_s_flags;
-static PyObject *__pyx_n_s_format;
-static PyObject *__pyx_n_s_fortran;
-static PyObject *__pyx_n_u_fortran;
-static PyObject *__pyx_n_s_getstate;
-static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi;
-static PyObject *__pyx_n_s_id;
-static PyObject *__pyx_n_s_import;
-static PyObject *__pyx_n_s_itemsize;
-static PyObject *__pyx_kp_s_itemsize_0_for_cython_array;
-static PyObject *__pyx_n_s_main;
-static PyObject *__pyx_n_s_memview;
-static PyObject *__pyx_n_s_mode;
-static PyObject *__pyx_n_s_name;
-static PyObject *__pyx_n_s_name_2;
-static PyObject *__pyx_n_s_ndim;
-static PyObject *__pyx_n_s_new;
-static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
-static PyObject *__pyx_n_s_obj;
-static PyObject *__pyx_n_s_pack;
-static PyObject *__pyx_n_s_paths;
-static PyObject *__pyx_n_s_pickle;
-static PyObject *__pyx_n_s_pyx_PickleError;
-static PyObject *__pyx_n_s_pyx_checksum;
-static PyObject *__pyx_n_s_pyx_getbuffer;
-static PyObject *__pyx_n_s_pyx_result;
-static PyObject *__pyx_n_s_pyx_state;
-static PyObject *__pyx_n_s_pyx_type;
-static PyObject *__pyx_n_s_pyx_unpickle_Enum;
-static PyObject *__pyx_n_s_pyx_vtable;
-static PyObject *__pyx_n_s_range;
-static PyObject *__pyx_n_s_reduce;
-static PyObject *__pyx_n_s_reduce_cython;
-static PyObject *__pyx_n_s_reduce_ex;
-static PyObject *__pyx_n_s_setstate;
-static PyObject *__pyx_n_s_setstate_cython;
-static PyObject *__pyx_n_s_shape;
-static PyObject *__pyx_n_s_size;
-static PyObject *__pyx_n_s_start;
-static PyObject *__pyx_n_s_step;
-static PyObject *__pyx_n_s_stop;
-static PyObject *__pyx_kp_s_strided_and_direct;
-static PyObject *__pyx_kp_s_strided_and_direct_or_indirect;
-static PyObject *__pyx_kp_s_strided_and_indirect;
-static PyObject *__pyx_kp_s_stringsource;
-static PyObject *__pyx_n_s_struct;
-static PyObject *__pyx_n_s_t_xs;
-static PyObject *__pyx_n_s_t_ys;
-static PyObject *__pyx_n_s_test;
-static PyObject *__pyx_kp_s_unable_to_allocate_array_data;
-static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str;
-static PyObject *__pyx_n_s_unpack;
-static PyObject *__pyx_n_s_update;
-static PyObject *__pyx_n_s_values;
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_int_0;
-static PyObject *__pyx_int_1;
-static PyObject *__pyx_int_184977713;
-static PyObject *__pyx_int_neg_1;
-static float __pyx_k_;
-static PyObject *__pyx_tuple__2;
-static PyObject *__pyx_tuple__3;
-static PyObject *__pyx_tuple__4;
-static PyObject *__pyx_tuple__5;
-static PyObject *__pyx_tuple__6;
-static PyObject *__pyx_tuple__7;
-static PyObject *__pyx_tuple__8;
-static PyObject *__pyx_tuple__9;
-static PyObject *__pyx_slice__16;
-static PyObject *__pyx_tuple__10;
-static PyObject *__pyx_tuple__11;
-static PyObject *__pyx_tuple__12;
-static PyObject *__pyx_tuple__13;
-static PyObject *__pyx_tuple__14;
-static PyObject *__pyx_tuple__15;
-static PyObject *__pyx_tuple__17;
-static PyObject *__pyx_tuple__18;
-static PyObject *__pyx_tuple__19;
-static PyObject *__pyx_tuple__20;
-static PyObject *__pyx_tuple__21;
-static PyObject *__pyx_tuple__22;
-static PyObject *__pyx_tuple__23;
-static PyObject *__pyx_tuple__24;
-static PyObject *__pyx_tuple__25;
-static PyObject *__pyx_codeobj__26;
-/* Late includes */
-
-/* "monotonic_align/core.pyx":9
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) {
- float __pyx_v_max_neg_val = __pyx_k_;
- int __pyx_v_x;
- int __pyx_v_y;
- float __pyx_v_v_prev;
- float __pyx_v_v_cur;
- int __pyx_v_index;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- long __pyx_t_4;
- int __pyx_t_5;
- long __pyx_t_6;
- long __pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- Py_ssize_t __pyx_t_10;
- float __pyx_t_11;
- float __pyx_t_12;
- float __pyx_t_13;
- int __pyx_t_14;
- Py_ssize_t __pyx_t_15;
- Py_ssize_t __pyx_t_16;
- if (__pyx_optional_args) {
- if (__pyx_optional_args->__pyx_n > 0) {
- __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val;
- }
- }
-
- /* "monotonic_align/core.pyx":15
- * cdef float v_cur
- * cdef float tmp
- * cdef int index = t_x - 1 # <<<<<<<<<<<<<<
- *
- * for y in range(t_y):
- */
- __pyx_v_index = (__pyx_v_t_x - 1);
-
- /* "monotonic_align/core.pyx":17
- * cdef int index = t_x - 1
- *
- * for y in range(t_y): # <<<<<<<<<<<<<<
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- */
- __pyx_t_1 = __pyx_v_t_y;
- __pyx_t_2 = __pyx_t_1;
- for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
- __pyx_v_y = __pyx_t_3;
-
- /* "monotonic_align/core.pyx":18
- *
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<<
- * if x == y:
- * v_cur = max_neg_val
- */
- __pyx_t_4 = (__pyx_v_y + 1);
- __pyx_t_5 = __pyx_v_t_x;
- if (((__pyx_t_4 < __pyx_t_5) != 0)) {
- __pyx_t_6 = __pyx_t_4;
- } else {
- __pyx_t_6 = __pyx_t_5;
- }
- __pyx_t_4 = __pyx_t_6;
- __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y);
- __pyx_t_6 = 0;
- if (((__pyx_t_5 > __pyx_t_6) != 0)) {
- __pyx_t_7 = __pyx_t_5;
- } else {
- __pyx_t_7 = __pyx_t_6;
- }
- __pyx_t_6 = __pyx_t_4;
- for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) {
- __pyx_v_x = __pyx_t_5;
-
- /* "monotonic_align/core.pyx":19
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":20
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- * v_cur = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_cur = value[y-1, x]
- */
- __pyx_v_v_cur = __pyx_v_max_neg_val;
-
- /* "monotonic_align/core.pyx":19
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "monotonic_align/core.pyx":22
- * v_cur = max_neg_val
- * else:
- * v_cur = value[y-1, x] # <<<<<<<<<<<<<<
- * if x == 0:
- * if y == 0:
- */
- /*else*/ {
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_x;
- __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )));
- }
- __pyx_L7:;
-
- /* "monotonic_align/core.pyx":23
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- __pyx_t_8 = ((__pyx_v_x == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":24
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- __pyx_t_8 = ((__pyx_v_y == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":25
- * if x == 0:
- * if y == 0:
- * v_prev = 0. # <<<<<<<<<<<<<<
- * else:
- * v_prev = max_neg_val
- */
- __pyx_v_v_prev = 0.;
-
- /* "monotonic_align/core.pyx":24
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "monotonic_align/core.pyx":27
- * v_prev = 0.
- * else:
- * v_prev = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_prev = value[y-1, x-1]
- */
- /*else*/ {
- __pyx_v_v_prev = __pyx_v_max_neg_val;
- }
- __pyx_L9:;
-
- /* "monotonic_align/core.pyx":23
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- goto __pyx_L8;
- }
-
- /* "monotonic_align/core.pyx":29
- * v_prev = max_neg_val
- * else:
- * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<<
- * value[y, x] += max(v_prev, v_cur)
- *
- */
- /*else*/ {
- __pyx_t_10 = (__pyx_v_y - 1);
- __pyx_t_9 = (__pyx_v_x - 1);
- __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) )));
- }
- __pyx_L8:;
-
- /* "monotonic_align/core.pyx":30
- * else:
- * v_prev = value[y-1, x-1]
- * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<<
- *
- * for y in range(t_y - 1, -1, -1):
- */
- __pyx_t_11 = __pyx_v_v_cur;
- __pyx_t_12 = __pyx_v_v_prev;
- if (((__pyx_t_11 > __pyx_t_12) != 0)) {
- __pyx_t_13 = __pyx_t_11;
- } else {
- __pyx_t_13 = __pyx_t_12;
- }
- __pyx_t_9 = __pyx_v_y;
- __pyx_t_10 = __pyx_v_x;
- *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13;
- }
- }
-
- /* "monotonic_align/core.pyx":32
- * value[y, x] += max(v_prev, v_cur)
- *
- * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<<
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- */
- for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_y = __pyx_t_1;
-
- /* "monotonic_align/core.pyx":33
- *
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1 # <<<<<<<<<<<<<<
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1
- */
- __pyx_t_10 = __pyx_v_y;
- __pyx_t_9 = __pyx_v_index;
- *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1;
-
- /* "monotonic_align/core.pyx":34
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- __pyx_t_14 = ((__pyx_v_index != 0) != 0);
- if (__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0);
- if (!__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_index;
- __pyx_t_15 = (__pyx_v_y - 1);
- __pyx_t_16 = (__pyx_v_index - 1);
- __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0);
- __pyx_t_8 = __pyx_t_14;
- __pyx_L13_bool_binop_done:;
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":35
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_index = (__pyx_v_index - 1);
-
- /* "monotonic_align/core.pyx":34
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- }
- }
-
- /* "monotonic_align/core.pyx":9
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
- /* function exit code */
-}
-
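The block above is Cython's C expansion of maximum_path_each from monotonic_align/core.pyx, whose source appears verbatim in the interleaved comments. As a reading aid, here is a minimal plain-Python sketch of the same dynamic program, reconstructed only from those quoted .pyx lines; it assumes value and path are 2-D NumPy arrays (float and int respectively) and is not part of the generated module.

    def maximum_path_each(path, value, t_y, t_x, max_neg_val=-1e9):
        # Forward pass: value[y, x] accumulates the best monotonic-alignment
        # score that reaches cell (y, x); unreachable cells stay at max_neg_val.
        index = t_x - 1
        for y in range(t_y):
            for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
                v_cur = max_neg_val if x == y else value[y - 1, x]
                if x == 0:
                    v_prev = 0.0 if y == 0 else max_neg_val
                else:
                    v_prev = value[y - 1, x - 1]
                value[y, x] += max(v_prev, v_cur)
        # Backtracking: mark one cell per row, moving left when forced
        # (index == y) or when the diagonal predecessor scored strictly better.
        for y in range(t_y - 1, -1, -1):
            path[y, index] = 1
            if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
                index -= 1
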
-/* "monotonic_align/core.pyx":40
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) {
- CYTHON_UNUSED int __pyx_v_b;
- int __pyx_v_i;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } };
- Py_ssize_t __pyx_t_6;
- Py_ssize_t __pyx_t_7;
-
- /* "monotonic_align/core.pyx":41
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
- * cdef int b = paths.shape[0] # <<<<<<<<<<<<<<
- * cdef int i
- * for i in prange(b, nogil=True):
- */
- __pyx_v_b = (__pyx_v_paths.shape[0]);
-
- /* "monotonic_align/core.pyx":43
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- {
- #ifdef WITH_THREAD
- PyThreadState *_save;
- Py_UNBLOCK_THREADS
- __Pyx_FastGIL_Remember();
- #endif
- /*try:*/ {
- __pyx_t_1 = __pyx_v_b;
- if ((1 == 0)) abort();
- {
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) (x)
- #define unlikely(x) (x)
- #endif
- __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1;
- if (__pyx_t_3 > 0)
- {
- #ifdef _OPENMP
- #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5)
- #endif /* _OPENMP */
- {
- #ifdef _OPENMP
- #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i)
- #endif /* _OPENMP */
- for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){
- {
- __pyx_v_i = (int)(0 + 1 * __pyx_t_2);
-
- /* "monotonic_align/core.pyx":44
- * cdef int i
- * for i in prange(b, nogil=True):
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<<
- */
- __pyx_t_4.data = __pyx_v_paths.data;
- __pyx_t_4.memview = __pyx_v_paths.memview;
- __PYX_INC_MEMVIEW(&__pyx_t_4, 0);
- {
- Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
- Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0];
- __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride;
-}
-
-__pyx_t_4.shape[0] = __pyx_v_paths.shape[1];
-__pyx_t_4.strides[0] = __pyx_v_paths.strides[1];
- __pyx_t_4.suboffsets[0] = -1;
-
-__pyx_t_4.shape[1] = __pyx_v_paths.shape[2];
-__pyx_t_4.strides[1] = __pyx_v_paths.strides[2];
- __pyx_t_4.suboffsets[1] = -1;
-
-__pyx_t_5.data = __pyx_v_values.data;
- __pyx_t_5.memview = __pyx_v_values.memview;
- __PYX_INC_MEMVIEW(&__pyx_t_5, 0);
- {
- Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
- Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0];
- __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride;
-}
-
-__pyx_t_5.shape[0] = __pyx_v_values.shape[1];
-__pyx_t_5.strides[0] = __pyx_v_values.strides[1];
- __pyx_t_5.suboffsets[0] = -1;
-
-__pyx_t_5.shape[1] = __pyx_v_values.shape[2];
-__pyx_t_5.strides[1] = __pyx_v_values.strides[2];
- __pyx_t_5.suboffsets[1] = -1;
-
-__pyx_t_6 = __pyx_v_i;
- __pyx_t_7 = __pyx_v_i;
- __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL);
- __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0);
- __pyx_t_4.memview = NULL;
- __pyx_t_4.data = NULL;
- __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0);
- __pyx_t_5.memview = NULL;
- __pyx_t_5.data = NULL;
- }
- }
- }
- }
- }
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
- #endif
- }
-
- /* "monotonic_align/core.pyx":43
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- /*finally:*/ {
- /*normal exit:*/{
- #ifdef WITH_THREAD
- __Pyx_FastGIL_Forget();
- Py_BLOCK_THREADS
- #endif
- goto __pyx_L5;
- }
- __pyx_L5:;
- }
- }
-
- /* "monotonic_align/core.pyx":40
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
- /* function exit code */
-}
-
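That is the whole body of maximum_path_c: the `for i in prange(b, nogil=True)` loop from the .pyx source is lowered to the `#pragma omp parallel` / `#pragma omp for` region above, with the per-iteration memoryview slices paths[i] and values[i] assembled by hand. Ignoring the OpenMP parallelism, the batch loop reduces to the serial sketch below, again reconstructed from the quoted .pyx lines and reusing the maximum_path_each sketch given earlier.

    def maximum_path_c(paths, values, t_ys, t_xs):
        # One independent alignment per batch element; the generated code runs
        # these iterations in parallel under OpenMP, here they run serially.
        b = paths.shape[0]
        for i in range(b):
            maximum_path_each(paths[i], values[i], int(t_ys[i]), int(t_xs[i]))
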
-/* Python wrapper */
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } };
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0};
- PyObject* values[4] = {0,0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 40, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 40, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 40, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 40, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- }
- __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 40, __pyx_L3_error)
- __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 40, __pyx_L3_error)
- __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 40, __pyx_L3_error)
- __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 40, __pyx_L3_error)
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 40, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("maximum_path_c", 0);
- __Pyx_XDECREF(__pyx_r);
- if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 40, __pyx_L1_error) }
- if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 40, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 40, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 40, __pyx_L1_error) }
- __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
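The wrapper above is what `monotonic_align.core.maximum_path_c` looks like from Python: four required arguments named paths, values, t_ys and t_xs, each converted to a writable, C-contiguous memoryview (int for paths and the two length vectors, float for values). A hypothetical driver script, with made-up shapes purely for illustration, might look like this:

    import numpy as np
    from monotonic_align.core import maximum_path_c

    # Batch of 2 alignments over at most 10 frames (t_y) and 6 text tokens (t_x).
    # dtypes and C-contiguity must match the memoryview signatures above:
    # int[:, :, ::1] paths, float[:, :, ::1] values, int[::1] t_ys / t_xs.
    values = np.random.randn(2, 10, 6).astype(np.float32)
    paths = np.zeros(values.shape, dtype=np.int32)
    t_ys = np.array([10, 8], dtype=np.int32)
    t_xs = np.array([6, 4], dtype=np.int32)

    maximum_path_c(paths, values, t_ys, t_xs)  # fills `paths` in place with 0/1
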
-/* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_shape = 0;
- Py_ssize_t __pyx_v_itemsize;
- PyObject *__pyx_v_format = 0;
- PyObject *__pyx_v_mode = 0;
- int __pyx_v_allocate_buffer;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0};
- PyObject* values[5] = {0,0,0,0,0};
- values[3] = ((PyObject *)__pyx_n_s_c);
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode);
- if (value) { values[3] = value; kw_args--; }
- }
- CYTHON_FALLTHROUGH;
- case 4:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer);
- if (value) { values[4] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_shape = ((PyObject*)values[0]);
- __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error)
- __pyx_v_format = values[2];
- __pyx_v_mode = values[3];
- if (values[4]) {
- __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error)
- } else {
-
- /* "View.MemoryView":123
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None,
- * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<<
- *
- * cdef int idx
- */
- __pyx_v_allocate_buffer = ((int)1);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error)
- if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) {
- PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error)
- }
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer);
-
- /* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- goto __pyx_L0;
- __pyx_L1_error:;
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) {
- int __pyx_v_idx;
- Py_ssize_t __pyx_v_i;
- Py_ssize_t __pyx_v_dim;
- PyObject **__pyx_v_p;
- char __pyx_v_order;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- char *__pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- Py_ssize_t __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
- __Pyx_INCREF(__pyx_v_format);
-
- /* "View.MemoryView":129
- * cdef PyObject **p
- *
- * self.ndim = <int> len(shape) # <<<<<<<<<<<<<<
- * self.itemsize = itemsize
- *
- */
- if (unlikely(__pyx_v_shape == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
- __PYX_ERR(1, 129, __pyx_L1_error)
- }
- __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error)
- __pyx_v_self->ndim = ((int)__pyx_t_1);
-
- /* "View.MemoryView":130
- *
- * self.ndim = <int> len(shape)
- * self.itemsize = itemsize # <<<<<<<<<<<<<<
- *
- * if not self.ndim:
- */
- __pyx_v_self->itemsize = __pyx_v_itemsize;
-
- /* "View.MemoryView":132
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":133
- *
- * if not self.ndim:
- * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<<
- *
- * if itemsize <= 0:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 133, __pyx_L1_error)
-
- /* "View.MemoryView":132
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":135
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":136
- *
- * if itemsize <= 0:
- * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<<
- *
- * if not isinstance(format, bytes):
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 136, __pyx_L1_error)
-
- /* "View.MemoryView":135
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":138
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- __pyx_t_2 = PyBytes_Check(__pyx_v_format);
- __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":139
- *
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII') # <<<<<<<<<<<<<<
- * self._format = format # keep a reference to the byte string
- * self.format = self._format
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_6)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_6);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- }
- }
- __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII);
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":138
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- }
-
- /* "View.MemoryView":140
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<<
- * self.format = self._format
- *
- */
- if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error)
- __pyx_t_3 = __pyx_v_format;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_3);
- __Pyx_GOTREF(__pyx_v_self->_format);
- __Pyx_DECREF(__pyx_v_self->_format);
- __pyx_v_self->_format = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":141
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- * self.format = self._format # <<<<<<<<<<<<<<
- *
- *
- */
- if (unlikely(__pyx_v_self->_format == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found");
- __PYX_ERR(1, 141, __pyx_L1_error)
- }
- __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error)
- __pyx_v_self->format = __pyx_t_7;
-
- /* "View.MemoryView":144
- *
- *
- * self._shape = <Py_ssize_t *> PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<<
- * self._strides = self._shape + self.ndim
- *
- */
- __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2)));
-
- /* "View.MemoryView":145
- *
- * self._shape = <Py_ssize_t *> PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2)
- * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<<
- *
- * if not self._shape:
- */
- __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim);
-
- /* "View.MemoryView":147
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":148
- *
- * if not self._shape:
- * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 148, __pyx_L1_error)
-
- /* "View.MemoryView":147
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- }
-
- /* "View.MemoryView":151
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- __pyx_t_8 = 0;
- __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0;
- for (;;) {
- if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_9;
- __pyx_v_idx = __pyx_t_8;
- __pyx_t_8 = (__pyx_t_8 + 1);
-
- /* "View.MemoryView":152
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- __pyx_t_4 = ((__pyx_v_dim <= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":153
- * for idx, dim in enumerate(shape):
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<<
- * self._shape[idx] = dim
- *
- */
- __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6);
- __pyx_t_5 = 0;
- __pyx_t_6 = 0;
- __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 153, __pyx_L1_error)
-
- /* "View.MemoryView":152
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- }
-
- /* "View.MemoryView":154
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim # <<<<<<<<<<<<<<
- *
- * cdef char order
- */
- (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim;
-
- /* "View.MemoryView":151
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":157
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error)
- if (__pyx_t_4) {
-
- /* "View.MemoryView":158
- * cdef char order
- * if mode == 'fortran':
- * order = b'F' # <<<<<<<<<<<<<<
- * self.mode = u'fortran'
- * elif mode == 'c':
- */
- __pyx_v_order = 'F';
-
- /* "View.MemoryView":159
- * if mode == 'fortran':
- * order = b'F'
- * self.mode = u'fortran' # <<<<<<<<<<<<<<
- * elif mode == 'c':
- * order = b'C'
- */
- __Pyx_INCREF(__pyx_n_u_fortran);
- __Pyx_GIVEREF(__pyx_n_u_fortran);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_fortran;
-
- /* "View.MemoryView":157
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":160
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error)
- if (likely(__pyx_t_4)) {
-
- /* "View.MemoryView":161
- * self.mode = u'fortran'
- * elif mode == 'c':
- * order = b'C' # <<<<<<<<<<<<<<
- * self.mode = u'c'
- * else:
- */
- __pyx_v_order = 'C';
-
- /* "View.MemoryView":162
- * elif mode == 'c':
- * order = b'C'
- * self.mode = u'c' # <<<<<<<<<<<<<<
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- */
- __Pyx_INCREF(__pyx_n_u_c);
- __Pyx_GIVEREF(__pyx_n_u_c);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_c;
-
- /* "View.MemoryView":160
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":164
- * self.mode = u'c'
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<<
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides,
- */
- /*else*/ {
- __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 164, __pyx_L1_error)
- }
- __pyx_L10:;
-
- /* "View.MemoryView":166
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<<
- * itemsize, self.ndim, order)
- *
- */
- __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order);
-
- /* "View.MemoryView":169
- * itemsize, self.ndim, order)
- *
- * self.free_data = allocate_buffer # <<<<<<<<<<<<<<
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer:
- */
- __pyx_v_self->free_data = __pyx_v_allocate_buffer;
-
- /* "View.MemoryView":170
- *
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<<
- * if allocate_buffer:
- *
- */
- __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error)
- __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_v_self->dtype_is_object = __pyx_t_4;
-
- /* "View.MemoryView":171
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_4 = (__pyx_v_allocate_buffer != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":174
- *
- *
- * self.data = <char *>malloc(self.len) # <<<<<<<<<<<<<<
- * if not self.data:
- * raise MemoryError("unable to allocate array data.")
- */
- __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len));
-
- /* "View.MemoryView":175
- *
- * self.data = <char *>malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":176
- * self.data = <char *>malloc(self.len)
- * if not self.data:
- * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<<
- *
- * if self.dtype_is_object:
- */
- __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 176, __pyx_L1_error)
-
- /* "View.MemoryView":175
- *
- * self.data = <char *>malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- }
-
- /* "View.MemoryView":178
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = <PyObject **> self.data
- * for i in range(self.len / itemsize):
- */
- __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":179
- *
- * if self.dtype_is_object:
- * p = <PyObject **> self.data # <<<<<<<<<<<<<<
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- */
- __pyx_v_p = ((PyObject **)__pyx_v_self->data);
-
- /* "View.MemoryView":180
- * if self.dtype_is_object:
- * p = <PyObject **> self.data
- * for i in range(self.len / itemsize): # <<<<<<<<<<<<<<
- * p[i] = Py_None
- * Py_INCREF(Py_None)
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 180, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 180, __pyx_L1_error)
- }
- __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize);
- __pyx_t_9 = __pyx_t_1;
- for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) {
- __pyx_v_i = __pyx_t_11;
-
- /* "View.MemoryView":181
- * p = <PyObject **> self.data
- * for i in range(self.len / itemsize):
- * p[i] = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- (__pyx_v_p[__pyx_v_i]) = Py_None;
-
- /* "View.MemoryView":182
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- Py_INCREF(Py_None);
- }
-
- /* "View.MemoryView":178
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = <PyObject **> self.data
- * for i in range(self.len / itemsize):
- */
- }
-
- /* "View.MemoryView":171
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- }
-
- /* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_format);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
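- The generated `__cinit__` above validates the shape tuple, allocates a single block for `_shape`/`_strides`, and fills contiguous strides via `fill_contig_strides_array` for either C or Fortran order. Below is a minimal pure-Python sketch of that stride computation and validation; it is illustrative only, not Cython's actual helper.

```python
def contig_strides(shape, itemsize, order="c"):
    """Sketch of what fill_contig_strides_array computes: byte strides for a
    contiguous buffer in C ('c') or Fortran ('fortran') order, plus the total
    length in bytes (which __cinit__ stores as self.len)."""
    for idx, dim in enumerate(shape):
        if dim <= 0:
            raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))

    strides = [0] * len(shape)
    stride = itemsize
    if order == "fortran":                 # first axis varies fastest
        axes = range(len(shape))
    elif order == "c":                     # last axis varies fastest
        axes = reversed(range(len(shape)))
    else:
        raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % order)
    for axis in axes:
        strides[axis] = stride
        stride *= shape[axis]
    return strides, stride

# e.g. a (2, 3) array of 8-byte items:
assert contig_strides((2, 3), 8, "c") == ([24, 8], 48)
assert contig_strides((2, 3), 8, "fortran") == ([8, 16], 48)
```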
-
-/* "View.MemoryView":185
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_v_bufmode;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- char *__pyx_t_4;
- Py_ssize_t __pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":186
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1 # <<<<<<<<<<<<<<
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = -1;
-
- /* "View.MemoryView":187
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error)
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":188
- * cdef int bufmode = -1
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":187
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":189
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error)
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":190
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- */
- __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":189
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":191
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":192
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<<
- * info.buf = self.data
- * info.len = self.len
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 192, __pyx_L1_error)
-
- /* "View.MemoryView":191
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- }
-
- /* "View.MemoryView":193
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data # <<<<<<<<<<<<<<
- * info.len = self.len
- * info.ndim = self.ndim
- */
- __pyx_t_4 = __pyx_v_self->data;
- __pyx_v_info->buf = __pyx_t_4;
-
- /* "View.MemoryView":194
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- * info.len = self.len # <<<<<<<<<<<<<<
- * info.ndim = self.ndim
- * info.shape = self._shape
- */
- __pyx_t_5 = __pyx_v_self->len;
- __pyx_v_info->len = __pyx_t_5;
-
- /* "View.MemoryView":195
- * info.buf = self.data
- * info.len = self.len
- * info.ndim = self.ndim # <<<<<<<<<<<<<<
- * info.shape = self._shape
- * info.strides = self._strides
- */
- __pyx_t_6 = __pyx_v_self->ndim;
- __pyx_v_info->ndim = __pyx_t_6;
-
- /* "View.MemoryView":196
- * info.len = self.len
- * info.ndim = self.ndim
- * info.shape = self._shape # <<<<<<<<<<<<<<
- * info.strides = self._strides
- * info.suboffsets = NULL
- */
- __pyx_t_7 = __pyx_v_self->_shape;
- __pyx_v_info->shape = __pyx_t_7;
-
- /* "View.MemoryView":197
- * info.ndim = self.ndim
- * info.shape = self._shape
- * info.strides = self._strides # <<<<<<<<<<<<<<
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- */
- __pyx_t_7 = __pyx_v_self->_strides;
- __pyx_v_info->strides = __pyx_t_7;
-
- /* "View.MemoryView":198
- * info.shape = self._shape
- * info.strides = self._strides
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- * info.itemsize = self.itemsize
- * info.readonly = 0
- */
- __pyx_v_info->suboffsets = NULL;
-
- /* "View.MemoryView":199
- * info.strides = self._strides
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize # <<<<<<<<<<<<<<
- * info.readonly = 0
- *
- */
- __pyx_t_5 = __pyx_v_self->itemsize;
- __pyx_v_info->itemsize = __pyx_t_5;
-
- /* "View.MemoryView":200
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- * info.readonly = 0 # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- __pyx_v_info->readonly = 0;
-
- /* "View.MemoryView":202
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":203
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_4 = __pyx_v_self->format;
- __pyx_v_info->format = __pyx_t_4;
-
- /* "View.MemoryView":202
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":205
- * info.format = self.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.obj = self
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":207
- * info.format = NULL
- *
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule(<void *> &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":185
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
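- `__getbuffer__` above exports the array through the `Py_buffer` struct: `buf`, `len`, `ndim`, `shape`, `strides`, `suboffsets`, `itemsize`, `readonly`, `format` and `obj`, raising `ValueError` unless the requested flags accept a contiguous buffer. From Python, the same fields are visible as attributes of the built-in `memoryview`; a small standard-library sketch:

```python
import struct

data = bytearray(struct.pack("4i", 1, 2, 3, 4))   # four C ints in a writable buffer
m = memoryview(data).cast("i")                    # reinterpret the bytes as int items

# These attributes mirror the Py_buffer fields filled in by __getbuffer__ above.
print(m.ndim, m.shape, m.strides)       # 1 (4,) (4,)  -- on a typical platform
print(m.format, m.itemsize, m.nbytes)   # 'i' 4 16
print(m.readonly)                       # False, like info.readonly = 0
```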
-
-/* "View.MemoryView":211
- * __pyx_getbuffer = capsule(<void *> &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
-/* Python wrapper */
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":212
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":213
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data) # <<<<<<<<<<<<<<
- * elif self.free_data:
- * if self.dtype_is_object:
- */
- __pyx_v_self->callback_free_data(__pyx_v_self->data);
-
- /* "View.MemoryView":212
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":214
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- __pyx_t_1 = (__pyx_v_self->free_data != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":215
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":216
- * elif self.free_data:
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<<
- * self._strides, self.ndim, False)
- * free(self.data)
- */
- __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0);
-
- /* "View.MemoryView":215
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- }
-
- /* "View.MemoryView":218
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- * free(self.data) # <<<<<<<<<<<<<<
- * PyObject_Free(self._shape)
- *
- */
- free(__pyx_v_self->data);
-
- /* "View.MemoryView":214
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":219
- * self._strides, self.ndim, False)
- * free(self.data)
- * PyObject_Free(self._shape) # <<<<<<<<<<<<<<
- *
- * @property
- */
- PyObject_Free(__pyx_v_self->_shape);
-
- /* "View.MemoryView":211
- * __pyx_getbuffer = capsule(<void *> &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
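- The generated `__dealloc__` follows a simple ownership policy: a user-installed `callback_free_data` takes precedence; otherwise the buffer is freed only when `free_data` is set, with object buffers first dropping the references they hold (`refcount_objects_in_slice(..., False)`); the `_shape`/`_strides` block is always released with `PyObject_Free`. A plain-Python sketch of that decision tree, with hypothetical stand-in callables for the C functions:

```python
def dealloc(data, shape_block, *, callback_free_data=None, free_data=True,
            dtype_is_object=False,
            release_object_refs=lambda buf: None,   # stand-in for refcount_objects_in_slice
            free=lambda buf: None,                  # stand-in for C free()
            pyobject_free=lambda buf: None):        # stand-in for PyObject_Free()
    if callback_free_data is not None:
        callback_free_data(data)        # an external owner decides how to release the data
    elif free_data:
        if dtype_is_object:
            release_object_refs(data)   # drop the object references held in the buffer
        free(data)
    pyobject_free(shape_block)          # shape and strides share one allocation
```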
-
-/* "View.MemoryView":222
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":223
- * @property
- * def memview(self):
- * return self.get_memview() # <<<<<<<<<<<<<<
- *
- * @cname('get_memview')
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":222
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":226
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) {
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_memview", 0);
-
- /* "View.MemoryView":227
- * @cname('get_memview')
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<<
- * return memoryview(self, flags, self.dtype_is_object)
- *
- */
- __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE);
-
- /* "View.MemoryView":228
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":226
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
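- `get_memview` requests `PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE`, i.e. a writable view that is contiguous in either C or Fortran order and carries format information. The built-in `memoryview` exposes the matching contiguity queries; a quick sketch:

```python
buf = bytearray(range(16))
m = memoryview(buf)

print(m.contiguous, m.c_contiguous, m.f_contiguous)   # True True True for a 1-D buffer
print(m[::2].contiguous)                              # False: a strided slice is not contiguous
```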
-
-/* "View.MemoryView":230
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":231
- *
- * def __len__(self):
- * return self._shape[0] # <<<<<<<<<<<<<<
- *
- * def __getattr__(self, attr):
- */
- __pyx_r = (__pyx_v_self->_shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":230
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":233
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getattr__", 0);
-
- /* "View.MemoryView":234
- *
- * def __getattr__(self, attr):
- * return getattr(self.memview, attr) # <<<<<<<<<<<<<<
- *
- * def __getitem__(self, item):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":233
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":236
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":237
- *
- * def __getitem__(self, item):
- * return self.memview[item] # <<<<<<<<<<<<<<
- *
- * def __setitem__(self, item, value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":236
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":239
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
-
- /* "View.MemoryView":240
- *
- * def __setitem__(self, item, value):
- * self.memview[item] = value # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "View.MemoryView":239
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
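- `__getattr__`, `__getitem__` and `__setitem__` above simply delegate to `self.memview`, so the array behaves like the memoryview it wraps. A minimal plain-Python sketch of the same delegation pattern (the class name is illustrative):

```python
class MemviewProxy:
    """Forward attribute and item access to a wrapped memoryview."""
    def __init__(self, mv):
        self._mv = mv

    def __getattr__(self, attr):        # e.g. proxy.nbytes, proxy.tobytes()
        return getattr(self._mv, attr)

    def __getitem__(self, item):
        return self._mv[item]

    def __setitem__(self, item, value):
        self._mv[item] = value

p = MemviewProxy(memoryview(bytearray(b"abcd")))
p[0] = ord("z")
print(p.tobytes())   # b'zbcd'
```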
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":244
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
-static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) {
- struct __pyx_array_obj *__pyx_v_result = 0;
- struct __pyx_array_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("array_cwrapper", 0);
-
- /* "View.MemoryView":248
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- __pyx_t_1 = ((__pyx_v_buf == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":249
- *
- * if buf == NULL:
- * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<<
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- */
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":248
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":251
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- /*else*/ {
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3);
- __pyx_t_4 = 0;
- __pyx_t_5 = 0;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":252
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False) # <<<<<<<<<<<<<<
- * result.data = buf
- *
- */
- __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error)
-
- /* "View.MemoryView":251
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5);
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":253
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False)
- * result.data = buf # <<<<<<<<<<<<<<
- *
- * return result
- */
- __pyx_v_result->data = __pyx_v_buf;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":255
- * result.data = buf
- *
- * return result # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = __pyx_v_result;
- goto __pyx_L0;
-
- /* "View.MemoryView":244
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
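- `array_cwrapper` (exposed to C as `__pyx_array_new`) chooses between two construction paths: with `buf == NULL` the array allocates its own storage, otherwise it is built with `allocate_buffer=False` and pointed at the caller's buffer. A small plain-Python analogue of the two paths; the function and names here are hypothetical, not part of Cython's API:

```python
def make_array(nbytes, buf=None):
    """Either allocate fresh storage or wrap an existing buffer without copying."""
    if buf is None:
        data = bytearray(nbytes)    # analogue of allocate_buffer=True plus malloc
        owns_data = True
    else:
        data = memoryview(buf)      # analogue of allocate_buffer=False; result.data = buf
        owns_data = False
    return data, owns_data

fresh, owns = make_array(8)
wrapped, owns2 = make_array(8, buf=bytearray(b"12345678"))
print(owns, owns2)   # True False
```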
-
-/* "View.MemoryView":281
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
-/* Python wrapper */
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_name = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0};
- PyObject* values[1] = {0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 1) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- }
- __pyx_v_name = values[0];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__", 0);
-
- /* "View.MemoryView":282
- * cdef object name
- * def __init__(self, name):
- * self.name = name # <<<<<<<<<<<<<<
- * def __repr__(self):
- * return self.name
- */
- __Pyx_INCREF(__pyx_v_name);
- __Pyx_GIVEREF(__pyx_v_name);
- __Pyx_GOTREF(__pyx_v_self->name);
- __Pyx_DECREF(__pyx_v_self->name);
- __pyx_v_self->name = __pyx_v_name;
-
- /* "View.MemoryView":281
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
- /* function exit code */
- __pyx_r = 0;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":283
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":284
- * self.name = name
- * def __repr__(self):
- * return self.name # <<<<<<<<<<<<<<
- *
- * cdef generic = Enum("")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->name);
- __pyx_r = __pyx_v_self->name;
- goto __pyx_L0;
-
- /* "View.MemoryView":283
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_v_state = 0;
- PyObject *__pyx_v__dict = 0;
- int __pyx_v_use_setstate;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":5
- * cdef object _dict
- * cdef bint use_setstate
- * state = (self.name,) # <<<<<<<<<<<<<<
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v_self->name);
- __Pyx_GIVEREF(__pyx_v_self->name);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name);
- __pyx_v_state = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":6
- * cdef bint use_setstate
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
- * if _dict is not None:
- * state += (_dict,)
- */
- __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v__dict = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- __pyx_t_2 = (__pyx_v__dict != Py_None);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":8
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- * state += (_dict,) # <<<<<<<<<<<<<<
- * use_setstate = True
- * else:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v__dict);
- __Pyx_GIVEREF(__pyx_v__dict);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict);
- __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4));
- __pyx_t_4 = 0;
-
- /* "(tree fragment)":9
- * if _dict is not None:
- * state += (_dict,)
- * use_setstate = True # <<<<<<<<<<<<<<
- * else:
- * use_setstate = self.name is not None
- */
- __pyx_v_use_setstate = 1;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- goto __pyx_L3;
- }
-
- /* "(tree fragment)":11
- * use_setstate = True
- * else:
- * use_setstate = self.name is not None # <<<<<<<<<<<<<<
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_self->name != Py_None);
- __pyx_v_use_setstate = __pyx_t_3;
- }
- __pyx_L3:;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- __pyx_t_3 = (__pyx_v_use_setstate != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":13
- * use_setstate = self.name is not None
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<<
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None);
- __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state);
- __pyx_t_4 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- }
-
- /* "(tree fragment)":15
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
- __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);
- __pyx_t_5 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L0;
- }
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_state);
- __Pyx_XDECREF(__pyx_v__dict);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":17
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<<
- */
- if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error)
- __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":298
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
-static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) {
- Py_intptr_t __pyx_v_aligned_p;
- size_t __pyx_v_offset;
- void *__pyx_r;
- int __pyx_t_1;
-
- /* "View.MemoryView":300
- * cdef void *align_pointer(void *memory, size_t alignment) nogil:
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<<
- * cdef size_t offset
- *
- */
- __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory);
-
- /* "View.MemoryView":304
- *
- * with cython.cdivision(True):
- * offset = aligned_p % alignment # <<<<<<<<<<<<<<
- *
- * if offset > 0:
- */
- __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment);
-
- /* "View.MemoryView":306
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- __pyx_t_1 = ((__pyx_v_offset > 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":307
- *
- * if offset > 0:
- * aligned_p += alignment - offset # <<<<<<<<<<<<<<
- *
- * return aligned_p
- */
- __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset));
-
- /* "View.MemoryView":306
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- }
-
- /* "View.MemoryView":309
- * aligned_p += alignment - offset
- *
- * return aligned_p # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = ((void *)__pyx_v_aligned_p);
- goto __pyx_L0;
-
- /* "View.MemoryView":298
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":345
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_obj = 0;
- int __pyx_v_flags;
- int __pyx_v_dtype_is_object;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0};
- PyObject* values[3] = {0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object);
- if (value) { values[2] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_obj = values[0];
- __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error)
- if (values[2]) {
- __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error)
- } else {
- __pyx_v_dtype_is_object = ((int)0);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
-
- /* "View.MemoryView":346
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj # <<<<<<<<<<<<<<
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- */
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- __Pyx_GOTREF(__pyx_v_self->obj);
- __Pyx_DECREF(__pyx_v_self->obj);
- __pyx_v_self->obj = __pyx_v_obj;
-
- /* "View.MemoryView":347
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj
- * self.flags = flags # <<<<<<<<<<<<<<
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- */
- __pyx_v_self->flags = __pyx_v_flags;
-
- /* "View.MemoryView":348
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type));
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (!__pyx_t_3) {
- } else {
- __pyx_t_1 = __pyx_t_3;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_3 = (__pyx_v_obj != Py_None);
- __pyx_t_2 = (__pyx_t_3 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (__pyx_t_1) {
-
- /* "View.MemoryView":349
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<<
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- */
- __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error)
-
- /* "View.MemoryView":350
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":351
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None;
-
- /* "View.MemoryView":352
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * global __pyx_memoryview_thread_locks_used
- */
- Py_INCREF(Py_None);
-
- /* "View.MemoryView":350
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- }
-
- /* "View.MemoryView":348
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- }
-
- /* "View.MemoryView":355
- *
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":356
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- */
- __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
-
- /* "View.MemoryView":357
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1);
-
- /* "View.MemoryView":355
- *
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- }
-
- /* "View.MemoryView":358
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":359
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * raise MemoryError
- */
- __pyx_v_self->lock = PyThread_allocate_lock();
-
- /* "View.MemoryView":360
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":361
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error)
-
- /* "View.MemoryView":360
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- }
-
- /* "View.MemoryView":358
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- }
-
- /* "View.MemoryView":363
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":364
- *
- * if flags & PyBUF_FORMAT:
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<<
- * else:
- * self.dtype_is_object = dtype_is_object
- */
- __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L11_bool_binop_done:;
- __pyx_v_self->dtype_is_object = __pyx_t_1;
-
- /* "View.MemoryView":363
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":366
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<<
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- */
- /*else*/ {
- __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object;
- }
- __pyx_L10:;
-
- /* "View.MemoryView":368
- * self.dtype_is_object = dtype_is_object
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<<
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL
- */
- __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int))));
-
- /* "View.MemoryView":370
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL # <<<<<<<<<<<<<<
- *
- * def __dealloc__(memoryview self):
- */
- __pyx_v_self->typeinfo = NULL;
-
- /* "View.MemoryView":345
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":372
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
-/* Python wrapper */
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) {
- int __pyx_v_i;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- PyThread_type_lock __pyx_t_6;
- PyThread_type_lock __pyx_t_7;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":373
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- __pyx_t_1 = (__pyx_v_self->obj != Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":374
- * def __dealloc__(memoryview self):
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<<
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- */
- __Pyx_ReleaseBuffer((&__pyx_v_self->view));
-
- /* "View.MemoryView":373
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":375
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":377
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<<
- * Py_DECREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL;
-
- /* "View.MemoryView":378
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- * Py_DECREF(Py_None) # <<<<<<<<<<<<<<
- *
- * cdef int i
- */
- Py_DECREF(Py_None);
-
- /* "View.MemoryView":375
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":382
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":383
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<<
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- */
- __pyx_t_3 = __pyx_memoryview_thread_locks_used;
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
- __pyx_v_i = __pyx_t_5;
-
- /* "View.MemoryView":384
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":385
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<<
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1);
-
- /* "View.MemoryView":386
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":388
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<<
- * break
- * else:
- */
- __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
- __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]);
-
- /* "View.MemoryView":387
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break
- */
- (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6;
- (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7;
-
- /* "View.MemoryView":386
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- }
-
- /* "View.MemoryView":389
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break # <<<<<<<<<<<<<<
- * else:
- * PyThread_free_lock(self.lock)
- */
- goto __pyx_L6_break;
-
- /* "View.MemoryView":384
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- }
- }
- /*else*/ {
-
- /* "View.MemoryView":391
- * break
- * else:
- * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<<
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- */
- PyThread_free_lock(__pyx_v_self->lock);
- }
- __pyx_L6_break:;
-
- /* "View.MemoryView":382
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- }
-
- /* "View.MemoryView":372
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":393
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf
- */
-
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- Py_ssize_t __pyx_v_dim;
- char *__pyx_v_itemp;
- PyObject *__pyx_v_idx = NULL;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t __pyx_t_3;
- PyObject *(*__pyx_t_4)(PyObject *);
- PyObject *__pyx_t_5 = NULL;
- Py_ssize_t __pyx_t_6;
- char *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_item_pointer", 0);
-
- /* "View.MemoryView":395
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<<
- *
- * for dim, idx in enumerate(index):
- */
- __pyx_v_itemp = ((char *)__pyx_v_self->view.buf);
-
- /* "View.MemoryView":397
- * cdef char *itemp = self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- __pyx_t_1 = 0;
- if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) {
- __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0;
- __pyx_t_4 = NULL;
- } else {
- __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_4)) {
- if (likely(PyList_CheckExact(__pyx_t_2))) {
- if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- } else {
- if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- }
- } else {
- __pyx_t_5 = __pyx_t_4(__pyx_t_2);
- if (unlikely(!__pyx_t_5)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 397, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_5);
- }
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5);
- __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_1;
- __pyx_t_1 = (__pyx_t_1 + 1);
-
- /* "View.MemoryView":398
- *
- * for dim, idx in enumerate(index):
- * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<<
- *
- * return itemp
- */
- __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error)
- __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_7;
-
- /* "View.MemoryView":397
- * cdef char *itemp = self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- }
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":400
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- * return itemp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_itemp;
- goto __pyx_L0;
-
- /* "View.MemoryView":393
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":403
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_indices = NULL;
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- char *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":404
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":405
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis:
- * return self # <<<<<<<<<<<<<<
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __pyx_r = ((PyObject *)__pyx_v_self);
- goto __pyx_L0;
-
- /* "View.MemoryView":404
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- }
-
- /* "View.MemoryView":407
- * return self
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * cdef char *itemp
- */
- __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (likely(__pyx_t_3 != Py_None)) {
- PyObject* sequence = __pyx_t_3;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 407, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_4);
- __Pyx_INCREF(__pyx_t_5);
- #else
- __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_v_indices = __pyx_t_5;
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":410
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error)
- if (__pyx_t_2) {
-
- /* "View.MemoryView":411
- * cdef char *itemp
- * if have_slices:
- * return memview_slice(self, indices) # <<<<<<<<<<<<<<
- * else:
- * itemp = self.get_item_pointer(indices)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":410
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- }
-
- /* "View.MemoryView":413
- * return memview_slice(self, indices)
- * else:
- * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<<
- * return self.convert_item_to_object(itemp)
- *
- */
- /*else*/ {
- __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_6;
-
- /* "View.MemoryView":414
- * else:
- * itemp = self.get_item_pointer(indices)
- * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<<
- *
- * def __setitem__(memoryview self, object index, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":403
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_indices);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":416
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_obj = NULL;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
- __Pyx_INCREF(__pyx_v_index);
-
- /* "View.MemoryView":417
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- __pyx_t_1 = (__pyx_v_self->view.readonly != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":418
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<<
- *
- * have_slices, index = _unellipsify(index, self.view.ndim)
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 418, __pyx_L1_error)
-
- /* "View.MemoryView":417
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":420
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * if have_slices:
- */
- __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (likely(__pyx_t_2 != Py_None)) {
- PyObject* sequence = __pyx_t_2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 420, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_4);
- #else
- __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- #endif
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_3;
- __pyx_t_3 = 0;
- __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":422
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":423
- *
- * if have_slices:
- * obj = self.is_slice(value) # <<<<<<<<<<<<<<
- * if obj:
- * self.setitem_slice_assignment(self[index], obj)
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_v_obj = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":424
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":425
- * obj = self.is_slice(value)
- * if obj:
- * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_slice_assign_scalar(self[index], value)
- */
- __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
-
- /* "View.MemoryView":424
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":427
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_indexed(index, value)
- */
- /*else*/ {
- __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error)
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":422
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":429
- * self.setitem_slice_assign_scalar(self[index], value)
- * else:
- * self.setitem_indexed(index, value) # <<<<<<<<<<<<<<
- *
- * cdef is_slice(self, obj):
- */
- /*else*/ {
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L4:;
-
- /* "View.MemoryView":416
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":431
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_slice", 0);
- __Pyx_INCREF(__pyx_v_obj);
-
- /* "View.MemoryView":432
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_5);
- /*try:*/ {
-
- /* "View.MemoryView":434
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":435
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object) # <<<<<<<<<<<<<<
- * except TypeError:
- * return None
- */
- __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
-
- /* "View.MemoryView":434
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6);
- __Pyx_GIVEREF(__pyx_t_7);
- PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7);
- __pyx_t_6 = 0;
- __pyx_t_7 = 0;
- __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7);
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- }
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L9_try_end;
- __pyx_L4_error:;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
-
- /* "View.MemoryView":436
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- * except TypeError: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError);
- if (__pyx_t_9) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":437
- * self.dtype_is_object)
- * except TypeError:
- * return None # <<<<<<<<<<<<<<
- *
- * return obj
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- goto __pyx_L7_except_return;
- }
- goto __pyx_L6_except_error;
- __pyx_L6_except_error:;
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L1_error;
- __pyx_L7_except_return:;
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L0;
- __pyx_L9_try_end:;
- }
-
- /* "View.MemoryView":432
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- }
-
- /* "View.MemoryView":439
- * return None
- *
- * return obj # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assignment(self, dst, src):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_obj);
- __pyx_r = __pyx_v_obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":431
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":441
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) {
- __Pyx_memviewslice __pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_src_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- __Pyx_memviewslice *__pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- int __pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assignment", 0);
-
- /* "View.MemoryView":445
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error)
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error)
-
- /* "View.MemoryView":446
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<<
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error)
- __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error)
-
- /* "View.MemoryView":447
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":445
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error)
-
- /* "View.MemoryView":441
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":449
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) {
- int __pyx_v_array[0x80];
- void *__pyx_v_tmp;
- void *__pyx_v_item;
- __Pyx_memviewslice *__pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_tmp_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- char const *__pyx_t_6;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- PyObject *__pyx_t_9 = NULL;
- PyObject *__pyx_t_10 = NULL;
- PyObject *__pyx_t_11 = NULL;
- PyObject *__pyx_t_12 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0);
-
- /* "View.MemoryView":451
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- * cdef int array[128]
- * cdef void *tmp = NULL # <<<<<<<<<<<<<<
- * cdef void *item
- *
- */
- __pyx_v_tmp = NULL;
-
- /* "View.MemoryView":456
- * cdef __Pyx_memviewslice *dst_slice
- * cdef __Pyx_memviewslice tmp_slice
- * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<<
- *
- * if self.view.itemsize > sizeof(array):
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error)
- __pyx_v_dst_slice = __pyx_t_1;
-
- /* "View.MemoryView":458
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":459
- *
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<<
- * if tmp == NULL:
- * raise MemoryError
- */
- __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize);
-
- /* "View.MemoryView":460
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":461
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- * item = tmp
- * else:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error)
-
- /* "View.MemoryView":460
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- }
-
- /* "View.MemoryView":462
- * if tmp == NULL:
- * raise MemoryError
- * item = tmp # <<<<<<<<<<<<<<
- * else:
- * item = &lt;void *&gt; array
- */
- __pyx_v_item = __pyx_v_tmp;
-
- /* "View.MemoryView":458
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":464
- * item = tmp
- * else:
- * item = &lt;void *&gt; array # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;
- *
- * try:
- */
- /*else*/ {
- __pyx_v_item = ((void *)__pyx_v_array);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":466
- * item = &lt;void *&gt; array
- *
- * try: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * (&lt;PyObject **&gt; item)[0] = &lt;PyObject *&gt; value
- */
- /*try:*/ {
-
- /* "View.MemoryView":467
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (&lt;PyObject **&gt; item)[0] = &lt;PyObject *&gt; value
- * else:
- */
- __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":468
- * try:
- * if self.dtype_is_object:
- * (&lt;PyObject **&gt; item)[0] = &lt;PyObject *&gt; value # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;
- * else:
- * self.assign_item_from_object(&lt;char *&gt; item, value)
- */
- (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value);
-
- /* "View.MemoryView":467
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (&lt;PyObject **&gt; item)[0] = &lt;PyObject *&gt; value
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":470
- * (&lt;PyObject **&gt; item)[0] = &lt;PyObject *&gt; value
- * else:
- * self.assign_item_from_object(&lt;char *&gt; item, value) # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;
- *
- *
- */
- /*else*/ {
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":474
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":475
- *
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<<
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- * item, self.dtype_is_object)
- */
- __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":474
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- }
-
- /* "View.MemoryView":476
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<<
- * item, self.dtype_is_object)
- * finally:
- */
- __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object);
- }
-
- /* "View.MemoryView":479
- * item, self.dtype_is_object)
- * finally:
- * PyMem_Free(tmp) # <<<<<<<<<<<<<<
- *
- * cdef setitem_indexed(self, index, value):
- */
- /*finally:*/ {
- /*normal exit:*/{
- PyMem_Free(__pyx_v_tmp);
- goto __pyx_L7;
- }
- __pyx_L6_error:;
- /*exception exit:*/{
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12);
- if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_7);
- __Pyx_XGOTREF(__pyx_t_8);
- __Pyx_XGOTREF(__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_10);
- __Pyx_XGOTREF(__pyx_t_11);
- __Pyx_XGOTREF(__pyx_t_12);
- __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename;
- {
- PyMem_Free(__pyx_v_tmp);
- }
- if (PY_MAJOR_VERSION >= 3) {
- __Pyx_XGIVEREF(__pyx_t_10);
- __Pyx_XGIVEREF(__pyx_t_11);
- __Pyx_XGIVEREF(__pyx_t_12);
- __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12);
- }
- __Pyx_XGIVEREF(__pyx_t_7);
- __Pyx_XGIVEREF(__pyx_t_8);
- __Pyx_XGIVEREF(__pyx_t_9);
- __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9);
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6;
- goto __pyx_L1_error;
- }
- __pyx_L7:;
- }
-
- /* "View.MemoryView":449
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
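
`setitem_slice_assign_scalar` above packs the scalar once into a temporary item buffer (the 128-int stack array, or a PyMem_Malloc'd block when the itemsize is larger) and then broadcasts that packed item over the destination slice via `slice_assign_scalar`. A rough Python sketch of the pack-once-then-fill idea for a 1-D contiguous destination; the `fmt` format string here is an assumption for the example, not something read from the view:

    import struct

    def assign_scalar(dst: memoryview, value, fmt: str = "i"):
        item = struct.pack(fmt, value)        # plays the role of tmp/array above
        itemsize = struct.calcsize(fmt)
        raw = dst.cast("B")                   # byte-level view of the destination
        for off in range(0, len(raw), itemsize):
            raw[off:off + itemsize] = item    # broadcast the packed item

    buf = bytearray(4 * 5)
    assign_scalar(memoryview(buf).cast("i"), 7)
    assert list(memoryview(buf).cast("i")) == [7] * 5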
-
-/* "View.MemoryView":481
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- char *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_indexed", 0);
-
- /* "View.MemoryView":482
- *
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<<
- * self.assign_item_from_object(itemp, value)
- *
- */
- __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_1;
-
- /* "View.MemoryView":483
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":481
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
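
`setitem_indexed` above resolves the element address with `get_item_pointer` (a sum of index times stride) and hands the write to `assign_item_from_object`. A hypothetical Python equivalent of that offset-and-pack step for a C-contiguous 2x3 int buffer; the strides and format are assumptions chosen for the example:

    import struct

    def set_item(buf: bytearray, strides, index, value, fmt: str = "i"):
        # Offset computation mirrors get_item_pointer: sum(index[k] * strides[k]).
        offset = sum(i * s for i, s in zip(index, strides))
        struct.pack_into(fmt, buf, offset, value)   # the assign step

    data = bytearray(4 * 2 * 3)                     # 2x3 C-contiguous ints
    set_item(data, strides=(12, 4), index=(1, 2), value=42)
    assert struct.unpack_from("i", data, 1 * 12 + 2 * 4)[0] == 42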
-
-/* "View.MemoryView":485
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_v_struct = NULL;
- PyObject *__pyx_v_bytesitem = 0;
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- int __pyx_t_8;
- PyObject *__pyx_t_9 = NULL;
- size_t __pyx_t_10;
- int __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":488
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef bytes bytesitem
- *
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":491
- * cdef bytes bytesitem
- *
- * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<<
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- */
- __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_bytesitem = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_2);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- /*try:*/ {
-
- /* "View.MemoryView":493
- * bytesitem = itemp[:self.view.itemsize]
- * try:
- * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<<
- * except struct.error:
- * raise ValueError("Unable to convert item to object")
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_7 = NULL;
- __pyx_t_8 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_7)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_7);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- __pyx_t_8 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- {
- __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__pyx_t_7) {
- __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6);
- __Pyx_INCREF(__pyx_v_bytesitem);
- __Pyx_GIVEREF(__pyx_v_bytesitem);
- PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem);
- __pyx_t_6 = 0;
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- }
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- }
-
- /* "View.MemoryView":497
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- /*else:*/ {
- __pyx_t_10 = strlen(__pyx_v_self->view.format);
- __pyx_t_11 = ((__pyx_t_10 == 1) != 0);
- if (__pyx_t_11) {
-
- /* "View.MemoryView":498
- * else:
- * if len(self.view.format) == 1:
- * return result[0] # <<<<<<<<<<<<<<
- * return result
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L6_except_return;
-
- /* "View.MemoryView":497
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- }
-
- /* "View.MemoryView":499
- * if len(self.view.format) == 1:
- * return result[0]
- * return result # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_result);
- __pyx_r = __pyx_v_result;
- goto __pyx_L6_except_return;
- }
- __pyx_L3_error:;
- __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "View.MemoryView":494
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error: # <<<<<<<<<<<<<<
- * raise ValueError("Unable to convert item to object")
- * else:
- */
- __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9);
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9);
- __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0;
- if (__pyx_t_8) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "View.MemoryView":495
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<<
- * else:
- * if len(self.view.format) == 1:
- */
- __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_Raise(__pyx_t_6, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __PYX_ERR(1, 495, __pyx_L5_except_error)
- }
- goto __pyx_L5_except_error;
- __pyx_L5_except_error:;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L1_error;
- __pyx_L6_except_return:;
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":485
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesitem);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
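
`convert_item_to_object` is the slow fallback used when Cython does not know how to convert the element type natively: it takes `itemsize` bytes at the item pointer, runs them through `struct.unpack` with the view's format string, and collapses single-character formats to a scalar. The same logic restated in plain Python:

    import struct

    def convert_item_to_object(item_bytes: bytes, fmt: str):
        try:
            result = struct.unpack(fmt, item_bytes)
        except struct.error:
            raise ValueError("Unable to convert item to object")
        # A one-character format yields a single value, as in the code above.
        return result[0] if len(fmt) == 1 else result

    assert convert_item_to_object(struct.pack("i", 5), "i") == 5
    assert convert_item_to_object(struct.pack("ii", 1, 2), "ii") == (1, 2)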
-
-/* "View.MemoryView":501
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_struct = NULL;
- char __pyx_v_c;
- PyObject *__pyx_v_bytesvalue = 0;
- Py_ssize_t __pyx_v_i;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_t_7;
- PyObject *__pyx_t_8 = NULL;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- char *__pyx_t_11;
- char *__pyx_t_12;
- char *__pyx_t_13;
- char *__pyx_t_14;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":504
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef char c
- * cdef bytes bytesvalue
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":509
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- __pyx_t_2 = PyTuple_Check(__pyx_v_value);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "View.MemoryView":510
- *
- * if isinstance(value, tuple):
- * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<<
- * else:
- * bytesvalue = struct.pack(self.view.format, value)
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":509
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":512
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<<
- *
- * for i, c in enumerate(bytesvalue):
- */
- /*else*/ {
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_5 = NULL;
- __pyx_t_7 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_6, function);
- __pyx_t_7 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- {
- __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- if (__pyx_t_5) {
- __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_value);
- __Pyx_GIVEREF(__pyx_v_value);
- PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value);
- __pyx_t_1 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- }
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":514
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = 0;
- if (unlikely(__pyx_v_bytesvalue == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable");
- __PYX_ERR(1, 514, __pyx_L1_error)
- }
- __Pyx_INCREF(__pyx_v_bytesvalue);
- __pyx_t_10 = __pyx_v_bytesvalue;
- __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10);
- __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10));
- for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) {
- __pyx_t_11 = __pyx_t_14;
- __pyx_v_c = (__pyx_t_11[0]);
-
- /* "View.MemoryView":515
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- __pyx_v_i = __pyx_t_9;
-
- /* "View.MemoryView":514
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = (__pyx_t_9 + 1);
-
- /* "View.MemoryView":515
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c;
- }
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
-
- /* "View.MemoryView":501
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesvalue);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
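
`assign_item_from_object` is the matching write-side fallback: it packs the value with `struct.pack` (splatting tuples so multi-field formats work), then copies the packed bytes into the item position byte by byte. A plain-Python sketch of that behaviour, writing into a bytearray at an assumed offset:

    import struct

    def assign_item_from_object(buf: bytearray, offset: int, fmt: str, value):
        if isinstance(value, tuple):
            packed = struct.pack(fmt, *value)   # multi-field formats take a tuple
        else:
            packed = struct.pack(fmt, value)
        buf[offset:offset + len(packed)] = packed   # the byte-copy loop above

    data = bytearray(8)
    assign_item_from_object(data, 0, "ii", (3, 4))
    assert struct.unpack_from("ii", data, 0) == (3, 4)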
-
-/* "View.MemoryView":518
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- char *__pyx_t_5;
- void *__pyx_t_6;
- int __pyx_t_7;
- Py_ssize_t __pyx_t_8;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":519
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_2 = (__pyx_v_self->view.readonly != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":520
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_ND:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 520, __pyx_L1_error)
-
- /* "View.MemoryView":519
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":522
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":523
- *
- * if flags & PyBUF_ND:
- * info.shape = self.view.shape # <<<<<<<<<<<<<<
- * else:
- * info.shape = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.shape;
- __pyx_v_info->shape = __pyx_t_4;
-
- /* "View.MemoryView":522
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":525
- * info.shape = self.view.shape
- * else:
- * info.shape = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_STRIDES:
- */
- /*else*/ {
- __pyx_v_info->shape = NULL;
- }
- __pyx_L6:;
-
- /* "View.MemoryView":527
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":528
- *
- * if flags & PyBUF_STRIDES:
- * info.strides = self.view.strides # <<<<<<<<<<<<<<
- * else:
- * info.strides = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.strides;
- __pyx_v_info->strides = __pyx_t_4;
-
- /* "View.MemoryView":527
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":530
- * info.strides = self.view.strides
- * else:
- * info.strides = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_INDIRECT:
- */
- /*else*/ {
- __pyx_v_info->strides = NULL;
- }
- __pyx_L7:;
-
- /* "View.MemoryView":532
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":533
- *
- * if flags & PyBUF_INDIRECT:
- * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<<
- * else:
- * info.suboffsets = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.suboffsets;
- __pyx_v_info->suboffsets = __pyx_t_4;
-
- /* "View.MemoryView":532
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":535
- * info.suboffsets = self.view.suboffsets
- * else:
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- /*else*/ {
- __pyx_v_info->suboffsets = NULL;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":537
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":538
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.view.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_5 = __pyx_v_self->view.format;
- __pyx_v_info->format = __pyx_t_5;
-
- /* "View.MemoryView":537
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "View.MemoryView":540
- * info.format = self.view.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.buf = self.view.buf
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L9:;
-
- /* "View.MemoryView":542
- * info.format = NULL
- *
- * info.buf = self.view.buf # <<<<<<<<<<<<<<
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- */
- __pyx_t_6 = __pyx_v_self->view.buf;
- __pyx_v_info->buf = __pyx_t_6;
-
- /* "View.MemoryView":543
- *
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim # <<<<<<<<<<<<<<
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- */
- __pyx_t_7 = __pyx_v_self->view.ndim;
- __pyx_v_info->ndim = __pyx_t_7;
-
- /* "View.MemoryView":544
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<<
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- */
- __pyx_t_8 = __pyx_v_self->view.itemsize;
- __pyx_v_info->itemsize = __pyx_t_8;
-
- /* "View.MemoryView":545
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len # <<<<<<<<<<<<<<
- * info.readonly = self.view.readonly
- * info.obj = self
- */
- __pyx_t_8 = __pyx_v_self->view.len;
- __pyx_v_info->len = __pyx_t_8;
-
- /* "View.MemoryView":546
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- * info.readonly = self.view.readonly # <<<<<<<<<<<<<<
- * info.obj = self
- *
- */
- __pyx_t_1 = __pyx_v_self->view.readonly;
- __pyx_v_info->readonly = __pyx_t_1;
-
- /* "View.MemoryView":547
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule(&lt;void *&gt; &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":518
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
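
`__getbuffer__` above implements the buffer protocol for the memoryview class: it rejects `PyBUF_WRITABLE` requests on read-only views, fills `info.shape`, `info.strides`, `info.suboffsets` and `info.format` only when the corresponding flag bit is set (NULL otherwise), and finally copies `buf`, `ndim`, `itemsize`, `len` and `readonly` across and sets `info.obj` so the exporter stays alive. The read-only check is the same contract the built-in buffer protocol enforces, which is easy to observe from Python:

    ro = memoryview(b"abc")              # bytes exports a read-only buffer
    assert ro.readonly
    try:
        ro[0] = 0                        # writing through a read-only view fails
    except TypeError:
        pass

    rw = memoryview(bytearray(b"abc"))   # bytearray exports a writable buffer
    rw[0] = ord("z")
    assert bytes(rw) == b"zbc"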
-
-/* "View.MemoryView":553
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":554
- * @property
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<<
- * transpose_memslice(&result.from_slice)
- * return result
- */
- __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error)
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":555
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error)
-
- /* "View.MemoryView":556
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- * return result # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":553
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
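
The `T` property copies the memoryview with `memoryview_copy` and then calls `transpose_memslice`, which transposes in place simply by reversing the shape and strides arrays; the data pointer never moves. A tiny sketch of that reversal:

    def transpose_strides(shape, strides):
        # What transpose_memslice does to the copied slice: reverse both arrays.
        return tuple(reversed(shape)), tuple(reversed(strides))

    # A C-contiguous 2x3 int layout becomes a 3x2 Fortran-ordered view.
    assert transpose_strides((2, 3), (12, 4)) == ((3, 2), (4, 12))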
-
-/* "View.MemoryView":559
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":560
- * @property
- * def base(self):
- * return self.obj # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->obj);
- __pyx_r = __pyx_v_self->obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":559
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":563
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_length;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":564
- * @property
- * def shape(self):
- * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) {
- __pyx_t_2 = __pyx_t_4;
- __pyx_v_length = (__pyx_t_2[0]);
- __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- }
- __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":563
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":567
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_stride;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":568
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":570
- * if self.view.strides == NULL:
- *
- * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<<
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]])
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 570, __pyx_L1_error)
-
- /* "View.MemoryView":568
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- }
-
- /* "View.MemoryView":572
- * raise ValueError("Buffer view does not expose strides")
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_v_stride = (__pyx_t_3[0]);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- }
- __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_6;
- __pyx_t_6 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":567
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":575
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":576
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":577
- * def suboffsets(self):
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim # <<<<<<<<<<<<<<
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]])
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":576
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- }
-
- /* "View.MemoryView":579
- * return (-1,) * self.view.ndim
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim);
- for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) {
- __pyx_t_4 = __pyx_t_6;
- __pyx_v_suboffset = (__pyx_t_4[0]);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":575
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
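
The `shape`, `strides` and `suboffsets` getters above all follow the same pattern: walk the corresponding `Py_ssize_t` array up to `view.ndim` and collect the values into a tuple, with `suboffsets` falling back to `(-1,) * ndim` when the view has no indirect dimensions. The built-in memoryview exposes the same metadata, so the values are easy to inspect (note the built-in type reports `()` rather than a tuple of -1 when there are no suboffsets):

    m = memoryview(bytearray(24)).cast("i", (2, 3))
    assert m.shape == (2, 3)
    assert m.strides == (12, 4)          # row stride and element stride, in bytes
    assert m.suboffsets == ()            # no indirect dimensions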
-
-/* "View.MemoryView":582
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":583
- * @property
- * def ndim(self):
- * return self.view.ndim # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":582
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":586
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":587
- * @property
- * def itemsize(self):
- * return self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":586
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":590
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":591
- * @property
- * def nbytes(self):
- * return self.size * self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":590
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":594
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":595
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- __pyx_t_1 = (__pyx_v_self->_size == Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":596
- * def size(self):
- * if self._size is None:
- * result = 1 # <<<<<<<<<<<<<<
- *
- * for length in self.view.shape[:self.view.ndim]:
- */
- __Pyx_INCREF(__pyx_int_1);
- __pyx_v_result = __pyx_int_1;
-
- /* "View.MemoryView":598
- * result = 1
- *
- * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<<
- * result *= length
- *
- */
- __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6);
- __pyx_t_6 = 0;
-
- /* "View.MemoryView":599
- *
- * for length in self.view.shape[:self.view.ndim]:
- * result *= length # <<<<<<<<<<<<<<
- *
- * self._size = result
- */
- __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6);
- __pyx_t_6 = 0;
- }
-
- /* "View.MemoryView":601
- * result *= length
- *
- * self._size = result # <<<<<<<<<<<<<<
- *
- * return self._size
- */
- __Pyx_INCREF(__pyx_v_result);
- __Pyx_GIVEREF(__pyx_v_result);
- __Pyx_GOTREF(__pyx_v_self->_size);
- __Pyx_DECREF(__pyx_v_self->_size);
- __pyx_v_self->_size = __pyx_v_result;
-
- /* "View.MemoryView":595
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- }
-
- /* "View.MemoryView":603
- * self._size = result
- *
- * return self._size # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->_size);
- __pyx_r = __pyx_v_self->_size;
- goto __pyx_L0;
-
- /* "View.MemoryView":594
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
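-/* Note on the two property getters above: they compile the Cython source quoted
- * in the surrounding comments. As a rough Python-level sketch of the same logic
- * (illustrative helper names only, not part of this module):
- *
- *     def _size(view):                    # product of the shape, cached in self._size
- *         result = 1
- *         for length in view.shape[:view.ndim]:
- *             result *= length
- *         return result
- *
- *     def _nbytes(view):                  # total bytes = number of items * item size
- *         return _size(view) * view.itemsize
- *
- * so, as with numpy arrays, nbytes == size * itemsize.
- */
-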
-/* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":607
- * def __len__(self):
- * if self.view.ndim >= 1:
- * return self.view.shape[0] # <<<<<<<<<<<<<<
- *
- * return 0
- */
- __pyx_r = (__pyx_v_self->view.shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- }
-
- /* "View.MemoryView":609
- * return self.view.shape[0]
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- * def __repr__(self):
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
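-/* __len__ above returns shape[0] when the view has at least one dimension and 0
- * for a zero-dimensional view; e.g. a view with shape (3, 4) reports len() == 3.
- */
-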
-/* "View.MemoryView":611
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":612
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":613
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__,
- * id(self)) # <<<<<<<<<<<<<<
- *
- * def __str__(self):
- */
- __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "View.MemoryView":612
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":611
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":615
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__str__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__str__", 0);
-
- /* "View.MemoryView":616
- *
- * def __str__(self):
- * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
- __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":615
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
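-/* __repr__ and __str__ above format the interned string constants
- * __pyx_kp_s_MemoryView_of_r_at_0x_x ("<MemoryView of %r at 0x%x>") and
- * __pyx_kp_s_MemoryView_of_r_object ("<MemoryView of %r object>"), so a view
- * over, say, a bytearray would print roughly as
- * "<MemoryView of 'bytearray' at 0x7f...>" and "<MemoryView of 'bytearray' object>".
- */
-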
-/* "View.MemoryView":619
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_c_contig", 0);
-
- /* "View.MemoryView":622
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":623
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def is_f_contig(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":619
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":625
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_f_contig", 0);
-
- /* "View.MemoryView":628
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":629
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def copy(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":625
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
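-/* is_c_contig / is_f_contig above take a temporary slice of the view and pass it
- * to slice_is_contig with order 'C' or 'F'. A simplified stride-based Python
- * sketch of what those orders mean (illustration only, not the helper used here):
- *
- *     def is_contig(shape, strides, itemsize, order):
- *         dims = range(len(shape))
- *         expected = itemsize
- *         # 'C': the last dimension varies fastest; 'F': the first one does.
- *         for i in (reversed(dims) if order == 'C' else dims):
- *             if shape[i] > 1 and strides[i] != expected:
- *                 return False
- *             expected *= shape[i]
- *         return True
- */
-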
-/* "View.MemoryView":631
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_mslice;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy", 0);
-
- /* "View.MemoryView":633
- * def copy(self):
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &mslice)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS));
-
- /* "View.MemoryView":635
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- *
- * slice_copy(self, &mslice) # <<<<<<<<<<<<<<
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice));
-
- /* "View.MemoryView":636
- *
- * slice_copy(self, &mslice)
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_C_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":641
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<<
- *
- * def copy_fortran(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":631
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":643
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy_fortran", 0);
-
- /* "View.MemoryView":645
- * def copy_fortran(self):
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &src)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS));
-
- /* "View.MemoryView":647
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- *
- * slice_copy(self, &src) # <<<<<<<<<<<<<<
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src));
-
- /* "View.MemoryView":648
- *
- * slice_copy(self, &src)
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_F_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error)
- __pyx_v_dst = __pyx_t_1;
-
- /* "View.MemoryView":653
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":643
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
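-/* copy() and copy_fortran() above each request a fresh contiguous copy of the
- * data: the flags mask out the opposite contiguity bit and then or in
- * PyBUF_C_CONTIGUOUS (copy) or PyBUF_F_CONTIGUOUS (copy_fortran) before calling
- * slice_copy_contig and memoryview_copy_from_slice. At the Cython level this is
- * what allows assignments like the following (illustrative function name):
- *
- *     def frobnicate(double[:, :] data):
- *         cdef double[:, ::1] c_work = data.copy()          # C-contiguous copy
- *         cdef double[::1, :] f_work = data.copy_fortran()  # Fortran-contiguous copy
- */
-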
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
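-/* The two "(tree fragment)" methods above make these objects explicitly
- * unpicklable: __reduce_cython__ and __setstate_cython__ both just raise
- * TypeError("no default __reduce__ due to non-trivial __cinit__"), so an attempt
- * to pickle a memoryview object is expected to fail with that error rather than
- * serialize a raw pointer into someone else's buffer.
- */
-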
-/* "View.MemoryView":657
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
-static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) {
- struct __pyx_memoryview_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_cwrapper", 0);
-
- /* "View.MemoryView":658
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<<
- * result.typeinfo = typeinfo
- * return result
- */
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_o);
- __Pyx_GIVEREF(__pyx_v_o);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":659
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_v_result->typeinfo = __pyx_v_typeinfo;
-
- /* "View.MemoryView":660
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- * return result # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_check')
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":657
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
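-/* memoryview_cwrapper, exposed to the rest of the generated code under the cname
- * __pyx_memoryview_new, is the C-level constructor this module uses to wrap an
- * arbitrary object: it calls the memoryview type with (o, flags, dtype_is_object)
- * and then stores the C typeinfo pointer on the result, exactly as the quoted
- * Cython source shows.
- */
-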
-/* "View.MemoryView":663
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("memoryview_check", 0);
-
- /* "View.MemoryView":664
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o):
- * return isinstance(o, memoryview) # <<<<<<<<<<<<<<
- *
- * cdef tuple _unellipsify(object index, int ndim):
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type);
- __pyx_r = __pyx_t_1;
- goto __pyx_L0;
-
- /* "View.MemoryView":663
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":666
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
-static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) {
- PyObject *__pyx_v_tup = NULL;
- PyObject *__pyx_v_result = NULL;
- int __pyx_v_have_slices;
- int __pyx_v_seen_ellipsis;
- CYTHON_UNUSED PyObject *__pyx_v_idx = NULL;
- PyObject *__pyx_v_item = NULL;
- Py_ssize_t __pyx_v_nslices;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- Py_ssize_t __pyx_t_5;
- PyObject *(*__pyx_t_6)(PyObject *);
- PyObject *__pyx_t_7 = NULL;
- Py_ssize_t __pyx_t_8;
- int __pyx_t_9;
- int __pyx_t_10;
- PyObject *__pyx_t_11 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_unellipsify", 0);
-
- /* "View.MemoryView":671
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- __pyx_t_1 = PyTuple_Check(__pyx_v_index);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":672
- * """
- * if not isinstance(index, tuple):
- * tup = (index,) # <<<<<<<<<<<<<<
- * else:
- * tup = index
- */
- __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_index);
- __Pyx_GIVEREF(__pyx_v_index);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index);
- __pyx_v_tup = __pyx_t_3;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":671
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":674
- * tup = (index,)
- * else:
- * tup = index # <<<<<<<<<<<<<<
- *
- * result = []
- */
- /*else*/ {
- __Pyx_INCREF(__pyx_v_index);
- __pyx_v_tup = __pyx_v_index;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":676
- * tup = index
- *
- * result = [] # <<<<<<<<<<<<<<
- * have_slices = False
- * seen_ellipsis = False
- */
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_v_result = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":677
- *
- * result = []
- * have_slices = False # <<<<<<<<<<<<<<
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- */
- __pyx_v_have_slices = 0;
-
- /* "View.MemoryView":678
- * result = []
- * have_slices = False
- * seen_ellipsis = False # <<<<<<<<<<<<<<
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- */
- __pyx_v_seen_ellipsis = 0;
-
- /* "View.MemoryView":679
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- __Pyx_INCREF(__pyx_int_0);
- __pyx_t_3 = __pyx_int_0;
- if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) {
- __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0;
- __pyx_t_6 = NULL;
- } else {
- __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_6)) {
- if (likely(PyList_CheckExact(__pyx_t_4))) {
- if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- } else {
- if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- }
- } else {
- __pyx_t_7 = __pyx_t_6(__pyx_t_4);
- if (unlikely(!__pyx_t_7)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 679, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_7);
- }
- __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7);
- __pyx_t_7 = 0;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3);
- __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_3);
- __pyx_t_3 = __pyx_t_7;
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":680
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":681
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":682
- * if item is Ellipsis:
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<<
- * seen_ellipsis = True
- * else:
- */
- __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error)
- __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-
- /* "View.MemoryView":683
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True # <<<<<<<<<<<<<<
- * else:
- * result.append(slice(None))
- */
- __pyx_v_seen_ellipsis = 1;
-
- /* "View.MemoryView":681
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":685
- * seen_ellipsis = True
- * else:
- * result.append(slice(None)) # <<<<<<<<<<<<<<
- * have_slices = True
- * else:
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error)
- }
- __pyx_L7:;
-
- /* "View.MemoryView":686
- * else:
- * result.append(slice(None))
- * have_slices = True # <<<<<<<<<<<<<<
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- */
- __pyx_v_have_slices = 1;
-
- /* "View.MemoryView":680
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":688
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- /*else*/ {
- __pyx_t_2 = PySlice_Check(__pyx_v_item);
- __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0);
- __pyx_t_1 = __pyx_t_10;
- __pyx_L9_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":689
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<<
- *
- * have_slices = have_slices or isinstance(item, slice)
- */
- __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_Raise(__pyx_t_11, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
- __PYX_ERR(1, 689, __pyx_L1_error)
-
- /* "View.MemoryView":688
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- }
-
- /* "View.MemoryView":691
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<<
- * result.append(item)
- *
- */
- __pyx_t_10 = (__pyx_v_have_slices != 0);
- if (!__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = PySlice_Check(__pyx_v_item);
- __pyx_t_2 = (__pyx_t_10 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L11_bool_binop_done:;
- __pyx_v_have_slices = __pyx_t_1;
-
- /* "View.MemoryView":692
- *
- * have_slices = have_slices or isinstance(item, slice)
- * result.append(item) # <<<<<<<<<<<<<<
- *
- * nslices = ndim - len(result)
- */
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error)
- }
- __pyx_L6:;
-
- /* "View.MemoryView":679
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- }
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":694
- * result.append(item)
- *
- * nslices = ndim - len(result) # <<<<<<<<<<<<<<
- * if nslices:
- * result.extend([slice(None)] * nslices)
- */
- __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error)
- __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5);
-
- /* "View.MemoryView":695
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- __pyx_t_1 = (__pyx_v_nslices != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":696
- * nslices = ndim - len(result)
- * if nslices:
- * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<<
- *
- * return have_slices or nslices, tuple(result)
- */
- __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":695
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- }
-
- /* "View.MemoryView":698
- * result.extend([slice(None)] * nslices)
- *
- * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<<
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- */
- __Pyx_XDECREF(__pyx_r);
- if (!__pyx_v_have_slices) {
- } else {
- __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L14_bool_binop_done;
- }
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_L14_bool_binop_done:;
- __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4);
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_r = ((PyObject*)__pyx_t_11);
- __pyx_t_11 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":666
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_11);
- __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_tup);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_XDECREF(__pyx_v_item);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
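-/* _unellipsify above is a direct compilation of the Cython source quoted in the
- * comments. The same logic written as plain Python (illustration only; __index__
- * stands in for the C-level PyIndex_Check):
- *
- *     def _unellipsify(index, ndim):
- *         tup = index if isinstance(index, tuple) else (index,)
- *         result, have_slices, seen_ellipsis = [], False, False
- *         for item in tup:
- *             if item is Ellipsis:
- *                 if not seen_ellipsis:
- *                     result.extend([slice(None)] * (ndim - len(tup) + 1))
- *                     seen_ellipsis = True
- *                 else:
- *                     result.append(slice(None))
- *                 have_slices = True
- *             else:
- *                 if not isinstance(item, slice) and not hasattr(type(item), "__index__"):
- *                     raise TypeError("Cannot index with type '%s'" % type(item))
- *                 have_slices = have_slices or isinstance(item, slice)
- *                 result.append(item)
- *         nslices = ndim - len(result)
- *         if nslices:
- *             result.extend([slice(None)] * nslices)
- *         return have_slices or nslices, tuple(result)
- *
- * e.g. _unellipsify((Ellipsis, 0), 3) yields have_slices=True together with
- * (slice(None, None, None), slice(None, None, None), 0).
- */
-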
-/* "View.MemoryView":700
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
-static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- Py_ssize_t *__pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assert_direct_dimensions", 0);
-
- /* "View.MemoryView":701
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported")
- */
- __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim);
- for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) {
- __pyx_t_1 = __pyx_t_3;
- __pyx_v_suboffset = (__pyx_t_1[0]);
-
- /* "View.MemoryView":702
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":703
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_Raise(__pyx_t_5, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __PYX_ERR(1, 703, __pyx_L1_error)
-
- /* "View.MemoryView":702
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- }
- }
-
- /* "View.MemoryView":700
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
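-/* assert_direct_dimensions above scans the first ndim suboffsets and raises
- * ValueError("Indirect dimensions not supported") as soon as one is >= 0, i.e.
- * whenever the buffer uses PIL-style indirect (pointer-chasing) dimensions.
- * The quoted Cython source is already the whole story:
- *
- *     for suboffset in suboffsets[:ndim]:
- *         if suboffset >= 0:
- *             raise ValueError("Indirect dimensions not supported")
- */
-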
-/* "View.MemoryView":710
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) {
- int __pyx_v_new_ndim;
- int __pyx_v_suboffset_dim;
- int __pyx_v_dim;
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- __Pyx_memviewslice *__pyx_v_p_src;
- struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0;
- __Pyx_memviewslice *__pyx_v_p_dst;
- int *__pyx_v_p_suboffset_dim;
- Py_ssize_t __pyx_v_start;
- Py_ssize_t __pyx_v_stop;
- Py_ssize_t __pyx_v_step;
- int __pyx_v_have_start;
- int __pyx_v_have_stop;
- int __pyx_v_have_step;
- PyObject *__pyx_v_index = NULL;
- struct __pyx_memoryview_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- struct __pyx_memoryview_obj *__pyx_t_4;
- char *__pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t __pyx_t_7;
- PyObject *(*__pyx_t_8)(PyObject *);
- PyObject *__pyx_t_9 = NULL;
- Py_ssize_t __pyx_t_10;
- int __pyx_t_11;
- Py_ssize_t __pyx_t_12;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memview_slice", 0);
-
- /* "View.MemoryView":711
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices):
- * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<<
- * cdef bint negative_step
- * cdef __Pyx_memviewslice src, dst
- */
- __pyx_v_new_ndim = 0;
- __pyx_v_suboffset_dim = -1;
-
- /* "View.MemoryView":718
- *
- *
- * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<<
- *
- * cdef _memoryviewslice memviewsliceobj
- */
- (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst))));
-
- /* "View.MemoryView":722
- * cdef _memoryviewslice memviewsliceobj
- *
- * assert memview.view.ndim > 0 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- #ifndef CYTHON_WITHOUT_ASSERTIONS
- if (unlikely(!Py_OptimizeFlag)) {
- if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) {
- PyErr_SetNone(PyExc_AssertionError);
- __PYX_ERR(1, 722, __pyx_L1_error)
- }
- }
- #endif
-
- /* "View.MemoryView":724
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":725
- *
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview # <<<<<<<<<<<<<<
- * p_src = &memviewsliceobj.from_slice
- * else:
- */
- if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error)
- __pyx_t_3 = ((PyObject *)__pyx_v_memview);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":726
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<<
- * else:
- * slice_copy(memview, &src)
- */
- __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice);
-
- /* "View.MemoryView":724
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":728
- * p_src = &memviewsliceobj.from_slice
- * else:
- * slice_copy(memview, &src) # <<<<<<<<<<<<<<
- * p_src = &src
- *
- */
- /*else*/ {
- __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src));
-
- /* "View.MemoryView":729
- * else:
- * slice_copy(memview, &src)
- * p_src = &src # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_p_src = (&__pyx_v_src);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":735
- *
- *
- * dst.memview = p_src.memview # <<<<<<<<<<<<<<
- * dst.data = p_src.data
- *
- */
- __pyx_t_4 = __pyx_v_p_src->memview;
- __pyx_v_dst.memview = __pyx_t_4;
-
- /* "View.MemoryView":736
- *
- * dst.memview = p_src.memview
- * dst.data = p_src.data # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __pyx_v_p_src->data;
- __pyx_v_dst.data = __pyx_t_5;
-
- /* "View.MemoryView":741
- *
- *
- * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<<
- * cdef int *p_suboffset_dim = &suboffset_dim
- * cdef Py_ssize_t start, stop, step
- */
- __pyx_v_p_dst = (&__pyx_v_dst);
-
- /* "View.MemoryView":742
- *
- * cdef __Pyx_memviewslice *p_dst = &dst
- * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<<
- * cdef Py_ssize_t start, stop, step
- * cdef bint have_start, have_stop, have_step
- */
- __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim);
-
- /* "View.MemoryView":746
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- __pyx_t_6 = 0;
- if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) {
- __pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0;
- __pyx_t_8 = NULL;
- } else {
- __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_8)) {
- if (likely(PyList_CheckExact(__pyx_t_3))) {
- if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- } else {
- if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- }
- } else {
- __pyx_t_9 = __pyx_t_8(__pyx_t_3);
- if (unlikely(!__pyx_t_9)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 746, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_9);
- }
- __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9);
- __pyx_t_9 = 0;
- __pyx_v_dim = __pyx_t_6;
- __pyx_t_6 = (__pyx_t_6 + 1);
-
- /* "View.MemoryView":747
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":751
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<<
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- */
- __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error)
-
- /* "View.MemoryView":748
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index):
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error)
-
- /* "View.MemoryView":747
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":754
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- __pyx_t_2 = (__pyx_v_index == Py_None);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":755
- * False)
- * elif index is None:
- * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<<
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- */
- (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1;
-
- /* "View.MemoryView":756
- * elif index is None:
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<<
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1
- */
- (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0;
-
- /* "View.MemoryView":757
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<<
- * new_ndim += 1
- * else:
- */
- (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L;
-
- /* "View.MemoryView":758
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1 # <<<<<<<<<<<<<<
- * else:
- * start = index.start or 0
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
-
- /* "View.MemoryView":754
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":760
- * new_ndim += 1
- * else:
- * start = index.start or 0 # <<<<<<<<<<<<<<
- * stop = index.stop or 0
- * step = index.step or 0
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L7_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L7_bool_binop_done:;
- __pyx_v_start = __pyx_t_10;
-
- /* "View.MemoryView":761
- * else:
- * start = index.start or 0
- * stop = index.stop or 0 # <<<<<<<<<<<<<<
- * step = index.step or 0
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L9_bool_binop_done:;
- __pyx_v_stop = __pyx_t_10;
-
- /* "View.MemoryView":762
- * start = index.start or 0
- * stop = index.stop or 0
- * step = index.step or 0 # <<<<<<<<<<<<<<
- *
- * have_start = index.start is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L11_bool_binop_done:;
- __pyx_v_step = __pyx_t_10;
-
- /* "View.MemoryView":764
- * step = index.step or 0
- *
- * have_start = index.start is not None # <<<<<<<<<<<<<<
- * have_stop = index.stop is not None
- * have_step = index.step is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_start = __pyx_t_1;
-
- /* "View.MemoryView":765
- *
- * have_start = index.start is not None
- * have_stop = index.stop is not None # <<<<<<<<<<<<<<
- * have_step = index.step is not None
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_stop = __pyx_t_1;
-
- /* "View.MemoryView":766
- * have_start = index.start is not None
- * have_stop = index.stop is not None
- * have_step = index.step is not None # <<<<<<<<<<<<<<
- *
- * slice_memviewslice(
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_step = __pyx_t_1;
-
- /* "View.MemoryView":768
- * have_step = index.step is not None
- *
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error)
-
- /* "View.MemoryView":774
- * have_start, have_stop, have_step,
- * True)
- * new_ndim += 1 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
- }
- __pyx_L6:;
-
- /* "View.MemoryView":746
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":776
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":777
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":778
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_dtype_func,
- * memview.dtype_is_object)
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) }
-
- /* "View.MemoryView":779
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- * else:
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) }
-
- /* "View.MemoryView":777
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":776
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- }
-
- /* "View.MemoryView":782
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- /*else*/ {
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":783
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL,
- * memview.dtype_is_object) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "View.MemoryView":782
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":710
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
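- The function above is the generated C for Cython's `memview_slice` helper: it walks the `indices` tuple and, per dimension, treats an integer index (`PyIndex_Check`) as a point lookup, `None` as a new length-1 axis (shape 1, stride 0, suboffset -1), and anything else as a slice whose `start`/`stop`/`step` values and `have_*` flags are forwarded to `slice_memviewslice`. A minimal pure-Python sketch of that dispatch (illustrative only; `classify_indices` and its return format are invented for this note):

import operator

def classify_indices(indices):
    """Toy version of the per-dimension dispatch in memview_slice above."""
    plan, new_ndim = [], 0
    for dim, index in enumerate(indices):
        if index is None:
            # None inserts a broadcast axis: shape 1, stride 0, suboffset -1.
            plan.append(("new_axis", dim))
            new_ndim += 1
        elif isinstance(index, slice):
            # Missing slice fields default to 0, with have_* flags recording
            # which ones were actually given (mirroring the C above).
            plan.append(("slice", dim,
                         index.start or 0, index.stop or 0, index.step or 0,
                         index.start is not None,
                         index.stop is not None,
                         index.step is not None))
            new_ndim += 1
        else:
            # Integer-like indices collapse the dimension (no new axis).
            plan.append(("index", dim, operator.index(index)))
    return new_ndim, plan

print(classify_indices((2, None, slice(1, None, 2))))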
-/* "View.MemoryView":807
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) {
- Py_ssize_t __pyx_v_new_shape;
- int __pyx_v_negative_step;
- int __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":827
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":829
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- __pyx_t_1 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":830
- *
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":829
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- }
-
- /* "View.MemoryView":831
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- __pyx_t_1 = (0 <= __pyx_v_start);
- if (__pyx_t_1) {
- __pyx_t_1 = (__pyx_v_start < __pyx_v_shape);
- }
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":832
- * start += shape
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<<
- * else:
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error)
-
- /* "View.MemoryView":831
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- }
-
- /* "View.MemoryView":827
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":835
- * else:
- *
- * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<<
- *
- * if have_step and step == 0:
- */
- /*else*/ {
- __pyx_t_1 = ((__pyx_v_have_step != 0) != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step < 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L6_bool_binop_done:;
- __pyx_v_negative_step = __pyx_t_2;
-
- /* "View.MemoryView":837
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- __pyx_t_1 = (__pyx_v_have_step != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step == 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L9_bool_binop_done:;
- if (__pyx_t_2) {
-
- /* "View.MemoryView":838
- *
- * if have_step and step == 0:
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error)
-
- /* "View.MemoryView":837
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- }
-
- /* "View.MemoryView":841
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- __pyx_t_2 = (__pyx_v_have_start != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":842
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":843
- * if have_start:
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if start < 0:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":844
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":845
- * start += shape
- * if start < 0:
- * start = 0 # <<<<<<<<<<<<<<
- * elif start >= shape:
- * if negative_step:
- */
- __pyx_v_start = 0;
-
- /* "View.MemoryView":844
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- }
-
- /* "View.MemoryView":842
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- goto __pyx_L12;
- }
-
- /* "View.MemoryView":846
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":847
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":848
- * elif start >= shape:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = shape
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":847
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L14;
- }
-
- /* "View.MemoryView":850
- * start = shape - 1
- * else:
- * start = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- /*else*/ {
- __pyx_v_start = __pyx_v_shape;
- }
- __pyx_L14:;
-
- /* "View.MemoryView":846
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- }
- __pyx_L12:;
-
- /* "View.MemoryView":841
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- goto __pyx_L11;
- }
-
- /* "View.MemoryView":852
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":853
- * else:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":852
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L15;
- }
-
- /* "View.MemoryView":855
- * start = shape - 1
- * else:
- * start = 0 # <<<<<<<<<<<<<<
- *
- * if have_stop:
- */
- /*else*/ {
- __pyx_v_start = 0;
- }
- __pyx_L15:;
- }
- __pyx_L11:;
-
- /* "View.MemoryView":857
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- __pyx_t_2 = (__pyx_v_have_stop != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":858
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":859
- * if have_stop:
- * if stop < 0:
- * stop += shape # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop = 0
- */
- __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape);
-
- /* "View.MemoryView":860
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":861
- * stop += shape
- * if stop < 0:
- * stop = 0 # <<<<<<<<<<<<<<
- * elif stop > shape:
- * stop = shape
- */
- __pyx_v_stop = 0;
-
- /* "View.MemoryView":860
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- }
-
- /* "View.MemoryView":858
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- goto __pyx_L17;
- }
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":863
- * stop = 0
- * elif stop > shape:
- * stop = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- __pyx_v_stop = __pyx_v_shape;
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- }
- __pyx_L17:;
-
- /* "View.MemoryView":857
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- goto __pyx_L16;
- }
-
- /* "View.MemoryView":865
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":866
- * else:
- * if negative_step:
- * stop = -1 # <<<<<<<<<<<<<<
- * else:
- * stop = shape
- */
- __pyx_v_stop = -1L;
-
- /* "View.MemoryView":865
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- goto __pyx_L19;
- }
-
- /* "View.MemoryView":868
- * stop = -1
- * else:
- * stop = shape # <<<<<<<<<<<<<<
- *
- * if not have_step:
- */
- /*else*/ {
- __pyx_v_stop = __pyx_v_shape;
- }
- __pyx_L19:;
- }
- __pyx_L16:;
-
- /* "View.MemoryView":870
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":871
- *
- * if not have_step:
- * step = 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_step = 1;
-
- /* "View.MemoryView":870
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- }
-
- /* "View.MemoryView":875
- *
- * with cython.cdivision(True):
- * new_shape = (stop - start) // step # <<<<<<<<<<<<<<
- *
- * if (stop - start) - step * new_shape:
- */
- __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step);
-
- /* "View.MemoryView":877
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":878
- *
- * if (stop - start) - step * new_shape:
- * new_shape += 1 # <<<<<<<<<<<<<<
- *
- * if new_shape < 0:
- */
- __pyx_v_new_shape = (__pyx_v_new_shape + 1);
-
- /* "View.MemoryView":877
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- }
-
- /* "View.MemoryView":880
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":881
- *
- * if new_shape < 0:
- * new_shape = 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_new_shape = 0;
-
- /* "View.MemoryView":880
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- }
-
- /* "View.MemoryView":884
- *
- *
- * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<<
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset
- */
- (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step);
-
- /* "View.MemoryView":885
- *
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<<
- * dst.suboffsets[new_ndim] = suboffset
- *
- */
- (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape;
-
- /* "View.MemoryView":886
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<<
- *
- *
- */
- (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":889
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":890
- *
- * if suboffset_dim[0] < 0:
- * dst.data += start * stride # <<<<<<<<<<<<<<
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- */
- __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride));
-
- /* "View.MemoryView":889
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- goto __pyx_L23;
- }
-
- /* "View.MemoryView":892
- * dst.data += start * stride
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<<
- *
- * if suboffset >= 0:
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_suboffset_dim[0]);
- (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride));
- }
- __pyx_L23:;
-
- /* "View.MemoryView":894
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":895
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = (<char **> dst.data)[0] + suboffset
- */
- __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":896
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = (<char **> dst.data)[0] + suboffset
- * else:
- */
- __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":897
- * if not is_slice:
- * if new_ndim == 0:
- * dst.data = (<char **> dst.data)[0] + suboffset # <<<<<<<<<<<<<<
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- */
- __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":896
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = (<char **> dst.data)[0] + suboffset
- * else:
- */
- goto __pyx_L26;
- }
-
- /* "View.MemoryView":899
- * dst.data = (<char **> dst.data)[0] + suboffset
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<<
- * "must be indexed and not sliced", dim)
- * else:
- */
- /*else*/ {
-
- /* "View.MemoryView":900
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<<
- * else:
- * suboffset_dim[0] = new_ndim
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error)
- }
- __pyx_L26:;
-
- /* "View.MemoryView":895
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = (<char **> dst.data)[0] + suboffset
- */
- goto __pyx_L25;
- }
-
- /* "View.MemoryView":902
- * "must be indexed and not sliced", dim)
- * else:
- * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<<
- *
- * return 0
- */
- /*else*/ {
- (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim;
- }
- __pyx_L25:;
-
- /* "View.MemoryView":894
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- }
-
- /* "View.MemoryView":904
- * suboffset_dim[0] = new_ndim
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":807
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = -1;
- __pyx_L0:;
- return __pyx_r;
-}
-
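- `slice_memviewslice` above normalises one slice per dimension: negative `start`/`stop` wrap around `shape`, out-of-range values are clamped (to the last element or past-the-end depending on the sign of the step), a missing step defaults to 1, a zero step raises `ValueError`, and the new extent is `(stop - start) // step`, bumped by one when the division has a remainder and floored at 0; the data pointer (or the suboffset of an earlier indirect dimension) is then advanced by `start * stride`. The clamping and extent computation agree with Python's own slice semantics, which `slice.indices()` exposes directly — a small sketch (the function name is invented for this note):

def sliced_extent(shape, start=None, stop=None, step=None):
    """Reproduce the clamping and new_shape computation above with
    Python's built-in slice.indices()."""
    start, stop, step = slice(start, stop, step).indices(shape)
    new_shape = (stop - start) // step
    if (stop - start) - step * new_shape:   # any remainder rounds up
        new_shape += 1
    return max(new_shape, 0), start, step

# A reversed slice over 10 elements: 5 items, starting at index 9, step -2.
assert sliced_extent(10, None, None, -2) == (5, 9, -2)
# The extent always agrees with len(range(start, stop, step)) after clamping.
assert sliced_extent(10, 2, 8, 3)[0] == len(range(2, 8, 3)) == 2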
-/* "View.MemoryView":910
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
-static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) {
- Py_ssize_t __pyx_v_shape;
- Py_ssize_t __pyx_v_stride;
- Py_ssize_t __pyx_v_suboffset;
- Py_ssize_t __pyx_v_itemsize;
- char *__pyx_v_resultp;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("pybuffer_index", 0);
-
- /* "View.MemoryView":912
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index,
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<<
- * cdef Py_ssize_t itemsize = view.itemsize
- * cdef char *resultp
- */
- __pyx_v_suboffset = -1L;
-
- /* "View.MemoryView":913
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<<
- * cdef char *resultp
- *
- */
- __pyx_t_1 = __pyx_v_view->itemsize;
- __pyx_v_itemsize = __pyx_t_1;
-
- /* "View.MemoryView":916
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":917
- *
- * if view.ndim == 0:
- * shape = view.len / itemsize # <<<<<<<<<<<<<<
- * stride = itemsize
- * else:
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 917, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 917, __pyx_L1_error)
- }
- __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize);
-
- /* "View.MemoryView":918
- * if view.ndim == 0:
- * shape = view.len / itemsize
- * stride = itemsize # <<<<<<<<<<<<<<
- * else:
- * shape = view.shape[dim]
- */
- __pyx_v_stride = __pyx_v_itemsize;
-
- /* "View.MemoryView":916
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":920
- * stride = itemsize
- * else:
- * shape = view.shape[dim] # <<<<<<<<<<<<<<
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- */
- /*else*/ {
- __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]);
-
- /* "View.MemoryView":921
- * else:
- * shape = view.shape[dim]
- * stride = view.strides[dim] # <<<<<<<<<<<<<<
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim]
- */
- __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]);
-
- /* "View.MemoryView":922
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":923
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<<
- *
- * if index < 0:
- */
- __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]);
-
- /* "View.MemoryView":922
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":925
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":926
- *
- * if index < 0:
- * index += view.shape[dim] # <<<<<<<<<<<<<<
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- */
- __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim]));
-
- /* "View.MemoryView":927
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":928
- * index += view.shape[dim]
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * if index >= shape:
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 928, __pyx_L1_error)
-
- /* "View.MemoryView":927
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":925
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- }
-
- /* "View.MemoryView":930
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":931
- *
- * if index >= shape:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * resultp = bufp + index * stride
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 931, __pyx_L1_error)
-
- /* "View.MemoryView":930
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":933
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * resultp = bufp + index * stride # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * resultp = (<char **> resultp)[0] + suboffset
- */
- __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride));
-
- /* "View.MemoryView":934
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = (<char **> resultp)[0] + suboffset
- *
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":935
- * resultp = bufp + index * stride
- * if suboffset >= 0:
- * resultp = (<char **> resultp)[0] + suboffset # <<<<<<<<<<<<<<
- *
- * return resultp
- */
- __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":934
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = (<char **> resultp)[0] + suboffset
- *
- */
- }
-
- /* "View.MemoryView":937
- * resultp = (<char **> resultp)[0] + suboffset
- *
- * return resultp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_resultp;
- goto __pyx_L0;
-
- /* "View.MemoryView":910
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
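- `pybuffer_index` above resolves a single integer index against a `Py_buffer`: a negative index wraps once by adding the extent, anything still out of range raises `IndexError("Out of bounds on buffer access (axis %d)")`, and the element address is `bufp + index * stride`, with one extra pointer hop when the dimension has a suboffset (indirect buffers). A hedged Python sketch of the offset arithmetic (the helper name is made up; real suboffset chasing needs the C pointer dereference):

def buffer_offset(index, shape, stride, dim=0):
    """Mirror the index normalisation in pybuffer_index: wrap a negative
    index once, bounds-check, then return the byte offset."""
    if index < 0:
        index += shape
        if index < 0:
            raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
    if index >= shape:
        raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
    return index * stride

assert buffer_offset(-1, shape=5, stride=8) == 32   # last int64 in a 5-element row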
-/* "View.MemoryView":943
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
-static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) {
- int __pyx_v_ndim;
- Py_ssize_t *__pyx_v_shape;
- Py_ssize_t *__pyx_v_strides;
- int __pyx_v_i;
- int __pyx_v_j;
- int __pyx_r;
- int __pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- long __pyx_t_3;
- long __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- Py_ssize_t __pyx_t_6;
- int __pyx_t_7;
- int __pyx_t_8;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":944
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0:
- * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<<
- *
- * cdef Py_ssize_t *shape = memslice.shape
- */
- __pyx_t_1 = __pyx_v_memslice->memview->view.ndim;
- __pyx_v_ndim = __pyx_t_1;
-
- /* "View.MemoryView":946
- * cdef int ndim = memslice.memview.view.ndim
- *
- * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<<
- * cdef Py_ssize_t *strides = memslice.strides
- *
- */
- __pyx_t_2 = __pyx_v_memslice->shape;
- __pyx_v_shape = __pyx_t_2;
-
- /* "View.MemoryView":947
- *
- * cdef Py_ssize_t *shape = memslice.shape
- * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_2 = __pyx_v_memslice->strides;
- __pyx_v_strides = __pyx_t_2;
-
- /* "View.MemoryView":951
- *
- * cdef int i, j
- * for i in range(ndim / 2): # <<<<<<<<<<<<<<
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- */
- __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2);
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":952
- * cdef int i, j
- * for i in range(ndim / 2):
- * j = ndim - 1 - i # <<<<<<<<<<<<<<
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i]
- */
- __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i);
-
- /* "View.MemoryView":953
- * for i in range(ndim / 2):
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<<
- * shape[i], shape[j] = shape[j], shape[i]
- *
- */
- __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]);
- __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]);
- (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5;
- (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6;
-
- /* "View.MemoryView":954
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<<
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- */
- __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]);
- __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]);
- (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6;
- (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5;
-
- /* "View.MemoryView":956
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0);
- if (!__pyx_t_8) {
- } else {
- __pyx_t_7 = __pyx_t_8;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0);
- __pyx_t_7 = __pyx_t_8;
- __pyx_L6_bool_binop_done:;
- if (__pyx_t_7) {
-
- /* "View.MemoryView":957
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<<
- *
- * return 1
- */
- __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error)
-
- /* "View.MemoryView":956
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- }
- }
-
- /* "View.MemoryView":959
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- * return 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 1;
- goto __pyx_L0;
-
- /* "View.MemoryView":943
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = 0;
- __pyx_L0:;
- return __pyx_r;
-}
-
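- `transpose_memslice` above transposes in place by swapping `shape[i]`/`shape[j]` and `strides[i]`/`strides[j]` for mirrored pairs of dimensions, and raises `ValueError` when either dimension is indirect (suboffset >= 0), since a pointer-chasing layout cannot simply be reversed. The same shape/stride reversal is what NumPy's `.T` exposes for a 2-D array, which makes a convenient cross-check:

import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)   # C-contiguous, strides (32, 8)
assert a.T.shape == a.shape[::-1]                 # (4, 3)
assert a.T.strides == a.strides[::-1]             # (8, 32): same buffer, swapped strides
assert a.T[1, 2] == a[2, 1]                       # no data is copied or moved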
-/* "View.MemoryView":976
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
-/* Python wrapper */
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":977
- *
- * def __dealloc__(self):
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1);
-
- /* "View.MemoryView":976
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":979
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":980
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":981
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp) # <<<<<<<<<<<<<<
- * else:
- * return memoryview.convert_item_to_object(self, itemp)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":980
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- }
-
- /* "View.MemoryView":983
- * return self.to_object_func(itemp)
- * else:
- * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":979
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":985
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":986
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":987
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<<
- * else:
- * memoryview.assign_item_from_object(self, itemp, value)
- */
- __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error)
-
- /* "View.MemoryView":986
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":989
- * self.to_dtype_func(itemp, value)
- * else:
- * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<<
- *
- * @property
- */
- /*else*/ {
- __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":985
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
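- The two methods above are `_memoryviewslice`'s element converters: `convert_item_to_object` turns the raw bytes at `itemp` into a Python object and `assign_item_from_object` writes one back, each preferring the dtype-specific function pointer (`to_object_func` / `to_dtype_func`) and otherwise falling back to the generic `memoryview` implementation. A sketch of that dispatch with `struct` as a stand-in fallback (the `'d'` format string and both helper names are assumptions made for this note):

import struct

def convert_item(raw, to_object_func=None):
    """Read one element: use the dtype-specific converter when present,
    otherwise fall back to a generic struct unpack (here: C double)."""
    if to_object_func is not None:
        return to_object_func(raw)
    return struct.unpack("d", raw)[0]

def assign_item(value, to_dtype_func=None):
    """Write one element: the mirror image of convert_item."""
    if to_dtype_func is not None:
        return to_dtype_func(value)
    return struct.pack("d", value)

assert convert_item(assign_item(2.5)) == 2.5
assert convert_item(b"\x01", to_object_func=lambda b: bool(b[0])) is True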
-/* "View.MemoryView":992
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":993
- * @property
- * def base(self):
- * return self.from_object # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->from_object);
- __pyx_r = __pyx_v_self->from_object;
- goto __pyx_L0;
-
- /* "View.MemoryView":992
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
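- The `base` property above simply hands back `from_object`, the object the sliced view was created from, so a view keeps a reference to its original exporter and keeps it alive. NumPy views behave the same way, which is an easy way to see the idea from Python:

import numpy as np

a = np.arange(6)
v = a[1:4]            # a view, no copy
assert v.base is a    # the view keeps a reference to the array that owns the data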
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
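- `__reduce_cython__` and `__setstate_cython__` above both raise `TypeError("no default __reduce__ due to non-trivial __cinit__")`, i.e. `_memoryviewslice` objects are deliberately unpicklable. Like plain Python memoryviews they borrow someone else's buffer, so the data has to be copied into an owning object before serialisation:

import pickle

try:
    pickle.dumps(memoryview(b"abc"))      # built-in memoryviews refuse too
except TypeError as exc:
    print("cannot pickle:", exc)

pickle.dumps(bytes(memoryview(b"abc")))   # copy into an owning object first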
-/* "View.MemoryView":999
- *
- * @cname('__pyx_memoryview_fromslice')
- * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<<
- * int ndim,
- * object (*to_object_func)(char *),
- */
-
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- __Pyx_TypeInfo *__pyx_t_4;
- Py_buffer __pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- Py_ssize_t *__pyx_t_8;
- Py_ssize_t __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_fromslice", 0);
-
- /* "View.MemoryView":1007
- * cdef _memoryviewslice result
- *
- * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1008
- *
- * if memviewslice.memview == Py_None:
- * return None # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
-
- /* "View.MemoryView":1007
- * cdef _memoryviewslice result
- *
- * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- }
-
- /* "View.MemoryView":1013
- *
- *
- * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<<
- *
- * result.from_slice = memviewslice
- */
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None);
- __Pyx_INCREF(__pyx_int_0);
- __Pyx_GIVEREF(__pyx_int_0);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1015
- * result = _memoryviewslice(None, 0, dtype_is_object)
- *
- * result.from_slice = memviewslice # <<<<<<<<<<<<<<
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- */
- __pyx_v_result->from_slice = __pyx_v_memviewslice;
-
- /* "View.MemoryView":1016
- *
- * result.from_slice = memviewslice
- * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<<
- *
- * result.from_object = (<memoryview> memviewslice.memview).base
- */
- __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1);
-
- /* "View.MemoryView":1018
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- * result.from_object = (<memoryview> memviewslice.memview).base # <<<<<<<<<<<<<<
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_2);
- __Pyx_GOTREF(__pyx_v_result->from_object);
- __Pyx_DECREF(__pyx_v_result->from_object);
- __pyx_v_result->from_object = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1019
- *
- * result.from_object = (<memoryview> memviewslice.memview).base
- * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<<
- *
- * result.view = memviewslice.memview.view
- */
- __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo;
- __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4;
-
- /* "View.MemoryView":1021
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- * result.view = memviewslice.memview.view # <<<<<<<<<<<<<<
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim
- */
- __pyx_t_5 = __pyx_v_memviewslice.memview->view;
- __pyx_v_result->__pyx_base.view = __pyx_t_5;
-
- /* "View.MemoryView":1022
- *
- * result.view = memviewslice.memview.view
- * result.view.buf = memviewslice.data # <<<<<<<<<<<<<<
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- */
- __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data);
-
- /* "View.MemoryView":1023
- * result.view = memviewslice.memview.view
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim;
-
- /* "View.MemoryView":1024
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None;
-
- /* "View.MemoryView":1025
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * if (