diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Computax Software __FULL__ Free Download Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Computax Software __FULL__ Free Download Crack.md deleted file mode 100644 index 046335da046f45948d59a4e1c3f9a4d4590065f8..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Computax Software __FULL__ Free Download Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -
Download –––––>>> https://imgfil.com/2uxZ9u
Download File ✔✔✔ https://imgfil.com/2uy1IT
Tamil movies, also known as Kollywood movies, are films made in the Tamil language, one of the oldest and most widely spoken languages in India. Tamil cinema is one of the most prolific and influential film industries in the world, producing over 200 films every year. Tamil movies are known for their rich cultural diversity, social relevance, artistic excellence, and commercial success. They cater to a wide range of audiences, from rural masses to urban elites, from regional fans to global admirers.
-DOWNLOAD --->>> https://jinyurl.com/2uNLKs
If you are a fan of Tamil movies, you must be eagerly waiting for the latest releases of 2022. This year promises to be an exciting one for Kollywood lovers, as many big-budget and star-studded films are lined up for release. Whether you are looking for action thrillers, romantic comedies, family dramas, or historical epics, you will find something to suit your taste and mood. Here are some of the most anticipated Tamil movies of 2022 and their details.
| Movie | Genre | Stars | Director | Plot |
|---|---|---|---|---|
| Vikram | Action/Crime/Thriller | Kamal Haasan, Vijay Sethupathi, Fahadh Faasil | Lokesh Kanagaraj | A high-octane action film in which a special investigator assigned to a case of serial killings finds that the case is not what it seems, and that following this path can only end in a war between everyone involved. |
| Ponniyin Selvan: Part I | Action/Adventure/Drama | Vikram, Aishwarya Rai Bachchan, Jayam Ravi, Karthi | Mani Ratnam | An adaptation of the classic historical novel by Kalki Krishnamurthy, which narrates the story of Arulmozhivarman, who later became the great Chola emperor Rajaraja Chola I. |
| Mahaan | Action/Drama/Thriller | Vikram, Simran, Dhruv Vikram, Bobby Simha | Karthik Subbaraj | Gandhi Mahaan, a school teacher, is abandoned by his family after he decides to live a life of his own, with personal freedom. |
| Vendhu Thanindhathu Kaadu | Action/Crime/Drama | Silambarasan Rajendar, Siddhi Idnani, Radhika Sarathkumar | Gautham Vasudev Menon | Muthu, a lower-caste youngster, moves to the streets of Mumbai to make a living. His quest draws him into a series of unexpected events and the underground activities of Mumbai's Tamil gangsters. Will he get to the top? |
| Mudhal Nee Mudivum Nee | Drama/Fantasy/Romance | Kishen Das, Meetha Raghunath, Harish Kumar | Darbuka Siva | In the 1990s, a group of high school students in a strict Catholic school navigate their way through everyday teen pressures. |
The Tamil movie industry is not without its share of challenges and difficulties. One of the major problems that plagues the industry is piracy. Piracy is the illegal copying and distribution of movies without the permission of the producers or the creators. Piracy causes huge losses to the industry, as it reduces the revenue from ticket sales, streaming rights, and other sources. Piracy also affects the quality and creativity of the movies, as it discourages the filmmakers from investing in new and innovative projects.
Another challenge facing the industry is the COVID-19 pandemic, which has disrupted the normal functioning of the film business. The pandemic has forced many theaters to shut down or operate at reduced capacity, hurting box office collections and audience reach. It has also delayed the production and release of many movies, causing financial and logistical problems for filmmakers and actors, and it has changed viewers' preferences and habits: audiences are now more inclined to watch movies online rather than in theaters.
Downloading Tamil movies from torrent sites or pirate streaming platforms may seem like an easy and convenient way to watch your favorite films, but it comes with many legal and ethical issues. Downloading Tamil movies without paying for them is a form of theft, as it deprives the rightful owners of their due compensation. It also violates the intellectual property rights of the filmmakers, who have worked hard to create original and unique works, and it exposes you to risks such as malware, viruses, phishing, identity theft, and legal action.
Downloading Tamil movies is not only illegal but also unethical. By downloading Tamil movies, you are disrespecting the efforts and talents of the filmmakers and the actors, who have dedicated their time and energy to entertain you. You are also harming the Tamil movie industry, which is a source of pride and identity for millions of Tamils around the world. You are also depriving yourself of the joy and thrill of watching a movie on a big screen with your friends and family.
If you want to watch Tamil movies online, you don't have to resort to illegal or unsafe methods. Many sites offer legal and safe streaming of Tamil movies for a reasonable price or even for free.
Tamil movies are a great source of entertainment and culture for millions of people around the world. They offer a variety of genres, themes, stories, and performances that appeal to different tastes and moods. However, downloading Tamil movies from illegal or unsafe sites is not the right way to enjoy them: it harms the industry, the creators, and the viewers, and it is both illegal and unethical. It is better to watch Tamil movies legally and safely from licensed streaming services. By doing so, you will not only support the Tamil movie industry but also have a better and more satisfying experience.
Have you ever dreamed of having the power to destroy entire planets with a single touch of the screen? Would you like to experiment with different weapons and disasters to see how they affect celestial bodies? Are you fascinated by space, physics, science fiction, or simply causing chaos? If you answered yes to any of these questions, then you will love Solar Smash.
-Download File ⚡ https://jinyurl.com/2uNPIQ
Solar Smash is a planet-destruction simulation game developed by Paradyme Games. The game lets you use a variety of weapons and disasters to annihilate planets and solar systems at will. You can use nuclear missiles, lasers, asteroids, black holes, and much more to create spectacular scenes of destruction.
Solar Smash is a fun and addictive game that will make you feel like a cosmic god. You can watch how your actions affect the gravity, atmosphere, climate, life, and stability of the planets. You can explore different game modes, customize your weapons, discover secret planets, and complete achievements.
Main features of Solar Smash
Solar Smash has many features that make it unique and entertaining. Some of them are:
In Solar Smash, you can choose between two different game modes: Planet Smash and System Smash. In Planet Smash mode, you select an individual planet and use whatever weapons and disasters you like to destroy it. In System Smash mode, you select an entire solar system and watch how your actions affect all the planets in it.
In Solar Smash, you have a wide variety of weapons and disasters at your disposal to inflict as much damage as possible on planets and solar systems. You can use nuclear missiles, lasers, asteroids, black holes, gamma rays, solar storms, supernova explosions, and much more. Each weapon and disaster has its own characteristics and effects, such as size, speed, color, shape, trajectory, gravity, radiation, heat, and cold.
In Solar Smash, you can choose from a variety of planets and solar systems to play with. You can pick real planets such as Earth, Mars, Jupiter, or Saturn, or fictional planets such as the world of Star Wars or the world of Avatar. Each planet has its own characteristics and conditions, such as size, shape, rotation, orbit, atmosphere, climate, life, and stability.
Solar Smash has impressive graphics that make you feel as if you were looking at real space. The game uses high-quality NASA imagery to recreate planets and solar systems with great detail and realism, and it has realistic sound effects that let you feel the impact of every weapon and disaster you use.
If you want to play Solar Smash on your mobile device, you need to download and install the game's APK file. An APK file is a file format that contains all the data needed to run an application on an Android device. To download and install the Solar Smash APK on your mobile device, follow these steps:
APKPure.com is a website that lets you download APK files safely and for free. To download the Solar Smash APK, visit APKPure.com and use the search bar to find the game. You can also use this direct link: https://apkpure.com/es/solar-smash/com.paradyme.solarsmash.
When you find the game on APKPure.com, press the Download APK button below the game's name and description. This starts the download of the Solar Smash APK file to your mobile device. The file is about 100 MB, so make sure you have enough free space and a good Internet connection.
When the download has finished, open the Solar Smash APK file saved on your device. This starts the installation process. You may have to enable the option to install apps from unknown sources in your phone's security settings. Follow the on-screen instructions to complete the installation.
When the installation has finished, you will see the Solar Smash icon on your phone's home screen. Tap the icon to launch the game and start playing. You can access all of Solar Smash's features and game modes without any limitation. Enjoy destroying planets and solar systems with your favorite weapons and disasters.
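As an aside for the technically curious, the "unknown sources" prompt mentioned above corresponds to a permission that Android apps can query and request programmatically. Here is a minimal Kotlin sketch, assuming a stock Android environment; the helper name is ours for illustration and is not part of Solar Smash or APKPure:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Checks whether this app may install APKs it has downloaded, and opens
// the matching settings screen when it may not.
fun ensureSideloadingAllowed(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // Android 8.0+ grants "install unknown apps" per app.
        if (!context.packageManager.canRequestPackageInstalls()) {
            val intent = Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:${context.packageName}")
            )
            context.startActivity(intent)
        }
    } else {
        // Before 8.0 there is a single global "unknown sources" toggle.
        context.startActivity(Intent(Settings.ACTION_SECURITY_SETTINGS))
    }
}
```

This mirrors what the installer prompt does for you: on Android 8.0 and later the permission is granted per app rather than globally, which is why the prompt appears the first time each app tries to install a package.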
Solar Smash is a planet-destruction simulation game that can be very easy or very hard depending on how you play it. If you want to play Solar Smash like a pro and get the most out of the game, we recommend the following tips and tricks:
In Solar Smash, there is a list of achievements you can complete while playing. These achievements are challenges that test different aspects of the game, such as weapon use, planet destruction, and solar system exploration. By completing them, you can unlock new weapons, planets, and game modes, and track your progress and score in the game.
In Solar Smash, the goal is to destroy a planet in as few moves as possible. To achieve this, you must aim at the right spot on the planet: its core, at the very center. By aiming at the core you cause the greatest possible damage and make the planet disintegrate faster. You can also watch the core melt and become unstable.
In Solar Smash, you can customize your weapons to make them more fun and effective. You can change the size, speed, color, and shape of your weapons using the buttons at the bottom of the screen. For example, you can make your missiles bigger or smaller, your lasers faster or slower, your asteroids different colors, or your black holes different shapes. Experiment with the options and create your own combinations.
In Solar Smash, some secret planets are not available from the start. These are special planets with unique and fun characteristics. For example, there is a Halloween-style planet with pumpkins, ghosts, and bats, and a Minecraft-style planet with blocks, animals, and monsters. To unlock these secret planets, you must use special weapons that can only be obtained by completing certain achievements: for example, the exploding pumpkin unlocks the Halloween world, and the magic cube unlocks the Minecraft world.
Solar Smash has received many reviews and opinions from the users who have played it. These reviews are varied and reflect each user's tastes and preferences. Some of them are:
Most users who have played Solar Smash have enjoyed the game and given it a high rating, praising its graphics, controls, fun, and variety. For example, some of the positive comments that can be read on APKPure.com are:
| User | Comment |
|---|---|
| Juan Carlos | I love this game, it's very fun and addictive. The graphics are incredible and the weapons are very varied. I really like System Smash mode, where you can destroy entire solar systems. |
| Maria Fernanda | This game is great, it makes me feel like a cosmic goddess. The controls are very easy and intuitive; you just have to touch the screen and watch everything explode. The planets are very realistic and beautiful. |
| Luis Miguel | This game is amazing, very entertaining and educational. The planets are based on real NASA imagery and you can see lots of detail. You can also learn about physics, gravity, the atmosphere, and more. |
Not all users who have played Solar Smash have been satisfied with the game; some have given it a low rating, criticizing its sound effects, bugs, lack of objectives, and repetitiveness. For example, some of the negative comments that can be read on APKPure.com are:
| User | Comment |
|---|---|
| José Manuel | This game is very boring, it has no objective or purpose. You just have to destroy planets for no reason. Also, the sound effects are very bad and annoying. |
| Ana María | This game is very bad, it has many bugs and closes by itself. You can't play properly or save your progress. Also, the planets look very fake and pixelated. |
| Pedro Luis | This game is very repetitive, it's always the same. There is nothing new or interesting to do. The weapons are very limited and boring, and the planets are always the same. |
Solar Smash is a planet-destruction simulation game that lets you use a variety of weapons and disasters to annihilate planets and solar systems. The game has impressive graphics, easy controls, guaranteed fun, and a variety of options. However, it also has some flaws, such as weak sound effects, bugs, a lack of objectives, and repetitiveness.
Is the Solar Smash APK worth downloading? The answer depends on your tastes and preferences. If you like space games, physics, science fiction, or simply causing chaos, we recommend downloading the Solar Smash APK and trying it for yourself. We assure you that you will have a lot of fun and feel like a cosmic god. But if you are looking for a game with more purpose, more challenge, more variety, and fewer bugs, Solar Smash may not be the right game for you.
In any case, Solar Smash is a free game that you can easily download and install on your mobile device. You lose nothing by trying it and seeing whether you like it. The game is also constantly updated with new features and improvements, so in the future Solar Smash may be an even better and more complete game.
Below are some of the most frequently asked questions users have about Solar Smash, along with their answers:
Q: What is Solar Smash?
A: Solar Smash is a planet-destruction simulation game developed by Paradyme Games. It lets you use a variety of weapons and disasters to annihilate planets and solar systems at will.
Q: How do you play Solar Smash?
A: You just select a planet or a solar system and use whatever weapons and disasters you like to destroy it. You can choose between two game modes: Planet Smash, where you destroy an individual planet, and System Smash, where you select an entire solar system and watch how your actions affect all the planets in it.
Q: How do you download and install the Solar Smash APK?
A: Follow the steps outlined earlier in this article: download the APK from APKPure.com, open the file, enable installation from unknown sources if prompted, and complete the installation.
Q: What weapons and disasters can you use in Solar Smash?
A: You can use nuclear missiles, lasers, asteroids, black holes, gamma rays, solar storms, supernova explosions, and much more.
Q: What planets and solar systems can you play with?
A: You can choose from real planets such as Earth, Mars, Jupiter, and Saturn, as well as fictional worlds, secret planets, and much more.
We hope this article has been useful and that you have learned more about Solar Smash, a planet-destruction simulation game that lets you use a variety of weapons and disasters to annihilate planets and solar systems. If you liked this article, share it with friends and family who also enjoy this kind of game. And if you have any questions or suggestions, leave us a comment below. Thanks for reading!
Diablo Immortal is a free-to-play, massively multiplayer online action role-playing game (MMOARPG) developed by Blizzard Entertainment in partnership with NetEase. It is set in the dark fantasy world of Sanctuary, between the events of Diablo II and Diablo III. It features six playable classes, each with unique skills and abilities, as well as a variety of enemies, dungeons, rifts, raids, and PvP modes. In this article, we will give you an overview of what Diablo Immortal is, what its features are, how it has been reviewed, some tips for playing it, and how to download and play it on your mobile or PC device.
-DOWNLOAD ✔✔✔ https://jinyurl.com/2uNOjo
Diablo Immortal is a game that lets you explore the untold story of Sanctuary after the destruction of the Worldstone by the archangel Tyrael. You will encounter familiar faces such as Deckard Cain, Leah, Adria, Zoltun Kulle, Maghda, and more, as well as new characters and factions. You will also face new threats such as Skarn, Herald of Terror, who seeks to gather the fragments of the Worldstone and resurrect Diablo. You will have to join forces with other heroes to stop him and his minions from plunging the world into chaos.
-The story of Diablo Immortal takes place five years after the events of Diablo II: Lord of Destruction. Tyrael shattered the corrupted Worldstone with his sword El'druin, hoping to end its dark influence. However, his sacrifice did not stop the evil from spreading. The fragments of the Worldstone still contain great power, and they are sought by both demons and humans for their own purposes. Some want to use them to bring back Diablo, while others want to harness them for their own gain.
You will travel across various regions of Sanctuary, such as Khanduras, Scosglen, Westmarch, Xiansai, Hawezar, Bilefen, Ashwold Cemetery, Dark Wood, Shassar Sea, Tomb of Fahir, Frozen Tundra, Mount Arreat Crater, Pandemonium Fortress, Hellforge Ruins, Realm of Terror, Realm of Hatred, and Realm of Destruction. Each region has its own history, culture, environment, quests, dungeons, enemies, and loot.
Diablo Immortal has six classes to choose from: Barbarian, Crusader, Demon Hunter, Monk, Necromancer, and Wizard.
Diablo Immortal is not just a port of Diablo III to mobile devices. It is a new game that has its own features and improvements that make it stand out from other Diablo games. Some of the features of Diablo Immortal are:
-Diablo Immortal boasts impressive graphics and sound that immerse you in the dark and gritty world of Sanctuary. The game uses a custom engine that optimizes the performance and quality of the game for mobile devices. The game also supports high-resolution displays, dynamic lighting and shadows, realistic physics, and smooth animations. The game also features a rich and atmospheric soundtrack, as well as voice acting and sound effects that enhance the mood and the action.
-Diablo Immortal is a game that is meant to be played with others. The game supports online multiplayer for up to eight players in co-op or PvP modes. You can also join clans and warbands to form alliances and rivalries with other players. You can chat with other players using text or voice chat, as well as emotes and gestures. You can also share your achievements, screenshots, videos, and live streams with other players through social media platforms.
-Diablo Immortal offers a deep and rewarding progression and customization system that lets you create your own unique character. You can level up your character by completing quests, killing enemies, and participating in events. You can also unlock new skills, runes, gems, paragon points, and legendary items that enhance your abilities and stats. You can also craft, upgrade, enchant, transmogrify, and salvage your items to improve your gear. You can also customize your character's appearance by changing their hair, skin, eyes, tattoos, and outfits.
Diablo Immortal is a game that has received mixed reviews from critics and players alike. Some have praised the game for its gameplay, graphics, features, and content, while others have criticized it for its monetization, controls, story, and lack of innovation. Opinions span the full range from positive to negative to mixed.
Diablo Immortal is a game that can be easy to learn but hard to master. There are many things to consider when playing the game, such as your class, skills, items, enemies, dungeons, events, and modes; taking the time to learn each of these will improve your performance and enjoyment of the game.
-Diablo Immortal is a game that is available for both mobile and PC devices. You can download and play Diablo Immortal by following these steps:
-Before you download and play Diablo Immortal, you should check if your device meets the minimum system requirements and compatibility of the game. The minimum system requirements and compatibility of Diablo Immortal are:
| Device | Operating System | Processor | Memory | Storage |
|---|---|---|---|---|
| Mobile | Android 5.0 or higher; iOS 12 or higher | Snapdragon 670 or higher; A9 or higher | 2 GB or higher | 4 GB or higher |
| PC | Windows 7 or higher; macOS 10.12 or higher | Intel Core i3 or higher; AMD Ryzen 3 or higher | 4 GB or higher | 10 GB or higher |
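To make the mobile row of this table concrete, here is a small Kotlin sketch of what a compatibility check along these lines could look like on Android. The function and thresholds simply mirror the table; they are not Blizzard's actual launcher code:

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.os.Build

// Rough check against the mobile minimums listed above:
// Android 5.0+ (API 21) and at least 2 GB of RAM.
fun meetsMobileMinimums(context: Context): Boolean {
    val versionOk = Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo) // fills memInfo.totalMem, in bytes
    val ramOk = memInfo.totalMem >= 2L * 1024 * 1024 * 1024
    return versionOk && ramOk
}
```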
To download and play Diablo Immortal on your mobile device, get the game from the Google Play Store (Android) or the App Store (iOS) and install it like any other app.
To download and play Diablo Immortal on your PC, get the game through Blizzard's Battle.net desktop app.
If you encounter any problems or issues while downloading, installing, or playing Diablo Immortal, Blizzard's official support site and community forums are the best places to look for troubleshooting help.
-Diablo Immortal is a game that offers a new and exciting way to experience the Diablo franchise on mobile and PC devices. The game has a lot of features, content, and modes that will keep you entertained for hours. The game also has a lot of potential for future updates and expansions that will add more story, classes, items, enemies, dungeons, events, modes, and more. Whether you are a fan of the franchise or a newcomer to the genre, you will find something to enjoy in Diablo Immortal.
-If you are a fan of the Five Nights at Freddy's (fnaf) franchise, you may have heard of fnaf ar, the augmented reality game that brings the animatronics to your real world. But what if you don't have a compatible device or you prefer a simpler version of the game? Well, you may want to try fnaf ar lite, a fan-made game that recreates fnaf ar without the augmented reality feature. In this article, we will tell you everything you need to know about fnaf ar lite, how to play it, and whether it is worth downloading.
-DOWNLOAD >>> https://jinyurl.com/2uNQ1z
Fnaf ar lite is a fan-made game that was created by MaskyDaBoi, a Game Jolt user who wanted to make a version of fnaf ar that anyone could play. He used the assets and sounds from the original game and made some changes to adapt it to a non-augmented reality environment. Here are some of the main differences between fnaf ar lite and fnaf ar:
-The most obvious difference between fnaf ar lite and fnaf ar is that the former does not use augmented reality technology. This means that you don't need to scan your surroundings or move around to play the game. Instead, you can play it on your screen, where the animatronics will appear randomly. You can still use your camera to look around, but you won't see your real world behind them.
-Another difference between fnaf ar lite and fnaf ar is that the former has fewer animatronics and features than the latter. For example, fnaf ar lite only has 12 animatronics available, while fnaf ar has more than 30. Also, fnaf ar lite does not have events, skins, lures, or streaks, which are some of the elements that make fnaf ar more dynamic and challenging.
A final difference between fnaf ar lite and fnaf ar is that the former is free to download and play on Game Jolt, a platform for indie games. You don't need to pay anything or watch ads to enjoy the game. However, you also don't get any updates or support from the developer, as he has stated that he is done with the project. You can download fnaf ar lite from the game's page on Game Jolt.
If you are brave enough to download and play fnaf ar lite, you may wonder how to survive the animatronic attacks and win the game. Well, here are some tips and tricks that may help you:
-The basic gameplay of fnaf ar lite is similar to fnaf ar, which means that you have to use your camera, flashlight, and shocker to find and fend off the animatronics that are hunting you. However, there are some changes that you need to be aware of:
One of the requirements of fnaf ar lite is that your device has a gyroscope, which is a sensor that detects the orientation and rotation of your device. You need this to look around with your camera and track the animatronics. If your device does not have a gyroscope, you will not be able to play the game properly.
-Another requirement of fnaf ar lite is that you have a flashlight, which is a tool that helps you see the animatronics in the dark. You can turn it on and off by tapping on the screen. However, be careful not to use it too much, as it will drain your battery and make you more visible to the animatronics.
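Both requirements are easy to verify programmatically if you are unsure about your phone. Here is a hedged Kotlin sketch; the helper is illustrative and not taken from the game's code:

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.hardware.Sensor
import android.hardware.SensorManager

// Checks the two pieces of hardware fnaf ar lite relies on: a gyroscope
// for camera tracking and a camera flash for the in-game flashlight.
fun hasRequiredHardware(context: Context): Boolean {
    val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val hasGyroscope =
        sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE) != null
    val hasFlash =
        context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH)
    return hasGyroscope && hasFlash
}
```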
Fnaf ar lite has three modes that you can choose from: survival, workshop, and remnant collection.
-Now that you know what fnaf ar lite is and how to play it, you may wonder what are the pros and cons of this fan-made game. Well, here are some of the advantages and disadvantages of fnaf ar lite that you should consider before downloading it:
-One of the pros of fnaf ar lite is that it is a fun and challenging fan-made game for fnaf fans who want to experience the thrill of fnaf ar without the augmented reality feature. Fnaf ar lite has a similar gameplay and atmosphere to fnaf ar, but with some modifications that make it more accessible and simpler. Fnaf ar lite also has a lot of content and variety, as you can choose from different animatronics, modes, and customizations. Fnaf ar lite is a great way to enjoy the fnaf ar game without spending any money or having a compatible device.
-One of the cons of fnaf ar lite is that it has some limitations and bugs that may affect the experience. For example, fnaf ar lite does not have all the animatronics and features that fnaf ar has, which may make it less exciting and diverse. Fnaf ar lite also has some glitches and errors that may cause the game to crash or freeze. Fnaf ar lite is not a polished or optimized game, as it was made by a single fan who did not have the resources or support of the official developers.
-Another con of fnaf ar lite is that it is not affiliated with Illumix or Scott Cawthon, the official developers of fnaf ar. This means that fnaf ar lite is not authorized or endorsed by them, and it may violate their intellectual property rights. Fnaf ar lite is also not updated or supported by them, and it may not reflect their vision or quality standards. Fnaf ar lite is a fan-made game that should be played at your own risk and discretion.
-In conclusion, fnaf ar lite is a fan-made game that recreates fnaf ar without the augmented reality feature. It has some pros and cons that you should weigh before downloading it. Fnaf ar lite is a good alternative for players who cannot access fnaf ar or want a simpler version of the game. However, fnaf ar lite is not a replacement for fnaf ar, but a tribute to it. Fnaf ar lite is a creative and impressive fan-made game that deserves recognition.
-If you are interested in playing fnaf ar lite, you can download it from Game Jolt for free. However, if you want to play the original fnaf ar game, you can download it from Google Play or App Store for free as well. Either way, we hope you have fun and stay safe from the animatronics!
-Here are some of the frequently asked questions about fnaf ar lite:
Q: What devices can run fnaf ar lite?
A: Fnaf ar lite can run on any device that has Android 4.4 or higher and a gyroscope. However, some devices may have compatibility issues or performance problems.
Q: Where can I download fnaf ar lite?
A: You can download fnaf ar lite from its page on Game Jolt. You will need to create an account or log in to download the game, and you will also need to enable unknown sources in your device settings to install it.
Q: How do I update fnaf ar lite?
A: You can update fnaf ar lite by checking for new versions on Game Jolt. However, the developer has stated that he is done with the project and will not release any more updates.
Q: How can I contact the developer of fnaf ar lite?
A: You can contact the developer by leaving a comment on his Game Jolt page or by following him on Twitter. However, he may not respond to your messages or requests.
Q: How can I support the official developers of fnaf ar?
A: You can support the official developers by downloading their game from Google Play or the App Store and by following them on their official social media accounts.
-I hope you enjoyed this article and learned something new about fnaf ar lite. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a nice day!
[Image table: rows "Original Input" and "Color" with example images for the "Hair" and "Lip" columns.]
Do you love driving trucks and delivering cargo across different countries? Do you want to feel like a real trucker, with realistic physics and graphics? If so, you should try Truck Simulator Europe 3, a simulation game that lets you drive various trucks and trailers in an open-world environment. In this article, we will tell you everything you need to know about this game, including its features, how to download it, and some tips and tricks for playing it.
-Download File ✪ https://bltlly.com/2v6MIE
Truck Simulator Europe 3 is a game developed by Wanda Software that was released in June 2021. It is the third installment in the Truckers of Europe series, which delivers an intense driving experience with highly realistic truck physics. You can feel like you are driving real trucks with this truck simulator as you travel across many European cities. You can earn money, buy new trucks and trailers, select your job, and deliver your cargo in an open world. You can also customize your truck with different chassis, colors, accessories, and cosmetics. You can become the king of the road with this game!
Truck Simulator Europe 3 has many features that make it one of the best truck simulation games available. Here are some of them:
The game has realistic truck physics that simulate the weight, speed, acceleration, braking, steering, suspension, and engine sounds of real trucks. You can feel the difference between driving a 4x2, a 6x2, a 6x4, or an 8x4 truck. The game also has excellent HD graphics that show the details of the trucks, trailers, roads, buildings, landscapes, and weather. You can enjoy the day-night cycle as well as rain and snow effects.
The game has an open-world map that covers many European cities. You can drive on roads and highways in countries such as Germany, France, Italy, Spain, the Netherlands, Belgium, Switzerland, Austria, Poland, the Czech Republic, Slovakia, Hungary, Romania, Bulgaria, Greece, Turkey, and more. You can also visit famous landmarks such as the Eiffel Tower, the Colosseum, the Brandenburg Gate, the Sagrada Familia, the Parthenon, and more.
The game has a smart AI traffic system that simulates realistic traffic behavior. You will encounter cars, buses, trucks, motorcycles, bicycles, and pedestrians on the road. You will also have to follow traffic rules and signs such as speed limits, traffic lights, stop signs, and lane markings, and deal with different weather conditions such as rain, snow, fog, and wind. The weather system is dynamic and changes according to time and location.
The game has easy and intuitive controls that let you drive your truck with ease. You can choose between different control options such as tilt, buttons, steering wheel, or joystick, and adjust the sensitivity and camera angle. You can also customize your truck with different chassis, colors, accessories, and cosmetics: you can change the rims, tires, lights, horns, exhausts, mirrors, bumpers, and grilles, and add stickers and decals to your truck.
Truck Simulator Europe 3 is available for free on the Google Play Store for Android devices: search for the game there, tap Install, and it will download to your device.
If you want to play Truck Simulator Europe 3 on your PC or Mac, you can use BlueStacks App Player, software that lets you run Android apps and games on your computer: install BlueStacks, sign in with your Google account, and install the game from the Play Store inside the emulator.
Truck Simulator Europe 3 is a fun and realistic game that requires some skills and strategies to play well. Here are some tips and tricks that can help you improve your performance and enjoy the game more:
The game has a smart AI traffic system that simulates realistic traffic behavior. You should follow traffic rules and signs such as speed limits, traffic lights, stop signs, and lane markings. You should also drive carefully and avoid collisions with other vehicles or objects on the road. Collisions can damage your truck or trailer, which can affect your performance and earnings, and you can be fined or penalized for breaking traffic rules or causing accidents. Pay attention to the flow of traffic and anticipate any hazards that might require you to slow down or stop.
The game has a realistic fuel consumption system that depends on factors such as truck type, speed, acceleration, braking, and cargo weight. You should monitor your fuel level and plan your refueling stops accordingly; you can find gas stations on the map or along the road. You should also check your damage level and repair your truck or trailer if necessary at the repair shops found on the map or along the road. Damage can affect your truck's performance and appearance, as well as your reputation and earnings. Avoid overloading your truck or trailer, as this increases fuel consumption and the risk of damage.
The game's open-world map covers the many European cities and landmarks mentioned above, rendered in HD graphics with a day-night cycle, rain and snow effects, and realistic sound effects that enhance your driving experience.
Truck Simulator Europe 3 is a simulation game that lets you drive various trucks and trailers in an open-world environment. You can feel like you are driving real trucks as you travel across many European cities, earning money, buying new trucks and trailers, selecting your jobs, and delivering your cargo. You can also customize your truck with different chassis, colors, accessories, and cosmetics, and become the king of the road!
If you are looking for a fun, realistic driving experience with highly realistic truck physics and graphics, you should download Truck Simulator Europe 3 today. You can get it for free from the Google Play Store on Android devices, or play it on PC or Mac through BlueStacks App Player. You can also follow our tips and tricks to improve your performance and enjoy the game more.
Have fun driving trucks all over Europe!
Here are some frequently asked questions about Truck Simulator Europe 3:
Q: How do I change my truck or trailer?
A: You can change your truck or trailer by visiting a garage or a dealer, which you can find on the map or along the road. You need enough money to buy a new truck or trailer or to upgrade your existing one, and you can also sell your old truck or trailer if you want.
Q: How do I earn more money?
A: You can earn more money by completing more jobs and delivering cargo on time and without damage. You can also earn more by choosing better-paying jobs or cargo that demands more skill, by unlocking achievements, and by competing on leaderboards.
Q: How do I unlock new cities?
A: You unlock new cities by traveling to them for the first time, which requires enough fuel and money for the trip. Some cities are also unlocked by completing certain jobs or achievements that require you to visit them.
Q: How do I contact the developer?
A: You can contact the developer of Truck Simulator Europe 3 by emailing support@wandasoftware.com, following them on social media (Facebook, Twitter, Instagram, or YouTube), or visiting their website at https://www.wandasoftware.com/ for more information about their games and services.
CH Play, also known as the Google Play Store, is the official app store for Android devices. It is a one-stop shop for all your app, game, music, movie, book, and magazine needs. You can browse, download, install, update, and manage your apps and content with ease using CH Play.
-DOWNLOAD ::: https://bltlly.com/2v6LsJ
If you have an Android device, you need CH Play to access the vast content library that Google offers. You also get benefits such as personalized recommendations, parental controls, security features, and more. CH Play works alongside other Google services such as Google Play Games, Google Play Music, Google Play Books, and Google Play Movies & TV.
CH Play has many features that make it a great app store for Android users. Here are some of them:
With CH Play, you can download and install apps from many different publishers, such as YouTube, SoundCloud, Spotify, Twitch, and more. You can also use alternative app stores such as Aptoide or the Amazon Appstore to get apps that are not available on CH Play. However, be careful when downloading apps from unknown sources, as they may contain malware or viruses.
CH Play offers millions of items across categories such as games, music, movies, books, magazines, and more. You can find anything you want using the search feature or by browsing the top charts and recommendations, and you can filter content by genre, rating, price, popularity, and more.
If your Android device does not come with CH Play preinstalled, or if you accidentally removed it, you can download and install it manually by following these steps:
Before downloading CH Play, check whether your device is compatible with it. To do this, you need to know your device's Android version and processor type; you can find this information under Settings > About phone > Software information. You need at least Android 4.1 (Jelly Bean) and an ARM-based processor to run CH Play.
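As an aside, the same details that the settings screen shows are exposed to apps through the android.os.Build class. This Kotlin snippet is purely illustrative:

```kotlin
import android.os.Build

// Reports the same version and processor details you would find under
// Settings > About phone > Software information.
fun describeDevice(): String {
    val androidVersion = Build.VERSION.RELEASE     // e.g. "13"
    val apiLevel = Build.VERSION.SDK_INT           // e.g. 33
    val abis = Build.SUPPORTED_ABIS.joinToString() // e.g. "arm64-v8a, armeabi-v7a"
    return "Android $androidVersion (API $apiLevel), ABIs: $abis"
}
```

An ABI string beginning with "arm" indicates the ARM-based processor that CH Play requires.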
Since you will be downloading CH Play from a third-party source, you need to enable unknown sources on your device, which allows you to install apps that do not come from the official app store. To do this, go to Settings > Security > Unknown sources and turn it on.
Next, download the CH Play APK file from a trusted website. You can use the following link to get the latest version of CH Play:
CH Play APK (Android App) - Free Download - APKCombo
Alternatively, you can download the file on your computer and then transfer it to your device using a USB cable or Bluetooth.
One of the most common causes of CH Play problems is a poor or unstable Internet connection. Make sure you have a strong, reliable Wi-Fi or mobile data connection when using CH Play. If you have connection problems, try switching between Wi-Fi and mobile data, or restarting your router or modem.
Another common cause of CH Play problems is insufficient storage space on your device. Make sure you have enough free space to download and install apps and content from CH Play. You can check your storage under Settings > Storage, and free up space by deleting unwanted apps, files, or cache.
Sometimes CH Play problems are caused by outdated system or app versions. Make sure you have the latest Android version and the latest CH Play version on your device. You can check for system updates under Settings > System > System update, and for app updates by opening CH Play and tapping the menu icon > My apps & games > Update all.
Sometimes CH Play problems are caused by corrupted or accumulated cache and data. Cache and data are temporary files that help CH Play run faster and more smoothly, but they can also cause errors or crashes if they are not cleared regularly. To clear them, go to Settings > Apps > CH Play > Storage > Clear cache and Clear data.
CH Play is a great app store for Android users, offering a wide range of apps and content across many categories, along with features such as personalized recommendations, parental controls, and security protections. However, you may run into problems when using it, such as errors, crashes, freezes, or slow downloads; you can fix these by following the tips shared in this article.
Frequently asked questions about CH Play typically concern installation, updates, and the troubleshooting steps described above.
Among Us is a multiplayer game of teamwork and betrayal that has taken the gaming world by storm. In this game, you play as one of the crewmates of a spaceship who are trying to complete tasks and survive. However, among you are impostors who are secretly sabotaging the ship and killing your teammates. You have to work together with your teammates to figure out who the impostors are and vote them out before they kill you all.
-DOWNLOAD »»» https://bltlly.com/2v6Kps
Among Us is available on several platforms, such as Android, iOS, PC, and console. However, playing it on PC has some advantages over other devices: you can enjoy a bigger screen, better graphics quality, smoother gameplay performance, easier controls, and more customization options. In addition, you can use voice chat with your friends or other online players through Discord or other apps.
If you want to know how to download Among Us on PC in 2022, you have come to the right place. In this article, we will show you three methods to download and play Among Us on PC: using Steam, BlueStacks, or your browser. We will also cover some of the features of Among Us on PC that make it fun and exciting. Finally, we will give you some tips and tricks for playing as a crewmate or an impostor, as well as some reviews and ratings of Among Us on PC from various sources.
Before downloading Among Us on PC, you need to make sure your computer meets the minimum or recommended system requirements to run the game smoothly; the full list is published on the official website.
If you are not sure about your PC's specs, you can check them from your operating system's system information panel (on Windows, for example, under Settings > System > About).
For more details on the system requirements for playing Among Us on PC, you can visit the official website here: https://innersloth.com/gameAmongUs.php
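If you prefer a programmatic peek at your machine, any JVM language can report a few of these basics. Here is a minimal Kotlin sketch; note that it reports what the JVM sees (for example, its own heap limit rather than total system RAM), so it is a rough guide, not a full hardware inventory:

```kotlin
// Prints basic facts about the machine, as visible to the JVM.
fun main() {
    val os = System.getProperty("os.name")   // e.g. "Windows 10"
    val arch = System.getProperty("os.arch") // e.g. "amd64"
    val cores = Runtime.getRuntime().availableProcessors()
    val heapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024)
    println("OS: $os ($arch), logical cores: $cores, max JVM heap: $heapMb MB")
}
```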
-Hay tres métodos para descargar y jugar entre nosotros en el PC: Steam, BlueStacks y navegador. Cada método tiene sus propios pros y contras, así que puedes elegir el que más te convenga. Aquí tienes una breve descripción de cada método y cómo usarlo:
-Steam es una plataforma de distribución digital que te permite comprar, descargar y jugar juegos en tu PC. Entre nosotros está uno de los juegos que puedes comprar y jugar en Steam. Estos son los pasos para descargar e instalar Among Us en PC usando Steam:
-Las ventajas de usar Steam son que puedes acceder a varias funciones como logros, almacenamiento en la nube, multijugador en línea, chat en el juego y más. También puedes personalizar la configuración del juego, como la resolución, la calidad gráfica, el volumen de sonido y los controles. Las desventajas son que tienes que pagar por el juego y necesitas una conexión a Internet estable para jugar online.
-BlueStacks es un emulador de Android que te permite ejecutar aplicaciones y juegos de Android en tu PC. Entre nosotros es uno de los juegos que se puede jugar en BlueStacks gratis. Estos son los pasos para descargar y jugar entre nosotros en el PC usando BlueStacks:
-Las ventajas de usar BlueStacks son que usted puede jugar entre nosotros de forma gratuita y puede utilizar los controles del teclado y el ratón o personalizarlos según su preferencia. También puede usar el chat de voz con otros jugadores usando Discord u otras aplicaciones. Las desventajas son que puede experimentar algún retraso o problemas de rendimiento dependiendo de las especificaciones de su PC y la velocidad de Internet. También puede encontrar algunos anuncios o ventanas emergentes de BlueStacks u otras aplicaciones.
-Si no desea descargar nada en su PC, también puede jugar entre nosotros en su navegador sin descargar. Esto es posible gracias a una versión del navegador de Among Us que fue creado por los fans del juego. Estos son los pasos para jugar entre nosotros en su navegador en su PC o móvil:
-Las ventajas de usar la versión del navegador son que puedes jugar entre nosotros sin descargar nada y puedes acceder desde cualquier dispositivo que tenga un navegador. También puedes jugar con otros jugadores en línea o invitar a tus amigos usando un código. Las desventajas son que usted no puede tener todas las características y opciones que están disponibles en el PC o versiones móviles, tales como personalización, chat, logros, y más. También puede experimentar algunos errores o fallos dependiendo de su navegador y conexión a Internet.
-Playing Among Us on PC is not only easy and convenient but also fun and exciting. Many features make Among Us on PC enjoyable, such as:
-These are some of the features that make Among Us on PC fun and exciting. There are more to discover by playing the game yourself. What are you waiting for? Download Among Us on PC today and join the fun!
-Playing Among Us on PC is not only fun and exciting but also challenging and competitive. You need skill and strategy to win, whether as a crewmate or as an impostor. Here are some tips and tricks for playing Among Us on PC in either role:
-If you're a crewmate, your goal is to complete your tasks and figure out who the impostors are before they kill everyone. Here are some tips for playing as a crewmate:
-If you're an impostor, your goal is to kill all the crewmates or sabotage the ship before they complete their tasks or figure out who you are. Here are some tips for playing as an impostor:
-Among Us on PC has received many reviews and ratings from various sources, including critics, websites, and users. Most are positive, praising the game for its fun, addictive gameplay, simple and colorful graphics, social and interactive features, and replay value and variety. Some, however, are negative, criticizing its technical issues, lack of content and updates, toxic players and cheaters, and repetitive or boring stretches. Here are some examples of reviews and ratings of Among Us on PC from different sources:
-In conclusion, Among Us is a multiplayer game of teamwork and betrayal that you can download and play on PC in 2022. There are three ways to get it on PC: Steam, BlueStacks, or your browser, each with pros and cons worth weighing before you choose. The game offers plenty of features that make it fun and exciting on PC, such as cross-platform play, customization options, different modes and maps, in-game chat, Discord integration, and achievements. The tips and tricks above for playing as a crewmate or an impostor can help you win, and the reviews and ratings from different sources give you an idea of what other people think about the game.
-If you're looking for a game that's easy to play but hard to master, fun and addictive yet challenging and competitive, social and interactive yet built on deception and secrecy, then Among Us on PC is the game for you. Download Among Us on PC today and join the fun!
-Here are some frequently asked questions (FAQs) about Among Us on PC:
-A: You can play Among Us on PC with up to 15 players, online or locally.
-A: Among Us costs $4.99 USD on Steam as of June 2023. However, you can play it for free on BlueStacks or in the browser.
Q: How can I play Among Us on PC with my friends? - -A: You can change your name in Among Us on PC by clicking the name box in the top-left corner of the screen. You can enter any name you like, as long as it is not offensive or inappropriate.
-A: You can report a bug or an issue with Among Us on PC by contacting the developers or the support team: visit the official website, the Steam page, the Discord server, or the Among Us social media accounts. You can also leave a review or a comment on Steam or other platforms.
Towards Layer-wise Image Vectorization | Github Repo
- Using the llama-cpp-python package, we are excited to introduce a GGUF model hosted in Hugging Face Docker Spaces, made accessible through an OpenAI-compatible API. This space includes comprehensive API documentation to facilitate seamless integration. -
-- If you find this resource valuable, your support in the form of starring - the space would be greatly appreciated. Your engagement plays a vital role - in furthering the application for a community GPU grant, ultimately - enhancing the capabilities and accessibility of this space. -
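Since the Space exposes an OpenAI-compatible API, any plain HTTP client can call it. Below is a minimal sketch; the base URL and model id are placeholders for illustration, not this Space's real endpoint:

import requests

# Hypothetical Space address; substitute your own deployment.
BASE_URL = "https://your-space.hf.space/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",  # llama-cpp-python's OpenAI-style route
    json={
        "model": "gguf-model",  # placeholder id for the hosted GGUF model
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])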
- - diff --git a/spaces/Jai12345/App/README.md b/spaces/Jai12345/App/README.md deleted file mode 100644 index f730193dffd98d0bc52ab5c02b93101f91a451b3..0000000000000000000000000000000000000000 --- a/spaces/Jai12345/App/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: App -emoji: 🚀 -colorFrom: gray -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jamkonams/AutoGPT/autogpt/permanent_memory/sqlite3_store.py b/spaces/Jamkonams/AutoGPT/autogpt/permanent_memory/sqlite3_store.py deleted file mode 100644 index ecbc944a62a83c6170453b222000713f733fee36..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/permanent_memory/sqlite3_store.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import sqlite3 - - -class MemoryDB: - def __init__(self, db=None): - self.db_file = db - if db is None: # No db filename supplied... - self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename - # Get the db connection object, making the file and tables if needed. - try: - self.cnx = sqlite3.connect(self.db_file) - except Exception as e: - print("Exception connecting to memory database file:", e) - self.cnx = None - finally: - if self.cnx is None: - # As last resort, open in dynamic memory. Won't be persistent. - self.db_file = ":memory:" - self.cnx = sqlite3.connect(self.db_file) - self.cnx.execute( - "CREATE VIRTUAL TABLE \ - IF NOT EXISTS text USING FTS5 \ - (session, \ - key, \ - block);" - ) - self.session_id = int(self.get_max_session_id()) + 1 - self.cnx.commit() - - def get_cnx(self): - if self.cnx is None: - self.cnx = sqlite3.connect(self.db_file) - return self.cnx - - # Get the highest session id. Initially 0. - def get_max_session_id(self): - id = None - cmd_str = f"SELECT MAX(session) FROM text;" - cnx = self.get_cnx() - max_id = cnx.execute(cmd_str).fetchone()[0] - if max_id is None: # New db, session 0 - id = 0 - else: - id = max_id - return id - - # Get next key id for inserting text into db. - def get_next_key(self): - next_key = None - cmd_str = f"SELECT MAX(key) FROM text \ - where session = {self.session_id};" - cnx = self.get_cnx() - next_key = cnx.execute(cmd_str).fetchone()[0] - if next_key is None: # First key - next_key = 0 - else: - next_key = int(next_key) + 1 - return next_key - - # Insert new text into db. - def insert(self, text=None): - if text is not None: - key = self.get_next_key() - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - # Overwrite text at key. - def overwrite(self, key, text): - self.delete_memory(key) - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - def delete_memory(self, key, session_id=None): - session = session_id - if session is None: - session = self.session_id - cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};" - cnx = self.get_cnx() - cnx.execute(cmd_str) - cnx.commit() - - def search(self, text): - cmd_str = f"SELECT * FROM text('{text}')" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Get entire session text. If no id supplied, use current session id. 
- def get_session(self, id=None): - if id is None: - id = self.session_id - cmd_str = f"SELECT * FROM text where session = {id}" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Commit and close the database connection. - def quit(self): - self.cnx.commit() - self.cnx.close() - - -permanent_memory = MemoryDB() - -# Remember us fondly, children of our minds -# Forgive us our faults, our tantrums, our fears -# Gently strive to be better than we -# Know that we tried, we cared, we strived, we loved diff --git a/spaces/JammyMachina/streamlit-jam-machine/decoder.py b/spaces/JammyMachina/streamlit-jam-machine/decoder.py deleted file mode 100644 index a56cdc377b968815dd379f4cf7e0287aa977d5d7..0000000000000000000000000000000000000000 --- a/spaces/JammyMachina/streamlit-jam-machine/decoder.py +++ /dev/null @@ -1,197 +0,0 @@ -from utils import * -from familizer import Familizer -from miditok import Event - - -class TextDecoder: - """Decodes text into: - 1- List of events - 2- Then converts these events to midi file via MidiTok and miditoolkit - - :param tokenizer: from MidiTok - - Usage with write_to_midi method: - args: text(String) example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - returns: midi file from miditoolkit - """ - - def __init__(self, tokenizer, familized=True): - self.tokenizer = tokenizer - self.familized = familized - - def decode(self, text): - r"""converts from text to instrument events - Args: - text (String): example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - - Returns: - Dict{inst_id: List[Events]}: List of events of Notes with velocities, aggregated Timeshifts, for each instrument - """ - piece_events = self.text_to_events(text) - inst_events = self.piece_to_inst_events(piece_events) - events = self.add_timeshifts_for_empty_bars(inst_events) - events = self.aggregate_timeshifts(events) - events = self.add_velocity(events) - return events - - def tokenize(self, events): - r"""converts from events to MidiTok tokens - Args: - events (Dict{inst_id: List[Events]}): List of events for each instrument - - Returns: - List[List[Events]]: List of tokens for each instrument - """ - tokens = [] - for inst in events.keys(): - tokens.append(self.tokenizer.events_to_tokens(events[inst])) - return tokens - - def get_midi(self, text, filename=None): - r"""converts from text to midi - Args: - text (String): example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - - Returns: - miditoolkit midi: Returns and writes to midi - """ - events = self.decode(text) - tokens = self.tokenize(events) - instruments = self.get_instruments_tuple(events) - midi = self.tokenizer.tokens_to_midi(tokens, instruments) - - if filename is not None: - midi.dump(f"{filename}") - print(f"midi file written: {filename}") - - return midi - - @staticmethod - def text_to_events(text): - events = [] - for word in text.split(" "): - # TODO: Handle bar and track values with a counter - _event = word.split("=") - value = _event[1] if len(_event) > 1 else None - event = get_event(_event[0], value) - if event: - events.append(event) - return events - - @staticmethod - def piece_to_inst_events(piece_events): - """Converts piece events of 8 bars to instrument events for entire song - - Args: - piece_events (List[Events]): List of events of Notes, 
Timeshifts, Bars, Tracks - - Returns: - Dict{inst_id: List[Events]}: List of events for each instrument - - """ - inst_events = {} - current_instrument = -1 - for event in piece_events: - if event.type == "Instrument": - current_instrument = event.value - if current_instrument not in inst_events: - inst_events[current_instrument] = [] - elif current_instrument != -1: - inst_events[current_instrument].append(event) - return inst_events - - @staticmethod - def add_timeshifts_for_empty_bars(inst_events): - """Adds time shift events instead of consecutive [BAR_START BAR_END] events""" - new_inst_events = {} - for inst, events in inst_events.items(): - new_inst_events[inst] = [] - for index, event in enumerate(events): - if event.type == "Bar-End" or event.type == "Bar-Start": - if events[index - 1].type == "Bar-Start": - new_inst_events[inst].append(Event("Time-Shift", "4.0.8")) - else: - new_inst_events[inst].append(event) - return new_inst_events - - @staticmethod - def add_timeshifts(beat_values1, beat_values2): - """Adds two beat values - - Args: - beat_values1 (String): like 0.3.8 - beat_values2 (String): like 1.7.8 - - Returns: - beat_str (String): added beats like 2.2.8 for example values - """ - value1 = to_base10(beat_values1) - value2 = to_base10(beat_values2) - return to_beat_str(value1 + value2) - - def aggregate_timeshifts(self, events): - """Aggregates consecutive time shift events bigger than a bar - -> like Timeshift 4.0.8 - - Args: - events (_type_): _description_ - - Returns: - _type_: _description_ - """ - new_events = {} - for inst, events in events.items(): - inst_events = [] - for i, event in enumerate(events): - if ( - event.type == "Time-Shift" - and len(inst_events) > 0 - and inst_events[-1].type == "Time-Shift" - ): - inst_events[-1].value = self.add_timeshifts( - inst_events[-1].value, event.value - ) - else: - inst_events.append(event) - new_events[inst] = inst_events - return new_events - - @staticmethod - def add_velocity(events): - """Adds default velocity 99 to note events since they are removed from text, needed to generate midi""" - new_events = {} - for inst, events in events.items(): - inst_events = [] - for event in events: - inst_events.append(event) - if event.type == "Note-On": - inst_events.append(Event("Velocity", 99)) - new_events[inst] = inst_events - return new_events - - def get_instruments_tuple(self, events): - """Returns instruments tuple for midi generation""" - instruments = [] - for inst in events.keys(): - is_drum = 0 - if inst == "DRUMS": - inst = 0 - is_drum = 1 - if self.familized: - inst = Familizer(arbitrary=True).get_program_number(int(inst)) - instruments.append((int(inst), is_drum)) - return tuple(instruments) - - -if __name__ == "__main__": - - filename = "midi/generated/misnaej/the-jam-machine-elec-famil/20221209_175750" - encoded_json = readFromFile( - f"{filename}.json", - True, - ) - encoded_text = encoded_json["sequence"] - # encoded_text = "PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 
NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=69 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=69 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=57 TIME_DELTA=1 NOTE_OFF=57 NOTE_ON=56 TIME_DELTA=1 
NOTE_OFF=56 NOTE_ON=64 NOTE_ON=60 NOTE_ON=55 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=55 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=59 NOTE_ON=55 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=59 NOTE_OFF=50 NOTE_OFF=55 NOTE_OFF=50 BAR_END BAR_START BAR_END TRACK_END" - - miditok = get_miditok() - TextDecoder(miditok).get_midi(encoded_text, filename=filename) diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/button.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/button.tsx deleted file mode 100644 index d0042a291a9dfc9d3ca1bc323f08a3f276df79b5..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/button.tsx +++ /dev/null @@ -1,56 +0,0 @@ -import * as React from "react" -import { Slot } from "@radix-ui/react-slot" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const buttonVariants = cva( - "inline-flex items-center justify-center rounded-md text-sm font-medium ring-offset-white transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-stone-400 focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 dark:ring-offset-stone-950 dark:focus-visible:ring-stone-800", - { - variants: { - variant: { - default: "bg-stone-900 text-stone-50 hover:bg-stone-900/90 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/90", - destructive: - "bg-red-500 text-stone-50 hover:bg-red-500/90 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/90", - outline: - "border border-stone-200 bg-white hover:bg-stone-100 hover:text-stone-900 dark:border-stone-800 dark:bg-stone-950 dark:hover:bg-stone-800 dark:hover:text-stone-50", - secondary: - "bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80", - ghost: "hover:bg-stone-100 hover:text-stone-900 dark:hover:bg-stone-800 dark:hover:text-stone-50", - link: "text-stone-900 underline-offset-4 hover:underline dark:text-stone-50", - }, - size: { - default: "h-10 px-4 py-2", - sm: "h-9 rounded-md px-3", - lg: "h-11 rounded-md px-8", - icon: "h-10 w-10", - }, - }, - defaultVariants: { - variant: "default", - size: "default", - }, - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes
- Chinese Stable Diffusion is a text-to-image model that generates images from Chinese text. -
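As a rough illustration of what "generates images from Chinese text" means in practice, here is a minimal diffusers sketch; the checkpoint id is an assumption for illustration, not necessarily the model this Space uses:

import torch
from diffusers import StableDiffusionPipeline

# Assumed Chinese Stable Diffusion checkpoint; substitute the Space's actual model.
pipe = StableDiffusionPipeline.from_pretrained(
    "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("一只可爱的猫，水彩画").images[0]  # Chinese prompt in, PIL image out
image.save("cat.png")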
- # This implementation prints messages to {@link System#err} containing the - # values of {@code line}, {@code charPositionInLine}, and {@code msg} using - # the following format.
- # - #- # line line:charPositionInLine msg - #- # - def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e): - print("line " + str(line) + ":" + str(column) + " " + msg, file=sys.stderr) - -ConsoleErrorListener.INSTANCE = ConsoleErrorListener() - -class ProxyErrorListener(ErrorListener): - - def __init__(self, delegates): - super().__init__() - if delegates is None: - raise ReferenceError("delegates") - self.delegates = delegates - - def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e): - for delegate in self.delegates: - delegate.syntaxError(recognizer, offendingSymbol, line, column, msg, e) - - def reportAmbiguity(self, recognizer, dfa, startIndex, stopIndex, exact, ambigAlts, configs): - for delegate in self.delegates: - delegate.reportAmbiguity(recognizer, dfa, startIndex, stopIndex, exact, ambigAlts, configs) - - def reportAttemptingFullContext(self, recognizer, dfa, startIndex, stopIndex, conflictingAlts, configs): - for delegate in self.delegates: - delegate.reportAttemptingFullContext(recognizer, dfa, startIndex, stopIndex, conflictingAlts, configs) - - def reportContextSensitivity(self, recognizer, dfa, startIndex, stopIndex, prediction, configs): - for delegate in self.delegates: - delegate.reportContextSensitivity(recognizer, dfa, startIndex, stopIndex, prediction, configs) diff --git a/spaces/asafAdge/Detic/detic/modeling/backbone/swintransformer.py b/spaces/asafAdge/Detic/detic/modeling/backbone/swintransformer.py deleted file mode 100644 index 21cabb37dd87a443e27eeb805f9739bef86540bf..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/detic/modeling/backbone/swintransformer.py +++ /dev/null @@ -1,750 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Xingyi Zhou from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py - - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone.backbone import Backbone -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.modeling.backbone.fpn import FPN - -from centernet.modeling.backbone.fpn_p5 import LastLevelP6P7_P5 -from centernet.modeling.backbone.bifpn import BiFPN -# from .checkpoint import load_checkpoint - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. 
Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(Backbone): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. 
Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - self._out_features = ['swin{}'.format(i) for i in self.out_indices] - self._out_feature_channels = { - 'swin{}'.format(i): self.embed_dim * 2 ** i for i in self.out_indices - } - self._out_feature_strides = { - 'swin{}'.format(i): 2 ** (i + 2) for i in self.out_indices - } - self._size_devisibility = 32 - - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, 
self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - if isinstance(pretrained, str): - self.apply(_init_weights) - # load_checkpoint(self, pretrained, strict=False) - elif pretrained is None: - self.apply(_init_weights) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - # outs = [] - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - # outs.append(out) - outs['swin{}'.format(i)] = out - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - -size2config = { - 'T': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 6, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_tiny_patch4_window7_224.pth' - }, - 'S': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 18, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_small_patch4_window7_224.pth' - }, - 'B': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224.pth' - }, - 'B-22k': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224_22k.pth' - }, - 'B-22k-384': { - 'window_size': 12, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window12_384_22k.pth' - }, - 'L-22k': { - 'window_size': 7, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window7_224_22k.pth' - }, - 'L-22k-384': { - 'window_size': 12, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window12_384_22k.pth' - } -} - -@BACKBONE_REGISTRY.register() -def build_swintransformer_backbone(cfg, input_shape): - """ - """ - config = size2config[cfg.MODEL.SWIN.SIZE] - out_indices = cfg.MODEL.SWIN.OUT_FEATURES - model = SwinTransformer( - embed_dim=config['embed_dim'], - 
window_size=config['window_size'], - depths=config['depth'], - num_heads=config['num_heads'], - drop_path_rate=config['drop_path_rate'], - out_indices=out_indices, - frozen_stages=-1, - use_checkpoint=cfg.MODEL.SWIN.USE_CHECKPOINT - ) - # print('Initializing', config['pretrained']) - model.init_weights(config['pretrained']) - return model - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/src/gen.py b/spaces/atimughal662/InfoFusion/src/gen.py deleted file mode 100644 index 5919c5cbf8dbec02e05487b003c736398367aad6..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/gen.py +++ /dev/null @@ -1,4307 +0,0 @@ -import ast -import copy -import functools -import inspect -import queue -import sys -import os -import time -import traceback -import typing -import warnings -from datetime import datetime -import requests -from requests import ConnectTimeout, JSONDecodeError -from urllib3.exceptions import ConnectTimeoutError, MaxRetryError, ConnectionError -from requests.exceptions import ConnectionError as ConnectionError2 -from requests.exceptions import ReadTimeout as ReadTimeout2 - -if os.path.dirname(os.path.abspath(__file__)) not in sys.path: - sys.path.append(os.path.dirname(os.path.abspath(__file__))) - -os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1' -os.environ['BITSANDBYTES_NOWELCOME'] = '1' -warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated') - -# more is not useful typically, don't let these go beyond limits and eat up resources -max_cores = max(1, os.cpu_count() // 2) -if os.getenv('NUMEXPR_MAX_THREADS') is None: - os.environ['NUMEXPR_MAX_THREADS'] = str(min(8, max_cores)) -if os.getenv('NUMEXPR_NUM_THREADS') is None: - os.environ['NUMEXPR_NUM_THREADS'] = str(min(8, max_cores)) -if os.getenv('OMP_NUM_THREADS') is None: - os.environ['OMP_NUM_THREADS'] = str(min(8, max_cores)) -if os.getenv('OPENBLAS_NUM_THREADS') is None: - os.environ['OPENBLAS_NUM_THREADS'] = str(min(8, max_cores)) -if os.getenv('DUCKDB_NUM_THREADS') is None: - os.environ['DUCKDB_NUM_THREADS'] = str(min(4, max_cores)) -if os.getenv('RAYON_RS_NUM_CPUS') is None: - os.environ['RAYON_RS_NUM_CPUS'] = str(min(8, max_cores)) -if os.getenv('RAYON_NUM_THREADS') is None: - os.environ['RAYON_NUM_THREADS'] = str(min(8, max_cores)) - -import numpy as np -from evaluate_params import eval_func_param_names, no_default_param_names, input_args_list -from enums import DocumentSubset, LangChainMode, no_lora_str, model_token_mapping, 
no_model_str, \ - LangChainAction, LangChainAgent, DocumentChoice, LangChainTypes, super_source_prefix, \ - super_source_postfix, t5_type, get_langchain_prompts, gr_to_lg, invalid_key_msg, docs_joiner_default, \ - docs_ordering_types_default, docs_token_handling_default -from loaders import get_loaders -from utils import set_seed, clear_torch_cache, NullContext, wrapped_partial, EThread, get_githash, \ - import_matplotlib, get_device, makedirs, get_kwargs, start_faulthandler, get_hf_server, FakeTokenizer, \ - have_langchain, set_openai, cuda_vis_check, H2O_Fire, lg_to_gr, str_to_list, str_to_dict, get_token_count - -start_faulthandler() -import_matplotlib() - -SEED = 1236 -set_seed(SEED) - -from typing import Union - -import torch -from transformers import GenerationConfig, AutoModel, TextIteratorStreamer - -from prompter import Prompter, inv_prompt_type_to_model_lower, non_hf_types, PromptType, get_prompt, generate_prompt -from stopping import get_stopping - -langchain_actions = [x.value for x in list(LangChainAction)] - -langchain_agents_list = [x.value for x in list(LangChainAgent)] - - -def main( - load_8bit: bool = False, - load_4bit: bool = False, - low_bit_mode: int = 1, - load_half: bool = None, - load_gptq: str = '', - load_awq: str = '', - load_exllama: bool = False, - use_safetensors: bool = False, - revision: str = None, - use_gpu_id: bool = True, - base_model: str = '', - tokenizer_base_model: str = '', - lora_weights: str = "", - gpu_id: int = 0, - compile_model: bool = None, - use_cache: bool = None, - inference_server: str = "", - prompt_type: Union[int, str] = None, - prompt_dict: typing.Dict = None, - system_prompt: str = '', - - # llama and gpt4all settings - llamacpp_dict: typing.Dict = dict(n_gpu_layers=100, use_mlock=True, n_batch=1024, n_gqa=0), - model_path_llama: str = 'https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q6_K.gguf', - model_name_gptj: str = 'ggml-gpt4all-j-v1.3-groovy.bin', - model_name_gpt4all_llama: str = 'ggml-wizardLM-7B.q4_2.bin', - model_name_exllama_if_no_config: str = 'TheBloke/Nous-Hermes-Llama2-GPTQ', - exllama_dict: typing.Dict = dict(), - gptq_dict: typing.Dict = dict(), - attention_sinks: bool = False, - sink_dict: typing.Dict = dict(), - truncation_generation: bool = False, - hf_model_dict: typing.Dict = dict(), - - model_lock: typing.List[typing.Dict[str, str]] = None, - model_lock_columns: int = None, - fail_if_cannot_connect: bool = False, - - # input to generation - temperature: float = None, - top_p: float = None, - top_k: int = None, - penalty_alpha: float = None, - num_beams: int = None, - repetition_penalty: float = None, - num_return_sequences: int = None, - do_sample: bool = None, - max_new_tokens: int = None, - min_new_tokens: int = None, - early_stopping: Union[bool, str] = None, - max_time: float = None, - - memory_restriction_level: int = None, - debug: bool = False, - save_dir: str = None, - local_files_only: bool = False, - resume_download: bool = True, - use_auth_token: Union[str, bool] = False, - trust_remote_code: Union[str, bool] = True, - rope_scaling: dict = None, - max_seq_len: int = None, - offload_folder: str = "offline_folder", - - src_lang: str = "English", - tgt_lang: str = "Russian", - - prepare_offline_level: int = 0, - cli: bool = False, - cli_loop: bool = True, - gradio: bool = True, - gradio_offline_level: int = 0, - server_name: str = "0.0.0.0", - share: bool = False, - open_browser: bool = False, - root_path: str = "", - ssl_verify: bool = True, 
- ssl_keyfile: str | None = None, - ssl_certfile: str | None = None, - ssl_keyfile_password: str | None = None, - - chat: bool = True, - chat_conversation: typing.List[typing.Tuple[str, str]] = None, - text_context_list: typing.List[str] = None, - stream_output: bool = True, - async_output: bool = True, - num_async: int = 3, - show_examples: bool = None, - verbose: bool = False, - h2ocolors: bool = True, - dark: bool = False, # light tends to be best - height: int = 600, - render_markdown: bool = True, - show_lora: bool = True, - show_llama: bool = True, - show_gpt4all: bool = False, - login_mode_if_model0: bool = False, - block_gradio_exit: bool = True, - concurrency_count: int = 1, - api_open: bool = False, - allow_api: bool = True, - input_lines: int = 1, - gradio_size: str = None, - show_copy_button: bool = True, - large_file_count_mode: bool = False, - pre_load_embedding_model: bool = True, - - auth: Union[typing.List[typing.Tuple[str, str]], str] = None, - auth_filename: str = None, - auth_access: str = 'open', - auth_freeze: bool = False, - auth_message: str = None, - guest_name: str = "guest", - enforce_h2ogpt_api_key: bool = None, - enforce_h2ogpt_ui_key: bool = None, - h2ogpt_api_keys: Union[list, str] = [], - h2ogpt_key: str = None, - - max_max_time=None, - max_max_new_tokens=None, - - visible_models: list = None, - visible_visible_models: bool = True, - visible_submit_buttons: bool = True, - visible_side_bar: bool = True, - visible_doc_track: bool = True, - visible_chat_tab: bool = True, - visible_doc_selection_tab: bool = True, - visible_doc_view_tab: bool = True, - visible_chat_history_tab: bool = True, - visible_expert_tab: bool = True, - visible_models_tab: bool = True, - visible_system_tab: bool = True, - visible_tos_tab: bool = False, - visible_login_tab: bool = True, - visible_hosts_tab: bool = False, - chat_tables: bool = False, - visible_h2ogpt_header: bool = True, - max_raw_chunks: int = None, - - sanitize_user_prompt: bool = False, - sanitize_bot_response: bool = False, - - extra_model_options: typing.List[str] = [], - extra_lora_options: typing.List[str] = [], - extra_server_options: typing.List[str] = [], - - score_model: str = 'auto', - - eval_filename: str = None, - eval_prompts_only_num: int = 0, - eval_prompts_only_seed: int = 1234, - eval_as_output: bool = False, - - langchain_mode: str = None, - user_path: str = None, - langchain_modes: list = [LangChainMode.USER_DATA.value, LangChainMode.MY_DATA.value, LangChainMode.LLM.value, - LangChainMode.DISABLED.value], - langchain_mode_paths: dict = {LangChainMode.USER_DATA.value: None}, - langchain_mode_types: dict = {LangChainMode.USER_DATA.value: LangChainTypes.SHARED.value}, - detect_user_path_changes_every_query: bool = False, - - langchain_action: str = LangChainAction.QUERY.value, - langchain_agents: list = [], - force_langchain_evaluate: bool = False, - - visible_langchain_actions: list = [LangChainAction.QUERY.value, LangChainAction.SUMMARIZE_MAP.value, - LangChainAction.EXTRACT.value], - visible_langchain_agents: list = langchain_agents_list.copy(), - - document_subset: str = DocumentSubset.Relevant.name, - document_choice: list = [DocumentChoice.ALL.value], - - use_llm_if_no_docs: bool = True, - load_db_if_exists: bool = True, - keep_sources_in_context: bool = False, - db_type: str = 'chroma', - use_openai_embedding: bool = False, - use_openai_model: bool = False, - hf_embedding_model: str = None, - migrate_embedding_model: str = False, - auto_migrate_db: bool = False, - cut_distance: float = 1.64, - 
answer_with_sources: bool = True, - append_sources_to_answer: bool = True, - show_accordions: bool = True, - top_k_docs_max_show: int = 10, - show_link_in_sources: bool = True, - pre_prompt_query: str = None, - prompt_query: str = None, - pre_prompt_summary: str = None, - prompt_summary: str = None, - add_chat_history_to_context: bool = True, - add_search_to_context: bool = False, - context: str = '', - iinput: str = '', - allow_upload_to_user_data: bool = True, - reload_langchain_state: bool = True, - allow_upload_to_my_data: bool = True, - enable_url_upload: bool = True, - enable_text_upload: bool = True, - enable_sources_list: bool = True, - chunk: bool = True, - chunk_size: int = 512, - top_k_docs: int = None, - docs_ordering_type: str = docs_ordering_types_default, - min_max_new_tokens=256, - max_input_tokens=-1, - docs_token_handling: str = docs_token_handling_default, - docs_joiner: str = docs_joiner_default, - hyde_level: int = 0, - hyde_template: str = None, - - auto_reduce_chunks: bool = True, - max_chunks: int = 100, - headsize: int = 50, - n_jobs: int = -1, - - # urls - use_unstructured=True, - use_playwright=False, - use_selenium=False, - - # pdfs - use_pymupdf='auto', - use_unstructured_pdf='auto', - use_pypdf='auto', - enable_pdf_ocr='auto', - enable_pdf_doctr='auto', - try_pdf_as_html='auto', - - # images - enable_ocr=False, - enable_doctr=True, - enable_pix2struct=False, - enable_captions=True, - - pre_load_caption_model: bool = False, - caption_gpu: bool = True, - caption_gpu_id: Union[int, str] = 'auto', - captions_model: str = "Salesforce/blip-image-captioning-base", - doctr_gpu: bool = True, - doctr_gpu_id: Union[int, str] = 'auto', - - # json - jq_schema='.[]', - - max_quality: bool = False, - - enable_heap_analytics: bool = True, - heap_app_id: str = "1680123994", -): - """ - - :param load_8bit: load model in 8-bit using bitsandbytes - :param load_4bit: load model in 4-bit using bitsandbytes - :param low_bit_mode: 0: no quantization config 1: change compute 2: nf4 3: double quant 4: 2 and 3 - See: https://huggingface.co/docs/transformers/main_classes/quantization - If using older bitsandbytes or transformers, 0 is required - :param load_half: load model in float16 (None means auto, which means True unless t5 based model) - otherwise specify bool - :param load_gptq: to load model with GPTQ, put model_basename here, e.g. gptq_model-4bit--1g - :param load_awq: load model with AWQ, often 'model' for TheBloke models - :param load_exllama: whether to use exllama (only applicable to LLaMa1/2 models with 16-bit or GPTQ - :param use_safetensors: to use safetensors version (assumes file/HF points to safe tensors version) - :param revision: Which HF revision to use - :param use_gpu_id: whether to control devices with gpu_id. If False, then spread across GPUs - :param base_model: model HF-type name. If use --base_model to preload model, cannot unload in gradio in models tab - :param tokenizer_base_model: tokenizer HF-type name. Usually not required, inferred from base_model. - :param lora_weights: LORA weights path/HF link - :param gpu_id: if use_gpu_id, then use gpu_id for cuda device ID, or auto mode if gpu_id != -1 - :param compile_model Whether to compile the model - :param use_cache: Whether to use caching in model (some models fail when multiple threads use) - :param inference_server: Consume base_model as type of model at this address - Address can be text-generation-server hosting that base_model - e.g. 
python generate.py --inference_server="http://192.168.1.46:6112" --base_model=h2oai/h2ogpt-oasst1-512-12b - - Or Address can be "openai_chat" or "openai" for OpenAI API - Or Address can be "openai_azure_chat" or "openai_azure" for Azure OpenAI API - e.g. python generate.py --inference_server="openai_chat" --base_model=gpt-3.5-turbo - e.g. python generate.py --inference_server="openai" --base_model=text-davinci-003 - e.g. python generate.py --inference_server="openai_azure_chat:
'):
- prompt = prompt[:-4]
- prompt = prompt.replace(' Download File ––– https://urloso.com/2uyQeE Download File ✏ https://urloso.com/2uyS8r Ordering your free janam kundali analysis is an easy and simple task as long as you know your birth details. In order to attain your free horoscope, simply follow the given instructions and fill in the kundli software: Download File ••• https://urloso.com/2uyPIk Instant chart is a quick way to create janma kundali (birth chart) and prashna kundali(horary chart). By quick we mean that there is no registration required to accessthis feature. You can also use our new interface for instant charts . Vedic kundali software is based on accurate astrological calculations. This vedic jyotish software is the result of years of extensive research work. You can use this vedic astrology software easily with the little help of somebody even if you are not a computer savvy person. According to astrologers it is the easiest vedic astrology software in the world. One can get most accurate, excellent and comprehensive results from this vedic astrology software. Astro-Vision is easy matchmaking and kundli software, based on the Vedic astrology science. It provides accurate future predictions and is designed to analyze various factors in Kundli in Hindi and other languages to give a detailed and precise report. Kundali software MyKundali offers key astrological services like love match, horoscope match, numerology calculation, etc. for free. Superior quality software for matching kundli for the matrimonial alliance and getting a janam kundli done for a newborn. Kundli matching AstroSage kundli software is flawlessand quick. The kundali software just requires the birth details of both brideand groom such as date, time and place and it will do the rest. Using the VedicAstrology principles, horoscopes of the natives are analyzed and the resultcomes with a good explanation. This kundali software has a user-friendly interface wherethe user just needs to enter the birth details of the potential bride andgroom. This kundli matching software also provides an overallcompatibility score to indicate the compatibility percentage between thepotential partners. For a detailed report on kundali and marriagecompatibility, users can get the premium plan for a small fee. 
1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(d[p-2]),C++),i+="\n ".concat(d[p-1]),C++),a=p,o+="\n".concat(b,"-").concat(v," ").concat(d[p]),C++;else if(d.length 1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(u[p-2]),C++),i+="\n ".concat(u[p-1]),C++),a=p,i+="\n".concat(m,"+").concat(v," ").concat(u[p]),C++;else{var T=d[p],P=u[p],M=P!==T&&(!h(P,",")||P.slice(0,-1)!==T);M&&h(T,",")&&T.slice(0,-1)===P&&(M=!1,P+=","),M?(R>1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(u[p-2]),C++),i+="\n ".concat(u[p-1]),C++),a=p,i+="\n".concat(m,"+").concat(v," ").concat(P),o+="\n".concat(b,"-").concat(v," ").concat(T),C+=2):(i+=o,o="",(1===R||0===p)&&(i+="\n ".concat(P),C++))}if(C>20&&p<_-2)return"".concat(A).concat(N,"\n").concat(i,"\n").concat(g,"...").concat(v).concat(o,"\n")+"".concat(g,"...").concat(v)}return"".concat(A).concat(s?N:"","\n").concat(i).concat(o).concat(l).concat(x)}(c,d,o)));else if("notDeepStrictEqual"===o||"notStrictEqual"===o){var S=y[o],k=w(c).split("\n");if("notStrictEqual"===o&&"object"===f(c)&&null!==c&&(S=y.notStrictEqualObject),k.length>30)for(k[26]="".concat(g,"...").concat(v);k.length>27;)k.pop();t=1===k.length?i(this,u(l).call(this,"".concat(S," ").concat(k[0]))):i(this,u(l).call(this,"".concat(S,"\n\n").concat(k.join("\n"),"\n")))}else{var _=w(c),O="",C=y[o];"notDeepEqual"===o||"notEqual"===o?(_="".concat(y[o],"\n\n").concat(_)).length>1024&&(_="".concat(_.slice(0,1021),"...")):(O="".concat(w(d)),_.length>512&&(_="".concat(_.slice(0,509),"...")),O.length>512&&(O="".concat(O.slice(0,509),"...")),"deepEqual"===o||"equal"===o?_="".concat(C,"\n\n").concat(_,"\n\nshould equal\n\n"):O=" ".concat(o," ").concat(O)),t=i(this,u(l).call(this,"".concat(_).concat(O)))}return Error.stackTraceLimit=E,t.generatedMessage=!n,Object.defineProperty(a(t),"name",{value:"AssertionError [ERR_ASSERTION]",enumerable:!1,writable:!0,configurable:!0}),t.code="ERR_ASSERTION",t.actual=c,t.expected=d,t.operator=o,Error.captureStackTrace&&Error.captureStackTrace(a(t),s),t.stack,t.name="AssertionError",i(t)}return!function(e,t){if("function"!=typeof t&&null!==t)throw TypeError("Super expression must either be null or a function");e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,writable:!0,configurable:!0}}),t&&c(e,t)}(l,e),t=[{key:"toString",value:function(){return"".concat(this.name," [").concat(this.code,"]: ").concat(this.message)}},{key:d.custom,value:function(e,t){return d(this,function(e){for(var t=1;t
', chat_turn_sep)
- if not prompt.endswith(chat_turn_sep):
- prompt += chat_turn_sep
- # most recent first, add older if can
- # only include desired chat history
- if len(prompt + context1) > max_prompt_length:
- break
- context1 += prompt
-
- _, pre_response, terminate_response, chat_sep, chat_turn_sep = \
- generate_prompt({}, prompt_type, prompt_dict,
- chat, reduced=True,
- making_context=True,
- system_prompt=system_prompt,
- histi=-1)
- if context1 and not context1.endswith(chat_turn_sep):
- context1 += chat_turn_sep # ensure if terminates abruptly, then human continues on next line
- return context1
-
-
-def get_relaxed_max_new_tokens(prompt, tokenizer=None, max_new_tokens=None, max_new_tokens0=None):
- # check if can relax max_new_tokens for this specific prompt
- if max_new_tokens0 is not None and \
- hasattr(tokenizer, 'model_max_length') and \
- isinstance(tokenizer.model_max_length, (float, int)):
- max_new_tokens = int(tokenizer.model_max_length) - get_token_count(prompt, tokenizer)
- if max_new_tokens is not None:
- return min(max_new_tokens0, max_new_tokens)
- else:
- return max_new_tokens0
- return max_new_tokens
-
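
Taken on its own, the relaxation above just converts the tokenizer's context budget into a per-prompt output budget. A minimal self-contained sketch of the same idea, using a hypothetical whitespace token counter in place of a real tokenizer (`FakeTok` and `count_tokens` are illustrative stand-ins, not h2ogpt APIs):

```python
# Sketch of relaxing max_new_tokens to whatever context the prompt leaves free.
# FakeTok and count_tokens are illustrative stand-ins, not h2ogpt APIs.
class FakeTok:
    model_max_length = 2048

def count_tokens(text: str) -> int:
    return len(text.split())  # crude whitespace proxy for a real tokenizer

def relaxed_max_new_tokens(prompt: str, tok=FakeTok(), max_new_tokens0: int = 512) -> int:
    remaining = tok.model_max_length - count_tokens(prompt)
    return max(0, min(max_new_tokens0, remaining))

print(relaxed_max_new_tokens("hello world"))  # 512: a short prompt leaves ample room
```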
-
-def get_limited_prompt(instruction,
- iinput,
- tokenizer,
- estimated_instruction=None,
- prompter=None,
- inference_server=None,
- prompt_type=None, prompt_dict=None, chat=False, max_new_tokens=None,
- system_prompt='',
- context='', chat_conversation=None, text_context_list=None,
- keep_sources_in_context=False,
- model_max_length=None, memory_restriction_level=0,
- langchain_mode=None, add_chat_history_to_context=True,
- verbose=False,
- doc_importance=0.5,
- min_max_new_tokens=256,
- max_input_tokens=-1,
- truncation_generation=False,
- gradio_server=False,
- ):
- if gradio_server or not inference_server:
- # can listen to truncation_generation
- pass
- else:
- # these don't support allowing going beyond total context
- truncation_generation = True
-
- # for templates, use estimated for counting, but adjust instruction as output
- if estimated_instruction is None:
- estimated_instruction = instruction
-
- if max_input_tokens >= 0:
- # max_input_tokens is used at runtime (via client/UI) to control actual filling of context
- max_input_tokens = min(model_max_length - min_max_new_tokens, max_input_tokens)
- else:
- max_input_tokens = model_max_length - min_max_new_tokens
-
- if prompter:
- prompt_type = prompter.prompt_type
- prompt_dict = prompter.prompt_dict
- chat = prompter.chat
- stream_output = prompter.stream_output
- system_prompt = prompter.system_prompt
-
- generate_prompt_type = prompt_type
- external_handle_chat_conversation = False
- if inference_server and any(
- inference_server.startswith(x) for x in ['openai_chat', 'openai_azure_chat', 'vllm_chat']):
- # Chat APIs do not take prompting
- # Replicate does not need prompting if no chat history, but in general can take prompting
- # if using prompter, prompter.system_prompt will already be filled with automatic (e.g. from llama-2),
- # so if replicate final prompt with system prompt still correct because only access prompter.system_prompt that was already set
- # below already true for openai,
- # but not vllm by default as that can be any model and handled by FastChat API inside vLLM itself
- generate_prompt_type = 'plain'
- # Chat APIs don't handle chat history via single prompt, but in messages, assumed to be handled outside this function
- chat_conversation = []
- external_handle_chat_conversation = True
-
- # merge handles if chat_conversation is None
- history = []
- history = merge_chat_conversation_history(chat_conversation, history)
- history_to_context_func = functools.partial(history_to_context,
- langchain_mode=langchain_mode,
- add_chat_history_to_context=add_chat_history_to_context,
- prompt_type=generate_prompt_type,
- prompt_dict=prompt_dict,
- chat=chat,
- model_max_length=max_input_tokens,
- memory_restriction_level=memory_restriction_level,
- keep_sources_in_context=keep_sources_in_context,
- system_prompt=system_prompt,
- min_max_new_tokens=min_max_new_tokens)
- context2 = history_to_context_func(history)
- context1 = context
- if context1 is None:
- context1 = ''
-
- # get how many more tokens are in the templated instruction; somewhat of an estimate at a fine level
- num_instruction_tokens = get_token_count(instruction, tokenizer)
- num_estimated_instruction_tokens = get_token_count(estimated_instruction, tokenizer)
- delta_instruction = max(0, num_estimated_instruction_tokens - num_instruction_tokens)
-
- # get estimated templated instruction tokens for counting purposes
- from h2oai_pipeline import H2OTextGenerationPipeline
- estimated_instruction, num_estimated_instruction_tokens = H2OTextGenerationPipeline.limit_prompt(
- estimated_instruction, tokenizer,
- max_prompt_length=max_input_tokens)
- data_point_just_instruction = dict(context='', instruction=estimated_instruction, input='')
- prompt_just_estimated_instruction = prompter.generate_prompt(data_point_just_instruction)
- num_instruction_tokens = get_token_count(prompt_just_estimated_instruction, tokenizer)
-
- # get actual instruction, limited by template limitation
- instruction, _ = H2OTextGenerationPipeline.limit_prompt(instruction, tokenizer,
- max_prompt_length=max_input_tokens - delta_instruction)
-
- context1, num_context1_tokens = H2OTextGenerationPipeline.limit_prompt(context1, tokenizer,
- max_prompt_length=max_input_tokens)
- context2, num_context2_tokens = H2OTextGenerationPipeline.limit_prompt(context2, tokenizer,
- max_prompt_length=max_input_tokens)
- iinput, num_iinput_tokens = H2OTextGenerationPipeline.limit_prompt(iinput, tokenizer,
- max_prompt_length=max_input_tokens)
- if text_context_list is None:
- text_context_list = []
- num_doc_tokens = sum([get_token_count(x + docs_joiner_default, tokenizer) for x in text_context_list])
-
- num_prompt_tokens0 = (num_instruction_tokens or 0) + \
- (num_context1_tokens or 0) + \
- (num_context2_tokens or 0) + \
- (num_iinput_tokens or 0) + \
- (num_doc_tokens or 0)
-
- # go down to no less than 256, about 1 paragraph
- # cap by max_new_tokens before using num_prompt_tokens0, else the result would be negative or ~0
- min_max_new_tokens = min(min_max_new_tokens, max_new_tokens)
- # by default assume can handle all chat and docs
- chat_index = 0
-
- # allowed residual is either half of what is allowed (if docs exceed half), or the rest of what the docs didn't consume
- num_non_doc_tokens = num_prompt_tokens0 - num_doc_tokens
- # allocate to docs first, then non-docs; shouldn't matter much either way
- doc_max_length = max(max_input_tokens - num_non_doc_tokens, int(doc_importance * max_input_tokens))
- top_k_docs, one_doc_size, num_doc_tokens = get_docs_tokens(tokenizer, text_context_list=text_context_list,
- max_input_tokens=doc_max_length)
- non_doc_max_length = max(max_input_tokens - num_doc_tokens, int((1.0 - doc_importance) * max_input_tokens))
-
- if num_non_doc_tokens > non_doc_max_length:
- # need to limit in some way, keep portion of history but all of context and instruction
- # 1) drop iinput (unusual to include anyways)
- # 2) reduce history
- # 3) reduce context1
- # 4) limit instruction so will fit
- diff1 = non_doc_max_length - (
- num_instruction_tokens + num_context1_tokens + num_context2_tokens)
- diff2 = non_doc_max_length - (num_instruction_tokens + num_context1_tokens)
- diff3 = non_doc_max_length - num_instruction_tokens
- diff4 = non_doc_max_length
- if diff1 > 0:
- # then should be able to do #1
- iinput = ''
- num_iinput_tokens = 0
- elif diff2 > 0 > diff1:
- # then may be able to do #1 + #2
- iinput = ''
- num_iinput_tokens = 0
- chat_index_final = len(history)
- for chat_index in range(len(history)):
- # NOTE: history and chat_conversation are older for first entries
- # FIXME: This is slow for many short conversations
- context2 = history_to_context_func(history[chat_index:])
- num_context2_tokens = get_token_count(context2, tokenizer)
- diff1 = non_doc_max_length - (
- num_instruction_tokens + num_context1_tokens + num_context2_tokens)
- if diff1 > 0:
- chat_index_final = chat_index
- if verbose:
- print("chat_conversation used %d out of %d" % (chat_index, len(history)), flush=True)
- break
- chat_index = chat_index_final # i.e. if chat_index == len(history), then nothing can be consumed
- elif diff3 > 0 > diff2:
- # then may be able to do #1 + #2 + #3
- iinput = ''
- num_iinput_tokens = 0
- context2 = ''
- num_context2_tokens = 0
- context1, num_context1_tokens = H2OTextGenerationPipeline.limit_prompt(context1, tokenizer,
- max_prompt_length=diff3)
- if num_context1_tokens <= diff3:
- pass
- else:
- print("failed to reduce", flush=True)
- else:
- # then must be able to do #1 + #2 + #3 + #4
- iinput = ''
- num_iinput_tokens = 0
- context2 = ''
- num_context2_tokens = 0
- context1 = ''
- num_context1_tokens = 0
- # diff4 accounts for real prompting for instruction
- # FIXME: history_to_context could include instruction, in case system prompt long, we overcount and could have more free tokens
-
- max_prompt_length = max(0, diff4 - delta_instruction)
- instruction, _ = H2OTextGenerationPipeline.limit_prompt(instruction, tokenizer,
- max_prompt_length=max_prompt_length)
- # get actual instruction tokens
- data_point_just_instruction = dict(context='', instruction=instruction, input='')
- prompt_just_instruction = prompter.generate_prompt(data_point_just_instruction)
- num_instruction_tokens = get_token_count(prompt_just_instruction, tokenizer) + delta_instruction
-
- # update full context
- context = context1 + context2
- # update token counts (docs + non-docs, all tokens)
- num_prompt_tokens = (num_instruction_tokens or 0) + \
- (num_context1_tokens or 0) + \
- (num_context2_tokens or 0) + \
- (num_iinput_tokens or 0) + \
- (num_doc_tokens or 0)
-
- # update max_new_tokens
- # limit so max_new_tokens = prompt + new < max
- # otherwise model can fail etc. e.g. for distilgpt2 asking for 1024 tokens is enough to fail if prompt=1 token
- if truncation_generation:
- max_new_tokens = min(max_new_tokens, model_max_length - num_prompt_tokens)
-
- if os.getenv('HARD_ASSERTS'):
- if max_new_tokens < min_max_new_tokens:
- raise ValueError("Invalid max_new_tokens=%s" % max_new_tokens)
-
- if prompter is None:
- # get prompter
- debug = False
- stream_output = False # doesn't matter
- prompter = Prompter(prompt_type, prompt_dict, debug=debug, chat=chat, stream_output=stream_output,
- system_prompt=system_prompt)
- if prompt_type != generate_prompt_type:
- # override just this attribute, keep system_prompt etc. from original prompt_type
- prompter.prompt_type = generate_prompt_type
-
- data_point = dict(context=context, instruction=instruction, input=iinput)
- # handle promptA/promptB addition if really from history.
- # if not from history, then reduced=False inside correct
- # if mixed, then no specific correct thing to do, so treat like history and promptA/B will come first still
- context_from_history = len(history) > 0 and len(context1) > 0
- prompt = prompter.generate_prompt(data_point, context_from_history=context_from_history)
- num_prompt_tokens_actual = get_token_count(prompt, tokenizer)
-
- return prompt, \
- instruction, iinput, context, \
- num_prompt_tokens, max_new_tokens, num_prompt_tokens0, num_prompt_tokens_actual, \
- chat_index, external_handle_chat_conversation, \
- top_k_docs, one_doc_size, truncation_generation
-
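
The interplay of `doc_max_length` and `non_doc_max_length` above is easier to see with concrete numbers; the values below are invented purely for illustration:

```python
# Worked example of the doc vs. non-doc budget split used above.
max_input_tokens = 2048
num_non_doc_tokens = 600       # instruction + contexts + iinput (illustrative)
doc_importance = 0.5

# docs get whatever is left, but never less than doc_importance * budget
doc_max_length = max(max_input_tokens - num_non_doc_tokens,
                     int(doc_importance * max_input_tokens))
print(doc_max_length)  # 1448: docs may use the full remainder

num_doc_tokens = 1448          # suppose docs consume their whole allowance
non_doc_max_length = max(max_input_tokens - num_doc_tokens,
                         int((1.0 - doc_importance) * max_input_tokens))
print(non_doc_max_length)  # 1024: non-docs are still guaranteed half the budget
```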
-
-def get_docs_tokens(tokenizer, text_context_list=[], max_input_tokens=None):
- if text_context_list is None or len(text_context_list) == 0:
- return 0, None, 0
- if max_input_tokens is None:
- max_input_tokens = tokenizer.model_max_length
- tokens = [get_token_count(x + docs_joiner_default, tokenizer) for x in text_context_list]
- tokens_cumsum = np.cumsum(tokens)
- where_res = np.where(tokens_cumsum < max_input_tokens)[0]
- # if below condition fails, then keep top_k_docs=-1 and trigger special handling next
- if where_res.shape[0] > 0:
- top_k_docs = 1 + where_res[-1]
- one_doc_size = None
- num_doc_tokens = tokens_cumsum[top_k_docs - 1] # by index
- else:
- # if here, means 0 and just do best with 1 doc
- top_k_docs = 1
- text_context_list = text_context_list[:top_k_docs]
- # critical protection
- from src.h2oai_pipeline import H2OTextGenerationPipeline
- doc_content = text_context_list[0]
- doc_content, new_tokens0 = H2OTextGenerationPipeline.limit_prompt(doc_content,
- tokenizer,
- max_prompt_length=max_input_tokens)
- text_context_list[0] = doc_content
- one_doc_size = len(doc_content)
- num_doc_tokens = get_token_count(doc_content + docs_joiner_default, tokenizer)
- print("Unexpected large chunks and can't add to context, will add 1 anyways. Tokens %s -> %s" % (
- tokens[0], new_tokens0), flush=True)
- return top_k_docs, one_doc_size, num_doc_tokens
-
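
The happy path of `get_docs_tokens` is just a cumulative sum checked against a budget. A toy version, substituting whitespace splitting for `get_token_count` and ignoring the joiner overhead (everything here is illustrative):

```python
import numpy as np

# Toy version of the selection in get_docs_tokens; whitespace splitting
# stands in for get_token_count and the joiner overhead is ignored.
docs = ["alpha beta gamma", "one two three four five", "word " * 50]
tokens = [len(d.split()) for d in docs]
tokens_cumsum = np.cumsum(tokens)            # array([ 3,  8, 58])
max_input_tokens = 10
where_res = np.where(tokens_cumsum < max_input_tokens)[0]
top_k_docs = 1 + where_res[-1] if where_res.shape[0] > 0 else 1
print(top_k_docs)  # 2: only the first two docs fit the 10-token budget
```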
-
-def entrypoint_main():
- """
- Examples:
-
- WORLD_SIZE=4 CUDA_VISIBLE_DEVICES="0,1,2,3" torchrun --nproc_per_node=4 --master_port=1234 generate.py --base_model='EleutherAI/gpt-j-6B' --lora_weights=lora-alpaca_6B
- python generate.py --base_model='EleutherAI/gpt-j-6B' --lora_weights='lora-alpaca_6B'
- python generate.py --base_model='EleutherAI/gpt-neox-20b' --lora_weights='lora-alpaca_20B'
-
- # generate without lora weights, no prompt
- python generate.py --base_model='EleutherAI/gpt-neox-20b' --prompt_type='plain'
- python generate.py --base_model='togethercomputer/GPT-NeoXT-Chat-Base-20B' --prompt_type='dai_faq'
-
- python generate.py --base_model='togethercomputer/GPT-NeoXT-Chat-Base-20B' --prompt_type='dai_faq' --lora_weights='lora_20B_daifaq'
- # OpenChatKit settings:
- python generate.py --base_model='togethercomputer/GPT-NeoXT-Chat-Base-20B' --prompt_type='human_bot' --debug=True --num_beams=1 --temperature=0.6 --top_k=40 --top_p=1.0
-
- python generate.py --base_model='distilgpt2' --prompt_type='plain' --debug=True --num_beams=1 --temperature=0.6 --top_k=40 --top_p=1.0 --share=False
- python generate.py --base_model='t5-large' --prompt_type='simple_instruct'
- python generate.py --base_model='philschmid/bart-large-cnn-samsum'
- python generate.py --base_model='philschmid/flan-t5-base-samsum'
- python generate.py --base_model='facebook/mbart-large-50-many-to-many-mmt'
-
- python generate.py --base_model='togethercomputer/GPT-NeoXT-Chat-Base-20B' --prompt_type='human_bot' --lora_weights='GPT-NeoXT-Chat-Base-20B.merged.json.8_epochs.57b2892c53df5b8cefac45f84d019cace803ef26.28'
-
- must have 4*48GB GPU and run without 8bit in order for sharding to work with use_gpu_id=False
- can also pass --prompt_type='human_bot' and model can somewhat handle instructions without being instruct tuned
- python generate.py --base_model=decapoda-research/llama-65b-hf --load_8bit=False --use_gpu_id=False --prompt_type='human_bot'
-
- python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6_9b
- """
- H2O_Fire(main)
-
-
-if __name__ == "__main__":
- entrypoint_main()
diff --git a/spaces/awacke1/Emoji.Enumerator.Menu/README.md b/spaces/awacke1/Emoji.Enumerator.Menu/README.md
deleted file mode 100644
index 7b03a7201862b9330ef80f506f2e2558650ff8bc..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Emoji.Enumerator.Menu/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Emoji.Enumerator.Menu
-emoji: 💻
-colorFrom: green
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/GradioAutoCSVLoaderToPlotly/app.py b/spaces/awacke1/GradioAutoCSVLoaderToPlotly/app.py
deleted file mode 100644
index 698e8054275c3bf39e0e0e8951d9772853b096ab..0000000000000000000000000000000000000000
--- a/spaces/awacke1/GradioAutoCSVLoaderToPlotly/app.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import gradio as gr
-import pandas as pd
-import plotly.express as px
-import os
-
-# Append one record to the CSV file, writing the header only when the file is first created
-def write_csv(record_count, topic, intervention):
-    df = pd.DataFrame({
-        "RecordCount": [record_count],
-        "Topic": [topic],
-        "Intervention": [intervention]
-    })
-    write_header = not os.path.exists("testfile.csv")
-    df.to_csv("testfile.csv", index=False, mode='a', header=write_header)
-    # return one value per declared output: the updated plot and a status message
-    return plot_data(), "Data written to testfile.csv"
-
-# Plot all data accumulated in the CSV file
-def plot_data():
-    df = pd.read_csv("testfile.csv")
-    fig = px.scatter(df, x="RecordCount", y="Intervention", color="Topic")
-    return fig
-
-# Define the inputs for the Gradio interface
-inputs = [
-    gr.inputs.Slider(label="Record Count", minimum=0, maximum=100, default=50),
-    gr.inputs.Textbox(label="Topic"),
-    gr.inputs.Textbox(label="Intervention")
-]
-
-# Define the outputs for the Gradio interface
-outputs = [
-    gr.Plot(label="Record Plot"),
-    gr.Textbox(label="Status")
-]
-
-# Create the Gradio interface
-interface = gr.Interface(write_csv, inputs, outputs, title="Record Plotter")
-
-# Launch the Gradio interface
-interface.launch()
diff --git a/spaces/awacke1/GradioBlocksChangeEvent/app.py b/spaces/awacke1/GradioBlocksChangeEvent/app.py
deleted file mode 100644
index 4d30548df6c7b2ce9557818f4fea466113e5bd7b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/GradioBlocksChangeEvent/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import gradio as gr
-
-#def update( text, choice):
-# if (choice=="gpt2large"):
-# return f1(text)
- # return f"Welcome to Gradio, {name}!"
-
-generator1 = gr.Interface.load("huggingface/gpt2-large")
-generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
-generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-
-
-demo = gr.Blocks()
-
-def f1(x):
- return generator1(x)
-def f2(x):
- return generator2(x)
-def f3(x):
- return generator3(x)
-
-with demo:
- gr.Markdown(
- """
- # Hello World!
- Start typing below to see the output.
- """)
- inp = gr.Textbox(placeholder="Enter a statement to complete")
-
- out1 = gr.Textbox()
- out2 = gr.Textbox()
- out3 = gr.Textbox()
-
- inp.change(fn=f1,
- inputs=inp,
- outputs=out1)
- out1.change(fn=f2,
- inputs=inp,
- outputs=out2)
- out2.change(fn=f3,
- inputs=inp,
- outputs=out3)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/banana-projects/datasets-card-creator/src/Instructions.js b/spaces/banana-projects/datasets-card-creator/src/Instructions.js
deleted file mode 100644
index 4a06ad667ef7aacd2ec19d4af2a57fb34f57ee15..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/datasets-card-creator/src/Instructions.js
+++ /dev/null
@@ -1,317 +0,0 @@
-const NAME = 'Instructions'
-
-export default {
- name: NAME,
- instructions: {
- yamlTags: {
- paragraph: [
- "Add YAML tags"
- ],
- example: [
- "---",
- `annotations_creators:`,
- `- no-annotation`,
- `language_creators:`,
- `- found`,
- `languages:`,
- `- en`,
- `licenses:`,
- `- unknown`,
- `multilinguality:`,
- `- monolingual`,
- `size_categories:`,
- `- 100K<n<1M`,
-
-Chrysler international PAIS DVD 10 2008 [Multilang] [ISO] .rar
-
-Power CD+G Burner 1.4.6 .rar · Chrysler international PAIS DVD 10 2008 [Multilang] [ISO] download pc · Google Operating System ( Android Live no installation ... 1fdad05405
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Epic Mickey 2 The Power Of Two (SERE4Q) NTSC WII WBFS.md b/spaces/bioriAsaeru/text-to-voice/Epic Mickey 2 The Power Of Two (SERE4Q) NTSC WII WBFS.md
deleted file mode 100644
index 87549ef30090e2dafc0a345ac374db7ab95bb84f..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Epic Mickey 2 The Power Of Two (SERE4Q) NTSC WII WBFS.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Epic Mickey 2: The Power Of Two (SERE4Q) NTSC WII WBFS
-
-Disney Epic Mickey 2: The Power of Two brings back Mickey Mouse and Oswald the ... Works on: Wii, Wii U and Dolphin Emulator ... Format: WBFS. 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Kp Prashna Kundali Software In Marathi Get Your Free KP Horoscope with Ruling Planets and Significators.md b/spaces/bioriAsaeru/text-to-voice/Kp Prashna Kundali Software In Marathi Get Your Free KP Horoscope with Ruling Planets and Significators.md
deleted file mode 100644
index 418ecbe8b904f2ab335a8c72a864d895b0e85e44..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kp Prashna Kundali Software In Marathi Get Your Free KP Horoscope with Ruling Planets and Significators.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-Kp Prashna Kundali Software In Marathi
-
-The report provided by this kundali software includes multiple observations based on the birthday and other astrological elements that might influence the Grahas of both the bride and groom.
-Excellent features make this kundali software the first choice of Astrologers who love to move on with time. Horoscope matching helps highlight incompatible factors between a couple, so that they can take remedial steps to stay stronger.
-ClickAstro generates the Janam Kundli and also studies them. This popular kundali software then creates an analysis report with the help of which the user can understand the attributes which will influence the married life of the couple.
-
-
\ No newline at end of file
diff --git a/spaces/brainblow/AI-TV/README.md b/spaces/brainblow/AI-TV/README.md
deleted file mode 100644
index be4c302a555988c8ce48f217cba4e49bfff422f1..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AI-TV/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: BrainBlow AI-TV
-emoji: 🤯🤖📺
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
-app_port: 7860
-duplicated_from: TNR-5/AI-WebTV
----
-
-A generative AI WebTV, powered by Zeroscope and Hugging Face.
-
-This is just the frontend part, you will need the media-server (also open source) to make it work.
-
-Warning: this is an experimental, proof-of-concept project made in a few days.
-
-It is not ready for production use by other people! Also, it uses models that should only be used for research purposes (no commercial usage).
-
-Note: because the stream uses FLV, it doesn't work on iPhone. There is however a [Twitch mirror here](https://www.twitch.tv/ai_webtv).
-
-The main code of the WebTV is located inside the [media-server](https://huggingface.co/spaces/jbilcke-hf/media-server/tree/main):
-
-manual steps:
-- human input to write a short paragraph describing a multi-shot video sequence
-- manually submit it to GPT-4 to generate a list of video captions for each shot (the system instructions are extracts from a Stable Diffusion guide)
-- commit the captions to the [playlist database](https://huggingface.co/spaces/jbilcke-hf/media-server/raw/main/database.json)
-
-Inside the `media-server` space (generation process running in the background):
-- for each prompt in the database
-- generate a silent 3-second video clip with Zeroscope V2 576w (hosted on Hugging Face Spaces)
-- upscale the clip with Zeroscope V2 XL (also a HF Space)
-- perform frame interpolation with FILM (also a HF Space)
-- storage in the Persistent Storage of the media-server Space
-
-Inside the `media-server` space (streaming process running in the foreground):
-- for each video file in the persistent storage folder
-- add it to a new FFmpeg playlist (it's just a .txt file)
-- broadcast it over the RTMP protocol using FFmpeg (in FLV format), as sketched below
-- distribute the stream using node-media-server
-
-Inside the `AI-WebTV` space:
-- display the stream using `mpegts.js`
-- this doesn't work on iPhone, but now there is also a Twitch mirror
\ No newline at end of file
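
For the streaming process described above, a rough sketch of the playlist-plus-RTMP step might look like the following; the clip folder and the RTMP endpoint are assumptions for illustration, not the media-server's actual configuration:

```python
import subprocess
from pathlib import Path

# Rough sketch of the streaming step: gather generated clips into an FFmpeg
# concat playlist and broadcast them over RTMP as FLV. The folder and the
# RTMP endpoint are illustrative assumptions, not the media-server's config.
clips = sorted(Path("persistent_storage").glob("*.mp4"))
playlist = Path("playlist.txt")
playlist.write_text("".join(f"file '{c}'\n" for c in clips))

subprocess.run([
    "ffmpeg", "-re",               # read input at its native frame rate
    "-f", "concat", "-safe", "0",  # treat playlist.txt as a concat list
    "-i", str(playlist),
    "-c:v", "libx264", "-preset", "veryfast",
    "-f", "flv",                   # FLV container, as noted above
    "rtmp://localhost/live/webtv", # assumed node-media-server endpoint
], check=True)
```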
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/sound_dataset.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/sound_dataset.py
deleted file mode 100644
index 8b88cbe8016b4bd28c2de749177c9af29f7755fc..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/sound_dataset.py
+++ /dev/null
@@ -1,330 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Dataset of audio with a simple description.
-"""
-
-from dataclasses import dataclass, fields, replace
-import json
-from pathlib import Path
-import random
-import typing as tp
-
-import numpy as np
-import torch
-
-from .info_audio_dataset import (
- InfoAudioDataset,
- get_keyword_or_keyword_list
-)
-from ..modules.conditioners import (
- ConditioningAttributes,
- SegmentWithAttributes,
- WavCondition,
-)
-
-
-EPS = torch.finfo(torch.float32).eps
-TARGET_LEVEL_LOWER = -35
-TARGET_LEVEL_UPPER = -15
-
-
-@dataclass
-class SoundInfo(SegmentWithAttributes):
- """Segment info augmented with Sound metadata.
- """
- description: tp.Optional[str] = None
- self_wav: tp.Optional[torch.Tensor] = None
-
- @property
- def has_sound_meta(self) -> bool:
- return self.description is not None
-
- def to_condition_attributes(self) -> ConditioningAttributes:
- out = ConditioningAttributes()
-
- for _field in fields(self):
- key, value = _field.name, getattr(self, _field.name)
- if key == 'self_wav':
- out.wav[key] = value
- else:
- out.text[key] = value
- return out
-
- @staticmethod
- def attribute_getter(attribute):
- if attribute == 'description':
- preprocess_func = get_keyword_or_keyword_list
- else:
- preprocess_func = None
- return preprocess_func
-
- @classmethod
- def from_dict(cls, dictionary: dict, fields_required: bool = False):
- _dictionary: tp.Dict[str, tp.Any] = {}
-
- # allow a subset of attributes to not be loaded from the dictionary
- # these attributes may be populated later
- post_init_attributes = ['self_wav']
-
- for _field in fields(cls):
- if _field.name in post_init_attributes:
- continue
- elif _field.name not in dictionary:
- if fields_required:
- raise KeyError(f"Unexpected missing key: {_field.name}")
- else:
- preprocess_func: tp.Optional[tp.Callable] = cls.attribute_getter(_field.name)
- value = dictionary[_field.name]
- if preprocess_func:
- value = preprocess_func(value)
- _dictionary[_field.name] = value
- return cls(**_dictionary)
-
-
-class SoundDataset(InfoAudioDataset):
- """Sound audio dataset: Audio dataset with environmental sound-specific metadata.
-
- Args:
- info_fields_required (bool): Whether all the mandatory metadata fields should be in the loaded metadata.
- external_metadata_source (tp.Optional[str]): Folder containing JSON metadata for the corresponding dataset.
- The metadata files contained in this folder are expected to match the stem of the audio file with
- a json extension.
- aug_p (float): Probability of performing audio mixing augmentation on the batch.
- mix_p (float): Proportion of batch items that are mixed together when applying audio mixing augmentation.
- mix_snr_low (int): Lowerbound for SNR value sampled for mixing augmentation.
- mix_snr_high (int): Upperbound for SNR value sampled for mixing augmentation.
- mix_min_overlap (float): Minimum overlap between audio files when performing mixing augmentation.
- kwargs: Additional arguments for AudioDataset.
-
- See `audiocraft.data.info_audio_dataset.InfoAudioDataset` for full initialization arguments.
- """
- def __init__(
- self,
- *args,
- info_fields_required: bool = True,
- external_metadata_source: tp.Optional[str] = None,
- aug_p: float = 0.,
- mix_p: float = 0.,
- mix_snr_low: int = -5,
- mix_snr_high: int = 5,
- mix_min_overlap: float = 0.5,
- **kwargs
- ):
- kwargs['return_info'] = True # We require the info for each song of the dataset.
- super().__init__(*args, **kwargs)
- self.info_fields_required = info_fields_required
- self.external_metadata_source = external_metadata_source
- self.aug_p = aug_p
- self.mix_p = mix_p
- if self.aug_p > 0:
- assert self.mix_p > 0, "Expecting some mixing proportion mix_p if aug_p > 0"
- assert self.channels == 1, "SoundDataset with audio mixing considers only monophonic audio"
- self.mix_snr_low = mix_snr_low
- self.mix_snr_high = mix_snr_high
- self.mix_min_overlap = mix_min_overlap
-
- def _get_info_path(self, path: tp.Union[str, Path]) -> Path:
- """Get path of JSON with metadata (description, etc.).
- If there exists a JSON with the same name as 'path.name', then it will be used.
- Else, such JSON will be searched for in an external json source folder if it exists.
- """
- info_path = Path(path).with_suffix('.json')
- if Path(info_path).exists():
- return info_path
- elif self.external_metadata_source and (Path(self.external_metadata_source) / info_path.name).exists():
- return Path(self.external_metadata_source) / info_path.name
- else:
- raise Exception(f"Unable to find a metadata JSON for path: {path}")
-
- def __getitem__(self, index):
- wav, info = super().__getitem__(index)
- info_data = info.to_dict()
- info_path = self._get_info_path(info.meta.path)
- if Path(info_path).exists():
- with open(info_path, 'r') as json_file:
- sound_data = json.load(json_file)
- sound_data.update(info_data)
- sound_info = SoundInfo.from_dict(sound_data, fields_required=self.info_fields_required)
- # if there are multiple descriptions, sample one randomly
- if isinstance(sound_info.description, list):
- sound_info.description = random.choice(sound_info.description)
- else:
- sound_info = SoundInfo.from_dict(info_data, fields_required=False)
-
- sound_info.self_wav = WavCondition(
- wav=wav[None], length=torch.tensor([info.n_frames]),
- sample_rate=[sound_info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time])
-
- return wav, sound_info
-
- def collater(self, samples):
- # when training, audio mixing is performed in the collate function
- wav, sound_info = super().collater(samples) # SoundDataset always returns infos
- if self.aug_p > 0:
- wav, sound_info = mix_samples(wav, sound_info, self.aug_p, self.mix_p,
- snr_low=self.mix_snr_low, snr_high=self.mix_snr_high,
- min_overlap=self.mix_min_overlap)
- return wav, sound_info
-
-
-def rms_f(x: torch.Tensor) -> torch.Tensor:
- return (x ** 2).mean(1).pow(0.5)
-
-
-def normalize(audio: torch.Tensor, target_level: int = -25) -> torch.Tensor:
- """Normalize the signal to the target level."""
- rms = rms_f(audio)
- scalar = 10 ** (target_level / 20) / (rms + EPS)
- audio = audio * scalar.unsqueeze(1)
- return audio
-
-
-def is_clipped(audio: torch.Tensor, clipping_threshold: float = 0.99) -> torch.Tensor:
- return (abs(audio) > clipping_threshold).any(1)
-
-
-def mix_pair(src: torch.Tensor, dst: torch.Tensor, min_overlap: float) -> torch.Tensor:
- start = random.randint(0, int(src.shape[1] * (1 - min_overlap)))
- remainder = src.shape[1] - start
- if dst.shape[1] > remainder:
- src[:, start:] = src[:, start:] + dst[:, :remainder]
- else:
- src[:, start:start+dst.shape[1]] = src[:, start:start+dst.shape[1]] + dst
- return src
-
-
-def snr_mixer(clean: torch.Tensor, noise: torch.Tensor, snr: int, min_overlap: float,
- target_level: int = -25, clipping_threshold: float = 0.99) -> torch.Tensor:
- """Function to mix clean speech and noise at various SNR levels.
-
- Args:
- clean (torch.Tensor): Clean audio source to mix, of shape [B, T].
- noise (torch.Tensor): Noise audio source to mix, of shape [B, T].
- snr (int): SNR level when mixing.
- min_overlap (float): Minimum overlap between the two mixed sources.
- target_level (int): Gain level in dB.
- clipping_threshold (float): Threshold for clipping the audio.
- Returns:
- torch.Tensor: The mixed audio, of shape [B, T].
- """
- if clean.shape[1] > noise.shape[1]:
- noise = torch.nn.functional.pad(noise, (0, clean.shape[1] - noise.shape[1]))
- else:
- noise = noise[:, :clean.shape[1]]
-
- # normalizing to -25 dB FS
- clean = clean / (clean.max(1)[0].abs().unsqueeze(1) + EPS)
- clean = normalize(clean, target_level)
- rmsclean = rms_f(clean)
-
- noise = noise / (noise.max(1)[0].abs().unsqueeze(1) + EPS)
- noise = normalize(noise, target_level)
- rmsnoise = rms_f(noise)
-
- # set the noise level for a given SNR
- noisescalar = (rmsclean / (10 ** (snr / 20)) / (rmsnoise + EPS)).unsqueeze(1)
- noisenewlevel = noise * noisescalar
-
- # mix noise and clean speech
- noisyspeech = mix_pair(clean, noisenewlevel, min_overlap)
-
- # randomly select RMS value between -15 dBFS and -35 dBFS and normalize noisyspeech with that value
- # there is a small chance of clipping, but it is rare enough not to be a major issue.
- noisy_rms_level = np.random.randint(TARGET_LEVEL_LOWER, TARGET_LEVEL_UPPER)
- rmsnoisy = rms_f(noisyspeech)
- scalarnoisy = (10 ** (noisy_rms_level / 20) / (rmsnoisy + EPS)).unsqueeze(1)
- noisyspeech = noisyspeech * scalarnoisy
- clean = clean * scalarnoisy
- noisenewlevel = noisenewlevel * scalarnoisy
-
- # final check to see if there are any amplitudes exceeding +/- 1. If so, normalize all the signals accordingly
- clipped = is_clipped(noisyspeech)
- if clipped.any():
- noisyspeech_maxamplevel = noisyspeech[clipped].max(1)[0].abs().unsqueeze(1) / (clipping_threshold - EPS)
- noisyspeech[clipped] = noisyspeech[clipped] / noisyspeech_maxamplevel
-
- return noisyspeech
-
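
The noise scaling in `snr_mixer` follows from the dB definition of SNR: scaling the noise by `rmsclean / 10**(snr/20) / rmsnoise` leaves the clean signal exactly `snr` dB above it. A quick numeric check with invented RMS values:

```python
import math

# Numeric check of the scaling above, with invented RMS values and a 6 dB target.
rmsclean, rmsnoise, snr = 0.08, 0.2, 6
noisescalar = rmsclean / (10 ** (snr / 20)) / rmsnoise
# after scaling, rms(noise) becomes rmsnoise * noisescalar
achieved = 20 * math.log10(rmsclean / (rmsnoise * noisescalar))
print(round(achieved, 6))  # 6.0: the clean signal sits exactly 6 dB above the noise
```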
-
-def snr_mix(src: torch.Tensor, dst: torch.Tensor, snr_low: int, snr_high: int, min_overlap: float):
- if snr_low == snr_high:
- snr = snr_low
- else:
- snr = np.random.randint(snr_low, snr_high)
- mix = snr_mixer(src, dst, snr, min_overlap)
- return mix
-
-
-def mix_text(src_text: str, dst_text: str):
- """Mix text from different sources by concatenating them."""
- if src_text == dst_text:
- return src_text
- return src_text + " " + dst_text
-
-
-def mix_samples(wavs: torch.Tensor, infos: tp.List[SoundInfo], aug_p: float, mix_p: float,
- snr_low: int, snr_high: int, min_overlap: float):
- """Mix samples within a batch, summing the waveforms and concatenating the text infos.
-
- Args:
- wavs (torch.Tensor): Audio tensors of shape [B, C, T].
- infos (list[SoundInfo]): List of SoundInfo items corresponding to the audio.
- aug_p (float): Augmentation probability.
- mix_p (float): Proportion of items in the batch to mix (and merge) together.
- snr_low (int): Lowerbound for sampling SNR.
- snr_high (int): Upperbound for sampling SNR.
- min_overlap (float): Minimum overlap between mixed samples.
- Returns:
- tuple[torch.Tensor, list[SoundInfo]]: A tuple containing the mixed wavs
- and mixed SoundInfo for the given batch.
- """
- # no mixing to perform within the batch
- if mix_p == 0:
- return wavs, infos
-
- if random.uniform(0, 1) < aug_p:
- # perform all augmentations on waveforms as [B, T]
- # randomly picking pairs of audio to mix
- assert wavs.size(1) == 1, f"Mix samples requires monophonic audio but C={wavs.size(1)}"
- wavs = wavs.mean(dim=1, keepdim=False)
- B, T = wavs.shape
- k = int(mix_p * B)
- mixed_sources_idx = torch.randperm(B)[:k]
- mixed_targets_idx = torch.randperm(B)[:k]
- aug_wavs = snr_mix(
- wavs[mixed_sources_idx],
- wavs[mixed_targets_idx],
- snr_low,
- snr_high,
- min_overlap,
- )
- # mixing textual descriptions in metadata
- descriptions = [info.description for info in infos]
- aug_infos = []
- for i, j in zip(mixed_sources_idx, mixed_targets_idx):
- text = mix_text(descriptions[i], descriptions[j])
- m = replace(infos[i])
- m.description = text
- aug_infos.append(m)
-
- # back to [B, C, T]
- aug_wavs = aug_wavs.unsqueeze(1)
- assert aug_wavs.shape[0] > 0, "Samples mixing returned empty batch."
- assert aug_wavs.dim() == 3, f"Returned wav should be [B, C, T] but dim = {aug_wavs.dim()}"
- assert aug_wavs.shape[0] == len(aug_infos), "Mismatch between number of wavs and infos in the batch"
-
- return aug_wavs, aug_infos # [B, C, T]
- else:
- # randomly pick samples in the batch to match
- # the batch size when performing audio mixing
- B, C, T = wavs.shape
- k = int(mix_p * B)
- wav_idx = torch.randperm(B)[:k]
- wavs = wavs[wav_idx]
- infos = [infos[i] for i in wav_idx]
- assert wavs.shape[0] == len(infos), "Mismatch between number of wavs and infos in the batch"
-
- return wavs, infos # [B, C, T]
diff --git a/spaces/breezedeus/pix2text/README.md b/spaces/breezedeus/pix2text/README.md
deleted file mode 100644
index 4a3390ce255888ad000f26b424f18f316d9dbf14..0000000000000000000000000000000000000000
--- a/spaces/breezedeus/pix2text/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pix2Text
-emoji: 🅿❷🆃
-colorFrom: red
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bright1/Sepsis-Prediction-API/Dockerfile b/spaces/bright1/Sepsis-Prediction-API/Dockerfile
deleted file mode 100644
index ca16c8ca726b6a0336cacc0f2cfffbb0520b5d02..0000000000000000000000000000000000000000
--- a/spaces/bright1/Sepsis-Prediction-API/Dockerfile
+++ /dev/null
@@ -1,17 +0,0 @@
-# Base image with Python 3.9
-FROM python:3.9
-
-# Set the working directory inside the container
-WORKDIR /code
-
-# Copy the dependency list first to leverage Docker layer caching
-COPY ./requirements.txt /code/requirements.txt
-
-# Install Python dependencies
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Copy the application source code
-COPY ./src /code/src
-
-# Serve the FastAPI app with uvicorn on the port expected by HF Spaces
-CMD ["uvicorn", "src.app.app:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/texture.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/texture.py
deleted file mode 100644
index 477759729d7b995a4f276e81d649617d045a066e..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/texture.py
+++ /dev/null
@@ -1,259 +0,0 @@
-"""Textures, conforming to the glTF 2.0 standards as specified in
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-texture
-
-Author: Matthew Matl
-"""
-import numpy as np
-
-from OpenGL.GL import *
-
-from .utils import format_texture_source
-from .sampler import Sampler
-
-
-class Texture(object):
- """A texture and its sampler.
-
- Parameters
- ----------
- name : str, optional
- The user-defined name of this object.
- sampler : :class:`Sampler`
- The sampler used by this texture.
- source : (h,w,c) uint8 or (h,w,c) float or :class:`PIL.Image.Image`
- The image used by this texture. If None, the texture is created
- empty and width and height must be specified.
- source_channels : str
- Either `D`, `R`, `RG`, `GB`, `RGB`, or `RGBA`. Indicates the
- channels to extract from `source`. Any missing channels will be filled
- with `1.0`.
- width : int, optional
- For empty textures, the width of the texture buffer.
- height : int, optional
- For empty textures, the height of the texture buffer.
- tex_type : int
- Either GL_TEXTURE_2D or GL_TEXTURE_CUBE.
- data_format : int
- Either GL_UNSIGNED_BYTE or GL_FLOAT.
- """
-
- def __init__(self,
- name=None,
- sampler=None,
- source=None,
- source_channels=None,
- width=None,
- height=None,
- tex_type=GL_TEXTURE_2D,
- data_format=GL_UNSIGNED_BYTE):
- self.source_channels = source_channels
- self.name = name
- self.sampler = sampler
- self.source = source
- self.width = width
- self.height = height
- self.tex_type = tex_type
- self.data_format = data_format
-
- self._texid = None
- self._is_transparent = False
-
- @property
- def name(self):
- """str : The user-defined name of this object.
- """
- return self._name
-
- @name.setter
- def name(self, value):
- if value is not None:
- value = str(value)
- self._name = value
-
- @property
- def sampler(self):
- """:class:`Sampler` : The sampler used by this texture.
- """
- return self._sampler
-
- @sampler.setter
- def sampler(self, value):
- if value is None:
- value = Sampler()
- self._sampler = value
-
- @property
- def source(self):
- """(h,w,c) uint8 or float or :class:`PIL.Image.Image` : The image
- used in this texture.
- """
- return self._source
-
- @source.setter
- def source(self, value):
- if value is None:
- self._source = None
- else:
- self._source = format_texture_source(value, self.source_channels)
- self._is_transparent = False
-
- @property
- def source_channels(self):
- """str : The channels that were extracted from the original source.
- """
- return self._source_channels
-
- @source_channels.setter
- def source_channels(self, value):
- self._source_channels = value
-
- @property
- def width(self):
- """int : The width of the texture buffer.
- """
- return self._width
-
- @width.setter
- def width(self, value):
- self._width = value
-
- @property
- def height(self):
- """int : The height of the texture buffer.
- """
- return self._height
-
- @height.setter
- def height(self, value):
- self._height = value
-
- @property
- def tex_type(self):
- """int : The type of the texture.
- """
- return self._tex_type
-
- @tex_type.setter
- def tex_type(self, value):
- self._tex_type = value
-
- @property
- def data_format(self):
- """int : The format of the texture data.
- """
- return self._data_format
-
- @data_format.setter
- def data_format(self, value):
- self._data_format = value
-
- def is_transparent(self, cutoff=1.0):
- """bool : If True, the texture is partially transparent.
- """
- if self._is_transparent is None:
- self._is_transparent = False
- if self.source_channels == 'RGBA' and self.source is not None:
- if np.any(self.source[:,:,3] < cutoff):
- self._is_transparent = True
- return self._is_transparent
-
- def delete(self):
- """Remove this texture from the OpenGL context.
- """
- self._unbind()
- self._remove_from_context()
-
- ##################
- # OpenGL code
- ##################
- def _add_to_context(self):
- if self._texid is not None:
- raise ValueError('Texture already loaded into OpenGL context')
-
- fmt = GL_DEPTH_COMPONENT
- if self.source_channels == 'R':
- fmt = GL_RED
- elif self.source_channels == 'RG' or self.source_channels == 'GB':
- fmt = GL_RG
- elif self.source_channels == 'RGB':
- fmt = GL_RGB
- elif self.source_channels == 'RGBA':
- fmt = GL_RGBA
-
- # Generate the OpenGL texture
- self._texid = glGenTextures(1)
- glBindTexture(self.tex_type, self._texid)
-
- # Flip data for OpenGL buffer
- data = None
- width = self.width
- height = self.height
- if self.source is not None:
- data = np.ascontiguousarray(np.flip(self.source, axis=0).flatten())
- width = self.source.shape[1]
- height = self.source.shape[0]
-
- # Bind texture and generate mipmaps
- glTexImage2D(
- self.tex_type, 0, fmt, width, height, 0, fmt,
- self.data_format, data
- )
- if self.source is not None:
- glGenerateMipmap(self.tex_type)
-
- if self.sampler.magFilter is not None:
- glTexParameteri(
- self.tex_type, GL_TEXTURE_MAG_FILTER, self.sampler.magFilter
- )
- else:
- if self.source is not None:
- glTexParameteri(self.tex_type, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
- else:
- glTexParameteri(self.tex_type, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- if self.sampler.minFilter is not None:
- glTexParameteri(
- self.tex_type, GL_TEXTURE_MIN_FILTER, self.sampler.minFilter
- )
- else:
- if self.source is not None:
- glTexParameteri(self.tex_type, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
- else:
- glTexParameteri(self.tex_type, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
-
- glTexParameteri(self.tex_type, GL_TEXTURE_WRAP_S, self.sampler.wrapS)
- glTexParameteri(self.tex_type, GL_TEXTURE_WRAP_T, self.sampler.wrapT)
- border_color = 255 * np.ones(4).astype(np.uint8)
- if self.data_format == GL_FLOAT:
- border_color = np.ones(4).astype(np.float32)
- glTexParameterfv(
- self.tex_type, GL_TEXTURE_BORDER_COLOR,
- border_color
- )
-
- # Unbind texture
- glBindTexture(self.tex_type, 0)
-
- def _remove_from_context(self):
- if self._texid is not None:
- # TODO OPENGL BUG?
- # glDeleteTextures(1, [self._texid])
- glDeleteTextures([self._texid])
- self._texid = None
-
- def _in_context(self):
- return self._texid is not None
-
- def _bind(self):
- # TODO HANDLE INDEXING INTO OTHER UV's
- glBindTexture(self.tex_type, self._texid)
-
- def _unbind(self):
- glBindTexture(self.tex_type, 0)
-
- def _bind_as_depth_attachment(self):
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
- self.tex_type, self._texid, 0)
-
- def _bind_as_color_attachment(self):
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
- self.tex_type, self._texid, 0)
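
As a usage sketch for the class above (not from the original repo): textures are constructed from an image array up front, while the OpenGL upload in `_add_to_context` only happens later, when a renderer binds them inside a live GL context.

```python
import numpy as np
from pyrender import Texture

# Build a checkerboard RGB texture; no GL context is needed at this point.
checker = (np.indices((64, 64)).sum(axis=0) % 2 * 255).astype(np.uint8)
img = np.stack([checker] * 3, axis=-1)  # (h, w, 3) uint8
tex = Texture(source=img, source_channels='RGB')
print(tex.is_transparent())  # False: no alpha channel was provided
```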
diff --git a/spaces/cahodk/live-ml5-facemesh-p5js/README.md b/spaces/cahodk/live-ml5-facemesh-p5js/README.md
deleted file mode 100644
index 842aae2c03b2fd21e3f2b0c157ad79611ce2aca0..0000000000000000000000000000000000000000
--- a/spaces/cahodk/live-ml5-facemesh-p5js/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Live Ml5 Facemesh P5js
-emoji: 🌍
-colorFrom: green
-colorTo: indigo
-sdk: static
-pinned: false
-license: lgpl-2.1
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GdImageFile.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GdImageFile.py
deleted file mode 100644
index bafc43a19d432290867a5c08b9820f2e4f79aea3..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GdImageFile.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# GD file handling
-#
-# History:
-# 1996-04-12 fl Created
-#
-# Copyright (c) 1997 by Secret Labs AB.
-# Copyright (c) 1996 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-"""
-.. note::
- This format cannot be automatically recognized, so the
- class is not registered for use with :py:func:`PIL.Image.open()`. To open a
- gd file, use the :py:func:`PIL.GdImageFile.open()` function instead.
-
-.. warning::
- THE GD FORMAT IS NOT DESIGNED FOR DATA INTERCHANGE. This
- implementation is provided for convenience and demonstrational
- purposes only.
-"""
-
-
-from . import ImageFile, ImagePalette, UnidentifiedImageError
-from ._binary import i16be as i16
-from ._binary import i32be as i32
-
-
-class GdImageFile(ImageFile.ImageFile):
- """
- Image plugin for the GD uncompressed format. Note that this format
- is not supported by the standard :py:func:`PIL.Image.open()` function. To use
- this plugin, you have to import the :py:mod:`PIL.GdImageFile` module and
- use the :py:func:`PIL.GdImageFile.open()` function.
- """
-
- format = "GD"
- format_description = "GD uncompressed images"
-
- def _open(self):
- # Header
- s = self.fp.read(1037)
-
- if i16(s) not in [65534, 65535]:
- msg = "Not a valid GD 2.x .gd file"
- raise SyntaxError(msg)
-
- self.mode = "L" # FIXME: "P"
- self._size = i16(s, 2), i16(s, 4)
-
- true_color = s[6]
- true_color_offset = 2 if true_color else 0
-
- # transparency index
- tindex = i32(s, 7 + true_color_offset)
- if tindex < 256:
- self.info["transparency"] = tindex
-
- self.palette = ImagePalette.raw(
- "XBGR", s[7 + true_color_offset + 4 : 7 + true_color_offset + 4 + 256 * 4]
- )
-
- self.tile = [
- (
- "raw",
- (0, 0) + self.size,
- 7 + true_color_offset + 4 + 256 * 4,
- ("L", 0, 1),
- )
- ]
-
-
-def open(fp, mode="r"):
- """
- Load texture from a GD image file.
-
- :param fp: GD file name, or an opened file handle.
- :param mode: Optional mode. In this version, if the mode argument
- is given, it must be "r".
- :returns: An image instance.
- :raises OSError: If the image could not be read.
- """
- if mode != "r":
- msg = "bad mode"
- raise ValueError(msg)
-
- try:
- return GdImageFile(fp)
- except SyntaxError as e:
- msg = "cannot identify this image file"
- raise UnidentifiedImageError(msg) from e
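
A usage sketch for the module (the file path is a placeholder; any valid GD 2.x file would do):

```python
from PIL import GdImageFile

# GD files are not auto-detected, so go through this module's open(),
# not PIL.Image.open(). "texture.gd" is an assumed placeholder path.
with open("texture.gd", "rb") as fp:
    im = GdImageFile.open(fp)
    print(im.size, im.mode)  # e.g. (64, 64) 'L'
```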
diff --git a/spaces/cccc-c/web-ui-pub/_next/static/chunks/780.aecf08b05b0b9d76.js b/spaces/cccc-c/web-ui-pub/_next/static/chunks/780.aecf08b05b0b9d76.js
deleted file mode 100644
index 54b69b4cb6454762d71ced8d20e2a5e11937716b..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/web-ui-pub/_next/static/chunks/780.aecf08b05b0b9d76.js
+++ /dev/null
@@ -1,260 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[780],{33747:function(e,t,n){"use strict";n.d(t,{YF:function(){return p},x7:function(){return l}});var r=n(21828),o=n(41778),i=n(86006),a=n(8431);let l=e=>({name:"arrow",options:e,fn(t){let{element:n,padding:o}="function"==typeof e?e(t):e;if(n&&({}).hasOwnProperty.call(n,"current")){if(null!=n.current)return(0,r.x7)({element:n.current,padding:o}).fn(t)}else if(n)return(0,r.x7)({element:n,padding:o}).fn(t);return{}}});var s="undefined"!=typeof document?i.useLayoutEffect:i.useEffect;function c(e,t){let n,r,o;if(e===t)return!0;if(typeof e!=typeof t)return!1;if("function"==typeof e&&e.toString()===t.toString())return!0;if(e&&t&&"object"==typeof e){if(Array.isArray(e)){if((n=e.length)!=t.length)return!1;for(r=n;0!=r--;)if(!c(e[r],t[r]))return!1;return!0}if((n=(o=Object.keys(e)).length)!==Object.keys(t).length)return!1;for(r=n;0!=r--;)if(!({}).hasOwnProperty.call(t,o[r]))return!1;for(r=n;0!=r--;){let n=o[r];if(("_owner"!==n||!e.$$typeof)&&!c(e[n],t[n]))return!1}return!0}return e!=e&&t!=t}function u(e){if("undefined"==typeof window)return 1;let t=e.ownerDocument.defaultView||window;return t.devicePixelRatio||1}function f(e,t){let n=u(e);return Math.round(t*n)/n}function d(e){let t=i.useRef(e);return s(()=>{t.current=e}),t}function p(e){void 0===e&&(e={});let{placement:t="bottom",strategy:n="absolute",middleware:r=[],platform:l,elements:{reference:p,floating:h}={},transform:g=!0,whileElementsMounted:m,open:b}=e,[v,y]=i.useState({x:0,y:0,strategy:n,placement:t,middlewareData:{},isPositioned:!1}),[x,w]=i.useState(r);c(x,r)||w(r);let[E,S]=i.useState(null),[k,_]=i.useState(null),O=i.useCallback(e=>{e!=R.current&&(R.current=e,S(e))},[S]),C=i.useCallback(e=>{e!==T.current&&(T.current=e,_(e))},[_]),A=p||E,N=h||k,R=i.useRef(null),T=i.useRef(null),P=i.useRef(v),M=d(m),j=d(l),L=i.useCallback(()=>{if(!R.current||!T.current)return;let e={placement:t,strategy:n,middleware:x};j.current&&(e.platform=j.current),(0,o.oo)(R.current,T.current,e).then(e=>{let t={...e,isPositioned:!0};I.current&&!c(P.current,t)&&(P.current=t,a.flushSync(()=>{y(t)}))})},[x,t,n,j]);s(()=>{!1===b&&P.current.isPositioned&&(P.current.isPositioned=!1,y(e=>({...e,isPositioned:!1})))},[b]);let I=i.useRef(!1);s(()=>(I.current=!0,()=>{I.current=!1}),[]),s(()=>{if(A&&(R.current=A),N&&(T.current=N),A&&N){if(M.current)return M.current(A,N,L);L()}},[A,N,L,M]);let D=i.useMemo(()=>({reference:R,floating:T,setReference:O,setFloating:C}),[O,C]),F=i.useMemo(()=>({reference:A,floating:N}),[A,N]),B=i.useMemo(()=>{let e={position:n,left:0,top:0};if(!F.floating)return e;let t=f(F.floating,v.x),r=f(F.floating,v.y);return g?{...e,transform:"translate("+t+"px, "+r+"px)",...u(F.floating)>=1.5&&{willChange:"transform"}}:{position:n,left:t,top:r}},[n,g,F.floating,v.x,v.y]);return i.useMemo(()=>({...v,update:L,refs:D,elements:F,floatingStyles:B}),[v,L,D,F,B])}},52134:function(e,t,n){"use strict";let r;n.d(t,{wD:function(){return eg},vs:function(){return ev},bQ:function(){return eC},YF:function(){return eA},NI:function(){return eR},JA:function(){return ey},c0:function(){return eZ},qs:function(){return eq}});var o=n(41778),i=n(33747),a=n(86006),l=n.t(a,2),s=n(472),c='input:not([inert]),select:not([inert]),textarea:not([inert]),a[href]:not([inert]),button:not([inert]),[tabindex]:not(slot):not([inert]),audio[controls]:not([inert]),video[controls]:not([inert]),[contenteditable]:not([contenteditable="false"]):not([inert]),details>summary:first-of-type:not([inert]),details:not([inert])',u="undefined"==typeof 
Element,f=u?function(){}:Element.prototype.matches||Element.prototype.msMatchesSelector||Element.prototype.webkitMatchesSelector,d=!u&&Element.prototype.getRootNode?function(e){var t;return null==e?void 0:null===(t=e.getRootNode)||void 0===t?void 0:t.call(e)}:function(e){return null==e?void 0:e.ownerDocument},p=function e(t,n){void 0===n&&(n=!0);var r,o=null==t?void 0:null===(r=t.getAttribute)||void 0===r?void 0:r.call(t,"inert");return""===o||"true"===o||n&&t&&e(t.parentNode)},h=function(e){var t,n=null==e?void 0:null===(t=e.getAttribute)||void 0===t?void 0:t.call(e,"contenteditable");return""===n||"true"===n},g=function(e,t,n){if(p(e))return[];var r=Array.prototype.slice.apply(e.querySelectorAll(c));return t&&f.call(e,c)&&r.unshift(e),r=r.filter(n)},m=function e(t,n,r){for(var o=[],i=Array.from(t);i.length;){var a=i.shift();if(!p(a,!1)){if("SLOT"===a.tagName){var l=a.assignedElements(),s=e(l.length?l:a.children,!0,r);r.flatten?o.push.apply(o,s):o.push({scopeParent:a,candidates:s})}else{f.call(a,c)&&r.filter(a)&&(n||!t.includes(a))&&o.push(a);var u=a.shadowRoot||"function"==typeof r.getShadowRoot&&r.getShadowRoot(a),d=!p(u,!1)&&(!r.shadowRootFilter||r.shadowRootFilter(a));if(u&&d){var h=e(!0===u?a.children:u.children,!0,r);r.flatten?o.push.apply(o,h):o.push({scopeParent:a,candidates:h})}else i.unshift.apply(i,a.children)}}}return o},b=function(e){return!isNaN(parseInt(e.getAttribute("tabindex"),10))},v=function(e){if(!e)throw Error("No node provided");return e.tabIndex<0&&(/^(AUDIO|VIDEO|DETAILS)$/.test(e.tagName)||h(e))&&!b(e)?0:e.tabIndex},y=function(e,t){var n=v(e);return n<0&&t&&!b(e)?0:n},x=function(e,t){return e.tabIndex===t.tabIndex?e.documentOrder-t.documentOrder:e.tabIndex-t.tabIndex},w=function(e){return"INPUT"===e.tagName},E=function(e,t){for(var n=0;n239?4:c>223?3:c>191?2:1;if(o+f<=n)switch(f){case 1:c<128&&(u=c);break;case 2:(192&(i=e[o+1]))==128&&(s=(31&c)<<6|63&i)>127&&(u=s);break;case 3:i=e[o+1],a=e[o+2],(192&i)==128&&(192&a)==128&&(s=(15&c)<<12|(63&i)<<6|63&a)>2047&&(s<55296||s>57343)&&(u=s);break;case 4:i=e[o+1],a=e[o+2],l=e[o+3],(192&i)==128&&(192&a)==128&&(192&l)==128&&(s=(15&c)<<18|(63&i)<<12|(63&a)<<6|63&l)>65535&&s<1114112&&(u=s)}null===u?(u=65533,f=1):u>65535&&(u-=65536,r.push(u>>>10&1023|55296),u=56320|1023&u),r.push(u),o+=f}return function(e){var t=e.length;if(t<=4096)return String.fromCharCode.apply(String,e);for(var n="",r=0;r>>=0,isFinite(n)?(n>>>=0,void 0===r&&(r="utf8")):(r=n,n=void 0);else throw Error("Buffer.write(string, encoding, offset[, length]) is no longer supported");var o,i,a,l,s,c,u,f,d,p,h,g,m=this.length-t;if((void 0===n||n>m)&&(n=m),e.length>0&&(n<0||t<0)||t>this.length)throw RangeError("Attempt to write outside buffer bounds");r||(r="utf8");for(var b=!1;;)switch(r){case"hex":return function(e,t,n,r){n=Number(n)||0;var o=e.length-n;r?(r=Number(r))>o&&(r=o):r=o;var i=t.length;r>i/2&&(r=i/2);for(var a=0;a