diff --git a/spaces/101-5/gpt4free/testing/aiservice/README.md b/spaces/101-5/gpt4free/testing/aiservice/README.md
deleted file mode 100644
index 83b06481024eaa01c8928f0f21c52f251749caea..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/testing/aiservice/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-https://github.com/xtekky/gpt4free/issues/40#issuecomment-1629152431
-probably gpt-3.5
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md
deleted file mode 100644
index a58842b74826c1a4b54bac39377f68cece1ae932..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-Download Proteus 8 Full Crack Google Drive: How to Do It and Why You Should Avoid It
-
-Proteus 8 is powerful software for designing, simulating, and testing electronic circuits and systems. It is widely used by engineers, students, hobbyists, and professionals who need to create and verify electronic projects, and it offers many features and tools for circuit design and analysis.
-
-However, Proteus 8 is not free software and requires a license to activate its full feature set. Some people try to download a Proteus 8 "full crack" from Google Drive, hoping to use the software without paying for it. This is not a smart move, as it can expose your computer and data to various risks. In this article, we explain why you should avoid downloading a Proteus 8 crack from Google Drive and how to use the genuine version of the software safely and legally.
-Why You Should Avoid Downloading Proteus 8 Full Crack Google Drive
-
-Downloading a cracked copy of Proteus 8 from Google Drive may seem convenient and easy, but it comes with serious drawbacks and risks:
-
-
-It is illegal. Downloading and using a cracked copy of Proteus 8 violates the software's terms and conditions and the developer's intellectual property rights. You may face legal action from the developer or the authorities if you are caught using pirated software.
-
-It is unreliable. A cracked copy may not work properly or include all the features and updates of the original software. You may encounter errors, bugs, crashes, or compatibility issues that affect your work and productivity.
-
-It is insecure. A cracked copy may contain viruses, malware, spyware, or other malicious programs that can harm your computer and data. These programs can steal your personal information, corrupt your files, damage your system, or even lock your computer and demand a ransom.
-
-It is unethical. Using pirated software is unfair to the developer, who has invested time, money, and effort to create and maintain it. You also deprive yourself of the benefits of a genuine, licensed copy for your circuit design and simulation work.
-
-
-How to Use Proteus 8 Safely and Legally
-
-If you want to use Proteus 8 for electronic circuit design and simulation, purchase a genuine license from the official website or an authorized dealer. That way, you can enjoy the full features and benefits of the software without risk or hassle. Here are the steps:
-
-
-Download the software from the official website. Go to https://www.labcenter.com/downloads/ and choose the latest version of Proteus 8 for your operating system. You can also download a free trial version to test the software before buying it.
-
-Install the software on your computer. Run the downloaded file and follow the on-screen instructions to complete the installation. You may need to enter your administrator password or grant permission to install the software.
-
-Activate the license. After installing the software, activate the license online or offline. To activate online, enter the serial number and activation key you received when purchasing the license. To activate offline, generate an unlock key on the website using your serial number and the request code shown in the software.
-
-Create a project. Once the license is activated, create a project in Proteus 8 by entering a project name, description, location, and so on. You can also import existing projects from other formats.
-
-Start using the software. After creating a project, you can start using Proteus 8 for circuit design and simulation, with features and tools such as schematic capture, PCB
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md
deleted file mode 100644
index b0819eb97b4bea3839cb3a3a57722860b5238c4d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
-How to Download CSI SAFE 2020 for Free
-
-CSI SAFE is powerful software for the structural design and analysis of concrete slabs and foundations. It can handle complex geometries, loads, and reinforcement patterns, and it can perform nonlinear and dynamic analysis. CSI SAFE 2020 is the latest version of the software and offers many new features and enhancements.
-If you want to try CSI SAFE 2020 for free, you can download a trial version from the official website of Computers and Structures, Inc. (CSI). The trial version is valid for 30 days and has full functionality. However, you will need to register with your name and email address to get the download link and activation code.
-
-To download CSI SAFE 2020 for free, follow these steps:
-Go to the official CSI website and open the trial request form.
-Fill out the form with your name, email address, country, and company name. You can also select your preferred language and unit system.
-
-Check your email for the download link and activation code. You may need to check your spam folder if you don't see it in your inbox.
-
-Click on the download link and save the file to your computer. The file size is about 1 GB.
-
-Run the installer and follow the instructions. You will need to enter the activation code when prompted.
-
-Enjoy using CSI SAFE 2020 for free for 30 days!
-
-
-Note that the trial version of CSI SAFE 2020 is for evaluation purposes only and cannot be used for commercial or academic projects. If you want to use the software for longer or for professional purposes, you will need to purchase a license from CSI or an authorized reseller.
-CSI SAFE 2020 is comprehensive software for designing and analyzing concrete slabs and foundations. It can handle various types of slabs, such as flat, waffle, ribbed, mat, and composite, and it can design and detail foundations such as isolated, combined, strip, pile cap, and mat.
-
-CSI SAFE 2020 has a user-friendly interface that makes it easy to create and edit models. You can import and export data from other CSI products, such as SAP2000, ETABS, and CSiBridge, as well as from other formats, such as DXF, DWG, IFC, and Excel.
-
-
-CSI SAFE 2020 has a powerful analysis engine that can perform linear and nonlinear analysis of slabs and foundations. It can account for effects such as cracking, creep, shrinkage, temperature, and soil-structure interaction, and it can perform dynamic analysis, including modal, response spectrum, time history, and harmonic.
-
-CSI SAFE 2020 has a comprehensive design and detailing module that can check and optimize the reinforcement of slabs and foundations according to various codes and standards. It can generate detailed reports and drawings showing the layout, quantities, and notes for the reinforcement, and it can export the reinforcement data to BIM software such as Revit.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md
deleted file mode 100644
index f4fe04224bb7f90b58f97f54f06a56300add9122..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-Ghost Recon Breakpoint PC Walkthrough: Tips and Tricks for Surviving the Open World
-
-
-If you are looking for a Ghost Recon Breakpoint PC walkthrough, you have come to the right place. Ghost Recon Breakpoint is a tactical shooter that puts you in the shoes of a special forces soldier who has to survive on a hostile island. The game features a vast open world that you can explore, missions to complete, and enemies to fight. However, the game can also be challenging and overwhelming, especially for beginners. That's why we have prepared this walkthrough to help you out.
-
-
-In this walkthrough, we cover basic tips and tricks that will make your life easier in the game, along with pointers on how to customize your character, use your skills and gadgets, and deal with different types of enemies. Whether you are playing solo or co-op, this guide will help you enjoy the game more.
-
-Customize Your Character
-
-One of the first things you should do in Ghost Recon Breakpoint is customize your character. You can choose from different classes, each with its own abilities and perks, and you can also change your appearance, gear, and weapons. You can access the customization menu by pressing I on your keyboard or by visiting a bivouac (a campsite where you can rest and prepare).
-
-
-The four classes available in Ghost Recon Breakpoint are:
-
-
-
-Assault: This class suits aggressive players who like to deal damage and take hits. Assaults have increased health and can use an adrenaline rush ability that boosts their damage and resistance.
-
-Sharpshooter: This class suits stealthy players who like to snipe enemies from afar. Sharpshooters have increased accuracy and can use a sensor launcher ability that reveals enemy locations.
-
-Panther: This class suits sneaky players who like to infiltrate enemy bases and avoid detection. Panthers have increased stealth and can use a cloaking spray ability that makes them invisible for a short time.
-
-Medic: This class suits supportive players who like to heal and revive their teammates. Medics have increased revive speed and can use a drone healer ability that heals themselves and their allies.
-
-
-
-You can switch between classes at any time by visiting a bivouac, and you can unlock new skills and perks for each class by earning skill points and completing challenges.
-
-
-Use Your Skills and Gadgets
-
-
-Another important aspect of Ghost Recon Breakpoint is using your skills and gadgets effectively. A skill tree lets you unlock abilities that enhance your combat, survival, and reconnaissance capabilities; you can spend skill points to unlock new skills or upgrade existing ones. You can also equip gadgets that give you an edge in different situations.
-
-
-Some of the most useful skills and gadgets in Ghost Recon Breakpoint are:
-
-
-
-Binoculars: These let you scout the area and mark enemies, vehicles, and resources. You can also use them to sync-shot enemies with your teammates or your AI companions.
-
-Drones: These remote-controlled devices can scout, distract, or attack enemies. You can also use them to hack enemy drones or vehicles.
-
-Syringes: These consumable items heal you or your teammates. You can also use them to cure injuries or status effects.
-
-Mines: These explosive devices can be placed on the ground or on walls. They detonate when an enemy comes near or when you trigger them manually.
-
-C4: These explosive charges can be placed on vehicles, doors, or generators. They detonate when you trigger them manually or when an enemy shoots them.
-
-
-
-You can access your skills and gadgets by pressing TAB on your keyboard or by using the wheel menu. You can also craft new gadgets or refill your ammo at bivouacs or ammo crates.
-
-
-Deal with Different Types of Enemies
-
-
-The last thing we cover in this walkthrough is how to deal with different types of enemies. The game features a variety of enemies with different behaviors, weapons, and weaknesses, so you will need to adapt your strategy depending on the enemy you are facing.
-
-
-
-Some of the most common types of enemies in Ghost Recon Breakpoint are:
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md
deleted file mode 100644
index 72a466f07307417e0b2f6a003820a2e50c3cc9ea..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
-Catalogo De Sellos Edifil Espana 2012 Pdf: A Guide for Philately Lovers
-
-
-If you are passionate about philately and interested in the postal history of Spain and its dependencies, you will surely want to get your hands on the Catalogo De Sellos Edifil Espana 2012 Pdf, a reference work that covers every stamp issued from 1850 to 2012.
-The Catalogo De Sellos Edifil Espana 2012 Pdf is a digital document that you can download for free from the internet. It offers complete, detailed information on each stamp, with its image, description, face value, issue date, print run, perforation, printing method, and market value.
-
-
-In addition, the catalog also includes the stamps of Andorra, the former Spanish colonies, the city numbers, covers, and souvenir sheets, all presented with care and excellent graphic quality.
-
-
-What Are the Advantages of the Catalogo De Sellos Edifil Espana 2012 Pdf?
-
-
-The catalog has many advantages for collectors and philately enthusiasts. Among them:
-
-
-
-It is an up-to-date, complete catalog covering more than 160 years of the postal history of Spain and its dependencies.
-
-It is easy to consult and use, since it can be accessed from any electronic device with an internet connection.
-
-It is free: it can be downloaded at no cost and without registering on any website.
-
-It is useful and practical, letting you know the market value of your stamps and plan your purchases and sales.
-
-It is interesting and educational, letting you learn about the history, culture, and art of Spain and its territories through its stamps.
-
-
-
-How to Download the Catalogo De Sellos Edifil Espana 2012 Pdf
-
-
-Downloading the catalog is very simple. Just follow these steps:
-
-
-
-Open your preferred internet search engine and type the name of the catalog: Catalogo De Sellos Edifil Espana 2012 Pdf.
-
-Select one of the results and click on it. It will take you to a web page where you can view the catalog online or download it in PDF format.
-
-To view the catalog online, simply scroll through the pages and zoom in to enlarge the images. To download it as a PDF, click the download button and choose the folder where you want to save it.
-
-Once the catalog has been downloaded to your device, you can open it with any program that reads PDF files and enjoy it whenever you like.
-
-
-
-That is how easy it is to obtain the Catalogo De Sellos Edifil Espana 2012 Pdf, an essential document for philately lovers. Don't wait any longer; download it now and be surprised by the quantity and quality of the stamps it contains.
-
-What Stamps Can You Find in the Catalogo De Sellos Edifil Espana 2012 Pdf?
-
-
-The catalog offers a great variety of stamps from different periods, themes, and styles. Some of the stamps you can find are:
-
-
-
-
-Stamps of the old Italian states, such as the Kingdom of Vittorio Emanuele II, the Kingdom of Italy, the R.S.I., the Luogotenenza, Trieste A and B, Fiume, and the foreign occupations.
-
-Stamps of the former Spanish colonies, such as Andorra and Somalia A.F.I., plus the post offices abroad and the local issues.
-
-Stamps from the First and Second World War periods, including the Italian occupations and the foreign occupations of Spanish territories.
-
-Stamps of San Marino, the Vatican, and the Sovereign Military Order of Malta, with their respective definitive and commemorative series.
-
-Stamps of Spain from 1850 to 2012, with their various philatelic series, such as the King Juan Carlos I definitive series, the tourism and monuments series, the America-UPAEP series, the Spanish painting series, the lighthouses series, the mycology series, the popular dances series, and the Spanish fashion series.
-
-
-
-How to Use the Catalogo De Sellos Edifil Espana 2012 Pdf
-
-
-The catalog is a very useful tool for collectors and philately enthusiasts. To use it correctly, keep the following in mind:
-
-
-
-Identify the stamp you want to look up and find its catalog number in the alphabetical or thematic index.
-
-Locate the stamp in the catalog and compare its image with the actual stamp. Pay attention to details such as the color, perforation, printing method, and marks.
-
-Read the stamp's description and note its face value, issue date, print run, and market value. You can also see whether the stamp belongs to a series or a souvenir sheet.
-
-Repeat the process for every stamp you want to look up and organize your collection according to your own criteria.
-
-
-
-The Catalogo De Sellos Edifil Espana 2012 Pdf is an essential document for philately lovers. Don't wait any longer and download it now. You will be surprised by the quantity and quality of the stamps it contains.
-
-What Other Publications Can You Find at Edifil?
-
-
-Edifil is a publisher specializing in philately that has been offering quality products and services to collectors for more than 80 years. Besides the Catalogo De Sellos Edifil Espana 2012 Pdf, you can find other interesting publications, such as:
-
-
-
-Stamp catalogs for other countries and regions, such as Portugal, Andorra, France, Western Europe, Eastern Europe, South America, Central America and the Caribbean, Africa, and Asia.
-
-Catalogs on other philatelic topics, such as airmail stamps, maritime mail stamps, railway mail stamps, military mail stamps, postal stationery, and pre-philately.
-
-Catalogs from other publishers and philatelic associations, such as Yvert et Tellier, Ángel Laiz, the Spanish Federation of Philatelic Societies, and the International Federation of Philately.
-
-Magazines and books on philately, postal history, and general culture, with articles, reports, interviews, news, and curiosities.
-
-
-How to Buy the Catalogo De Sellos Edifil Espana 2012 Pdf
-
-
-If you want to buy the catalog, you have several options. You can order it through the Edifil website, paying by credit or debit card, PayPal, or bank transfer. You can also order by phone or email, providing your personal details and payment method. Another option is to visit a bookshop or a shop specializing in philately and request the catalog there.
-
-
-The price of the Catalogo De Sellos Edifil Espana 2012 Pdf is 35 euros (VAT included), and shipping is free within mainland Spain. For other destinations, check the rates on the Edifil website or contact customer service.
-
-
-Don't hesitate any longer: buy the Catalogo De Sellos Edifil Espana 2012 Pdf now, an essential catalog for philately lovers.
-
-What Are the Benefits of Philately for Your Mental Health?
-
-
-Philately is a hobby that can bring you many mental health benefits, including:
-
-
-
-It helps you relax and reduce stress, letting you disconnect from everyday worries and problems.
-
-It stimulates memory and attention, since it requires you to remember and observe the details of the stamps and their history.
-
-It encourages creativity and imagination, letting you build your own collection and personalize it to your tastes and preferences.
-
-It enriches you culturally, letting you learn about the history, geography, art, science, and society of different countries and eras through stamps.
-
-It facilitates social contact, letting you share your hobby with other people who have the same interests and passions.
-
-
-
-Philately is a hobby that can make you happier and sharper. Download the Catalogo De Sellos Edifil Espana 2012 Pdf now, an essential catalog for philately lovers.
-
-
-How to Sell Your Stamps Online
-
-
-If you have a stamp collection you want to sell online, a few tips will help you do it safely and profitably:
-
-
-
-Value your stamps correctly, using the Catalogo De Sellos Edifil Espana 2012 Pdf as a reference. Take into account the condition, rarity, and demand for your stamps.
-
-Choose a suitable platform for selling stamps online, such as eBay, Delcampe, Todocolección, or Filatelia.com. Compare the fees, terms, and reviews from other sellers.
-
-Prepare a detailed, honest description of your stamps, including the catalog number, face value, issue date, perforation, printing method, and any defects or marks. Accompany the description with clear, sharp photos.
-
-Set a fair, competitive price for your stamps, based on the market value and on other sellers' prices. You can opt for a fixed price or an auction.
-
-Take care with packaging and shipping, using padded or rigid envelopes, protective sleeves, and identification labels. Offer your buyers several shipping and tracking options.
-
-
-
-Selling your stamps online can be an easy, quick way to earn some extra money from your collection. All you need is the Catalogo De Sellos Edifil Espana 2012 Pdf, a good internet connection, and a little patience. Good luck with your sale!
-
-Conclusion
-
-
-The Catalogo De Sellos Edifil Espana 2012 Pdf is an essential document for philately lovers. It is a complete, up-to-date catalog of every stamp issued by Spain and its postal dependencies from 1850 to 2012. It is free to download from the internet and offers detailed information and excellent graphic quality for each stamp. It is useful and practical, letting you know the market value of your stamps and plan your purchases and sales. It is also interesting and educational, letting you learn about the history, culture, and art of Spain and its territories through its stamps.
-
-
-In this article, we have explained what the Catalogo De Sellos Edifil Espana 2012 Pdf is, its advantages, how to download it, what stamps you can find in it, what other publications Edifil offers, the benefits of philately for your mental health, and how to sell your stamps online. We hope you have found it useful and that you feel encouraged to download the catalog and enjoy your philately hobby.
-
-
-If you liked this article, share it with your friends and leave us a comment with your opinion. You can also subscribe to our newsletter to receive more articles on philately and other topics of interest. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md
deleted file mode 100644
index 46ed4bf9e1a743536d4c0da800b8039e720117a4..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md
+++ /dev/null
@@ -1,203 +0,0 @@
-
-
-How to Download Car Parking Multiplayer APK for PC
-
-Car Parking Multiplayer is a realistic driving simulator for Android devices that lets you explore a detailed city, customize your cars, and compete with other players online. If you are a fan of this game and want to play it on a bigger screen with better controls, you might be wondering how to download the Car Parking Multiplayer APK for PC. In this article, we show you two ways to play Car Parking Multiplayer on your PC: natively on Windows 11, or through an Android emulator.
-
-What is Car Parking Multiplayer?
-
-Car Parking Multiplayer is a game developed by olzhass that simulates driving and parking in a realistic 3D environment. You can choose from over 100 cars, from sports cars to trucks, and customize them with various parts and accessories. You can also interact with other players in online multiplayer mode: chat with them, exchange cars, or challenge them to races and drifts. You can explore the open-world city, which has areas such as airports, beaches, and deserts, and find hidden places and secrets, such as a tank or a UFO.
-Some of the features of Car Parking Multiplayer are:
-
-
-Realistic car physics and damage system
-
-Manual transmission with clutch and gear shifting
-
-Dynamic weather and day-night cycle
-
-Free walking mode and interactive elements
-
-Car tuning and customization
-
-Online multiplayer mode with voice chat
-
-Different game modes, such as racing, drifting, and free ride
-
-Over 100 cars and 82 real-life parking scenarios
-
-Frequent updates and new content
-
-
-Requirements for Car Parking Multiplayer
-
-To play Car Parking Multiplayer on your Android device, you need:
-
-
-An Android device running version 5.0 or higher
-
-At least 900 MB of free storage space
-
-A stable internet connection for online mode
-
-An optional gamepad or steering wheel for better control
-
-
-Why Play Car Parking Multiplayer on PC?
-
-While Car Parking Multiplayer is designed for mobile devices, some players prefer to play it on PC. Here are some of the advantages and disadvantages of doing so.
-
-Advantages of Playing on PC
-
-Some of the benefits of playing Car Parking Multiplayer on PC are:
-
-
-
-Bigger screen and better graphics: You can enjoy the game's realistic graphics and details on a larger monitor or TV screen, which enhances your immersion and enjoyment.
-
-Better controls and performance: You can use your keyboard and mouse or a controller to control your car more precisely and comfortably. You can also adjust the game settings to optimize performance and reduce lag or crashes.
-
-Easier communication and recording: You can use your PC's microphone and speakers to communicate with other players more clearly and conveniently. You can also use PC software to record or stream your gameplay and share it with others.
-
-More options and features: You can access features on your PC, such as modding, cheats, or hacks, that might not be available or safe on your mobile device.
-
-
-Disadvantages of Playing on PC
-
-Some of the drawbacks of playing Car Parking Multiplayer on PC are:
-
-
-Compatibility and security issues: You might encounter compatibility or security problems when playing on PC, such as errors, bugs, viruses, or malware. You might also need to update your PC's drivers or software to run the game smoothly.
-
-Higher cost and space: You might need to spend more money and disk space to play on PC, for example on a PC, a monitor, a controller, or an emulator. You might also need to download and install additional files or programs to play the game.
-
-Less mobility and convenience: You lose some of the mobility and convenience of playing on a mobile device, such as playing anywhere, anytime, or with anyone. You might also need to switch between devices or accounts to sync your progress or data.
-
-
-How to Play Car Parking Multiplayer on PC with Windows 11
-
-One way to play Car Parking Multiplayer on PC is to use Windows 11, the latest operating system from Microsoft, which supports Android apps natively. This means you can run Android apps on your PC without using any emulators or third-party software. Here are the steps to play Car Parking Multiplayer on PC with Windows 11.
-
-Steps to Install Windows Subsystem for Android
-
-Before you can install and play Car Parking Multiplayer on your PC with Windows 11, you need to enable the Windows Subsystem for Android (WSA), the feature that allows you to run Android apps on your PC. Here are the steps to install WSA:
-
-
-Open the Start menu and search for "Turn Windows features on or off". Click on it to open a new window.
-
-Scroll down and find "Windows Subsystem for Android". Check the box next to it and click OK.
-
-Wait for the installation to complete and restart your PC if prompted.
-
-Open the Microsoft Store app and search for "Windows Subsystem for Android". Click on it and install it on your PC.
-
-Wait for the installation to complete and launch WSA from the Start menu.
-
-
-Steps to Install Car Parking Multiplayer from Amazon Appstore
-
-After you have installed WSA on your PC, you can install Car Parking Multiplayer from the Amazon Appstore, the default app store for WSA. Here are the steps:
-
-
-Open WSA from the Start menu and click on the Amazon Appstore icon.
-
-Sign in with your Amazon account or create a new one if you don't have one.
-
-Search for "Car Parking Multiplayer" in the search bar and click on it.
-
-Click on the "Get" button and wait for the download and installation process to complete.
-
Click on the "Open" button or find car parking multiplayer in your app list and launch it.
-
-
Steps to Install Google Play Store on Windows 11 (Optional)
-
If you prefer to use Google Play Store instead of Amazon Appstore to install car parking multiplayer on your PC with Windows 11, you can do so by following these steps:
-
-
Download the Google Play Store APK file from a trusted source, such as APKMirror or APKPure.
-
Open WSA from the Start menu and click on the Settings icon.
-
Select "Developer mode" from the left panel and enable it by clicking on the toggle switch.
-
Select "File Explorer" from the left panel and click on "Choose Folder".
-
Select a folder where you want to store your APK files and click OK.
-
Copy and paste the Google Play Store APK file into that folder.
-
Select "Apps" from the left panel and click on "Refresh".
-
Select Google Play Store from the app list and click on "Install".
-
Wait for the installation process to complete and launch Google Play Store from your app list.
-
Sign in with your Google account or create a new one if you don't have one.
-
Search for "Car Parking Multiplayer" in the search bar and install it as usual.
-
-
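Once Developer mode is on, the same sideload can also be scripted: WSA exposes a local adb endpoint (commonly 127.0.0.1:58526, though the port can vary between builds), and `adb install` pushes the APK onto it. A minimal Python sketch, assuming the Android platform tools (`adb`) are on your PATH; the endpoint and APK path are assumptions:

```python
import subprocess

WSA_ADB_ENDPOINT = "127.0.0.1:58526"  # WSA's usual local debug address (assumption)

def adb_install_cmd(apk_path, endpoint=WSA_ADB_ENDPOINT):
    """Build the adb command that sideloads one APK onto the given device."""
    return ["adb", "-s", endpoint, "install", apk_path]

def sideload(apk_path):
    """Connect to WSA, then install the APK; raises if either step fails."""
    subprocess.run(["adb", "connect", WSA_ADB_ENDPOINT], check=True)
    subprocess.run(adb_install_cmd(apk_path), check=True)

# Example with a hypothetical path:
#   sideload(r"C:\apks\playstore.apk")
```

This is a sketch of the general adb workflow, not an official WSA procedure; if the connect step fails, check that Developer mode is enabled and the port WSA reports matches the one above.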
How to Play Car Parking Multiplayer on PC with Android Emulators
-
Another way to play car parking multiplayer on PC is to use an Android emulator, a program that mimics the Android operating system on your PC. This way, you can run almost any Android app or game on your PC as if you were using a mobile device. Here is what you need to know about Android emulators and how to use them to play car parking multiplayer on PC.
-
What are Android Emulators?
-
Android emulators are programs that create a virtual Android device on your PC, allowing you to run Android apps and games on your PC. They usually have a user interface that resembles a smartphone or a tablet, and they let you access the Google Play Store or other app stores to download and install apps. Some of the benefits of using Android emulators are:
-
-
You can play Android games on your PC with better graphics, performance, and controls.
-
You can use multiple Android apps at the same time on your PC, such as messaging, social media, or productivity apps.
-
You can test and debug Android apps on your PC without using a physical device.
-
You can use Android apps that are not compatible with your mobile device or region.
-
-
However, some of the drawbacks of using Android emulators are:
-
-
You might need a powerful PC to run Android emulators smoothly and without lag or crashes.
-
You might encounter some compatibility or security issues when using Android emulators, such as errors, bugs, viruses, or malware.
-
You might need to update your Android emulators regularly to keep up with the latest Android versions and features.
-
You might violate some terms of service or policies when using Android emulators, especially for online games or apps that detect emulator usage.
-
-
Best Android Emulators for Car Parking Multiplayer
-
There are many Android emulators available for PC, but not all of them are suitable for playing car parking multiplayer. Some of the factors that you need to consider when choosing an Android emulator for car parking multiplayer are:
-
-
The compatibility and performance of the emulator with car parking multiplayer and your PC.
-
The features and options of the emulator, such as keyboard mapping, gamepad support, screen recording, etc.
-
The security and reliability of the emulator, such as virus protection, updates, customer support, etc.
-
-
Based on these criteria, here are some of the best Android emulators for car parking multiplayer that you can try:
-
Bluestacks 5 / MSI App Player
-
Bluestacks 5 is one of the most popular and widely used Android emulators for PC. It is designed for gaming and offers high performance, compatibility, and features. It also has a partnership with MSI, which means that you can use MSI App Player, which is a customized version of Bluestacks 5 for MSI devices. Some of the advantages of using Bluestacks 5 / MSI App Player are:
-
-
It supports up to 90 FPS (frames per second) for smooth gameplay.
-
It has an eco mode that reduces CPU and RAM usage by up to 87% and 97%, respectively.
-
It has a smart control feature that automatically detects the best control scheme for each game.
-
It has a shooting mode that improves accuracy and speed when aiming and shooting.
-
It has a game center that lets you discover and play over 2 million games.
-
-
Nox Player
-
Nox Player is another popular and widely used Android emulator for PC. It is also designed for gaming and offers high performance, compatibility, and features. It also has a simple and user-friendly interface that makes it easy to use. Some of the advantages of using Nox Player are:
-
-
It supports up to 120 FPS (frames per second) for smooth gameplay.
-
It has a macro recorder that lets you record and execute complex actions with one click.
-
It has a multi-instance feature that lets you run multiple games or apps at the same time on your PC.
-
It has a keyboard mapping feature that lets you customize your controls for each game or app.
-
It has a game booster feature that optimizes your PC's performance for gaming.
-
-
Gameloop
-
Gameloop is another popular and widely used Android emulator for PC. It is developed by Tencent, which is the company behind some of the most popular online games such as PUBG Mobile, Call of Duty Mobile, etc. It is also designed for gaming and offers high performance, compatibility, and features. It also has a dedicated game center that lets you access and play some of the most popular online games on your PC. Some of the advantages of using Gameloop are:
-
-
It supports up to 240 FPS (frames per second) for smooth gameplay.
-
It has an anti-cheat system that prevents hackers and cheaters from ruining your gaming experience.
-
It has a turbo engine that enhances the graphics and speed of the games.
-
It has a game assistant feature that lets you adjust the game settings, take screenshots, record videos, etc.
-
It has a game market feature that lets you download and install games directly from the emulator.
-
-
Steps to Install and Play Car Parking Multiplayer with Android Emulators
-
After you have chosen and downloaded an Android emulator for your PC, you can install and play car parking multiplayer with it by following these steps:
-
-
Launch the Android emulator on your PC and sign in with your Google account or create a new one if you don't have one.
-
Open the Google Play Store app on the emulator and search for "Car Parking Multiplayer". Click on it and install it on your emulator.
-
Wait for the installation process to complete and launch car parking multiplayer from your app list or home screen.
-
Enjoy playing car parking multiplayer on your PC with the emulator's features and options.
-
-
Conclusion
-
Car Parking Multiplayer is a fun and realistic driving simulator game for Android devices that lets you customize your cars, explore a detailed city, and compete with other players online. If you want to play this game on your PC, you have two options: using Windows 11 or using Android emulators. Both methods have their advantages and disadvantages, so you can choose the one that suits your preferences and needs. We hope this article helped you learn how to download car parking multiplayer apk for PC and enjoy playing it on a bigger screen with better controls.
-
FAQs
-
Here are some of the frequently asked questions about car parking multiplayer and how to play it on PC:
-
Is Car Parking Multiplayer free to play?
-
Yes, car parking multiplayer is free to play on Android devices. However, it contains ads and in-app purchases that can enhance your gameplay or unlock more features.
-
Can I play Car Parking Multiplayer offline?
-
Yes, you can play car parking multiplayer offline without an internet connection. However, you will not be able to access some of the features or modes that require online connectivity, such as multiplayer mode, chat, or updates.
-
Can I play Car Parking Multiplayer with my friends?
-
Yes, you can play car parking multiplayer with your friends online by joining or creating a room in multiplayer mode. You can also chat with them, exchange cars, or challenge them to races and drifts.
-
Can I use cheats or hacks in Car Parking Multiplayer?
-
We do not recommend using cheats or hacks in car parking multiplayer, as they might ruin your gaming experience or cause some issues with the game. They might also violate some terms of service or policies of the game or the emulator, which could result in bans or penalties.
-
How can I contact the developers of Car Parking Multiplayer?
-
If you have any questions, feedback, or suggestions for car parking multiplayer, you can contact the developers of the game by emailing them at olzhass@yandex.com. You can also follow them on their social media accounts or visit their website for more information.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md
deleted file mode 100644
index 15927f4ceda86b3468a9d20c736958b0831fc485..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
-
Download Cars Movie in Tamil
-
Are you a fan of animated movies? Do you love cars and racing? Do you want to watch a fun and heartwarming story in your native language? If you answered yes to any of these questions, then you might be interested in downloading Cars movie in Tamil. In this article, we will tell you what Cars movie is about, why you should watch it in Tamil, and how to download it from different sources. Let's get started!
-
Introduction
-
What is Cars movie about?
-
Cars is a 2006 American computer-animated sports comedy film produced by Pixar Animation Studios for Walt Disney Pictures. The film was directed by John Lasseter from a screenplay by Dan Fogelman, Lasseter, Joe Ranft, Kiel Murray, Phil Lorin, and Jorgen Klubien and a story by Lasseter, Ranft, and Klubien. The film features an ensemble voice cast of Owen Wilson, Paul Newman (in his final voice acting theatrical film role), Bonnie Hunt, Larry the Cable Guy, Tony Shalhoub, Cheech Marin, Michael Wallis, George Carlin, Paul Dooley, Jenifer Lewis, Guido Quaroni, Michael Keaton, Katherine Helmond, John Ratzenberger and Richard Petty.
The film is set in a world populated entirely by anthropomorphic talking cars and other vehicles. It follows a hotshot rookie race car named Lightning McQueen (Wilson) who, on the way to the biggest race of his life, gets stranded in Radiator Springs, a run-down town that's past its glory days, and learns a thing or two about friendship, family, and the things in life that are truly worth waiting for. The film was inspired by Lasseter's experiences on a cross-country road trip.
-
Why watch Cars movie in Tamil?
-
There are many reasons why you might want to watch Cars movie in Tamil. Here are some of them:
-
-
You can enjoy the movie in your native language and understand the dialogues better.
-
You can appreciate the cultural references and jokes that are specific to Tamil speakers.
-
You can share the movie with your family and friends who speak Tamil and have a fun time together.
-
You can learn some new words and phrases in Tamil from the movie.
-
You can support the Tamil dubbing industry and encourage more quality content in Tamil.
-
-
How to download Cars movie in Tamil
-
Option 1: Archive.org
-
Pros and cons of Archive.org
-
Archive.org is a website that provides free access to millions of digital items such as books, movies, music, software, and more. You can download Cars movie in Tamil from Archive.org for free. Here are some pros and cons of using Archive.org:
-
-
Pros
Cons
-
No registration or payment required.
The video quality might not be very high.
-
No ads or pop-ups.
The download speed might be slow.
-
No viruses or malware.
The availability might depend on the uploader.
-
No legal issues.
The subtitles might not be synchronized.
-
-
Steps to download Cars movie in Tamil from Archive.org
-
Here are the steps to download Cars movie in Tamil from Archive.org:
1. Go to Archive.org and type "Cars movie Tamil" in the search box.
-
2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.
-
3. Click on the result and you will be taken to a page where you can see the details of the movie, such as the title, description, date, language, duration, etc.
-
4. On the right side of the page, you will see a section called "Download Options". Here you can choose the format and size of the file you want to download.
-
5. Click on the format and size that suits your preference and a download link will appear. Right-click on the link and choose "Save link as" or "Save target as" to save the file to your computer.
-
-
6. Wait for the download to finish and enjoy watching Cars movie in Tamil!
-
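Step 1's search is just an HTTP query string under the hood. As a small illustration with the Python standard library (the `/search` path is an assumption about Archive.org's current URL layout):

```python
from urllib.parse import urlencode

def archive_search_url(query):
    """Build an Archive.org search URL for a free-text query."""
    return "https://archive.org/search?" + urlencode({"query": query})

print(archive_search_url("Cars movie Tamil"))
# → https://archive.org/search?query=Cars+movie+Tamil
```

You could paste the resulting URL straight into a browser instead of typing into the search box.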
Option 2: YouTube
-
Pros and cons of YouTube
-
YouTube is a website that allows users to upload, watch, share, and comment on videos. You can download Cars movie in Tamil from YouTube using a third-party tool or software. Here are some pros and cons of using YouTube:
-
-
Pros
Cons
-
The video quality might be very high.
You need to use a third-party tool or software.
-
The download speed might be fast.
You might encounter ads or pop-ups.
-
The availability might be high.
You might get viruses or malware.
-
The subtitles might be synchronized.
You might face legal issues.
-
-
Steps to download Cars movie in Tamil from YouTube
-
Here are the steps to download Cars movie in Tamil from YouTube:
-
1. Go to YouTube and type "Cars movie Tamil" in the search box.
-
2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.
-
3. Click on the result and you will be taken to a page where you can watch the movie online. Copy the URL of the page from your browser's address bar.
4. Open a trusted YouTube downloader tool or software on your computer or in your browser.
-
5. Paste the URL of the YouTube video into the tool or software and choose the format and size of the file you want to download.
-
6. Click on the download button and save the file to your computer.
-
7. Wait for the download to finish and enjoy watching Cars movie in Tamil!
-
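One widely used open-source downloader you could use as the "tool or software" in the steps above is yt-dlp. A sketch that shells out to it, assuming yt-dlp is installed separately and on your PATH; the format and path flags shown are a plausible subset of its options, and the same legal caveats discussed above apply:

```python
import subprocess

def ytdlp_cmd(url, out_dir="."):
    """Build a yt-dlp command: pick an mp4 format, save into out_dir."""
    return ["yt-dlp", "-f", "mp4", "-P", out_dir, url]

def download(url, out_dir="."):
    """Run yt-dlp on the given video URL; raises if the download fails."""
    subprocess.run(ytdlp_cmd(url, out_dir), check=True)

# Example with an elided video id:
#   download("https://www.youtube.com/watch?v=...", "downloads")
```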
Option 3: Other sites
-
Pros and cons of other sites
-
There are also other sites that offer Cars movie in Tamil for download, such as isaiminiweb.com, tamilrockers.ws, or tamilyogi.cool. Here are some pros and cons of using other sites:
-
-
Pros
Cons
-
The video quality might vary depending on the site.
You need to register or pay for some sites.
-
The download speed might vary depending on the site.
You might encounter ads or pop-ups.
-
The availability might vary depending on the site.
You might get viruses or malware.
-
The subtitles might vary depending on the site.
You might face legal issues.
-
-
Steps to download Cars movie in Tamil from other sites
-
Here are the steps to download Cars movie in Tamil from other sites:
1. Go to the site of your choice and type "Cars movie Tamil" in the search box.
-
2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.
-
3. Click on the result and you will be taken to a page where you can see the details of the movie, such as the title, description, date, language, duration, etc.
-
4. On the page, you will see a download link or button. Click on it and follow the instructions to download the movie. You might need to register or pay for some sites.
-
5. Wait for the download to finish and enjoy watching Cars movie in Tamil!
-
Conclusion
-
Summary of the article
-
In this article, we have discussed what Cars movie is about, why you should watch it in Tamil, and how to download it from different sources. We have compared the pros and cons of using Archive.org, YouTube, and other sites, and provided the steps to download Cars movie in Tamil from each option. We hope you have found this article helpful and informative. Now you can enjoy watching Cars movie in Tamil with your family and friends!
-
FAQs
-
Here are some frequently asked questions about downloading Cars movie in Tamil:
-
-
Is it legal to download Cars movie in Tamil from online sources?
-
It depends on the source and the country you are in. Some sources are legal and authorized, while others are illegal and pirated. You should always check the terms and conditions of the source before downloading anything from it. You should also be aware of the laws and regulations of your country regarding downloading copyrighted content from online sources.
-
Is it safe to download Cars movie in Tamil from online sources?
-
It depends on the source and the tool or software you use. Some sources are safe and secure, while others are unsafe and risky. You should always scan the file for viruses or malware before opening it on your computer. You should also use a reliable and trusted tool or software to download YouTube videos or other files from online sources.
-
What are some other animated movies that are available in Tamil?
-
There are many other animated movies that are available in Tamil, such as Toy Story, Finding Nemo, The Lion King, Frozen, The Incredibles, Coco, Inside Out, Zootopia, Moana, and more. You can search for them online or ask your friends for recommendations.
-
What are some other languages that Cars movie is available in?
-
Cars movie is available in many other languages besides English and Tamil, such as Hindi, Telugu, Malayalam, Kannada, Bengali, Marathi, Gujarati, Urdu, Arabic, French, Spanish, German, Italian, Portuguese, Russian, Chinese, Japanese, Korean, and more. You can search for them online or ask your friends for suggestions.
-
Where can I watch Cars movie online without downloading it?
-
You can watch Cars movie online without downloading it on some streaming platforms or websites that offer it legally and legitimately. Some examples are Disney+, Netflix, Amazon Prime Video, Hotstar, SonyLIV, Zee5, etc. You might need to subscribe or pay for some of these platforms or websites to watch Cars movie online.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md b/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md
deleted file mode 100644
index 6e051326d0b6984ca7ae87eda1029b385e5cbac4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-
RT Bank APK: A Mobile Banking Solution for Android Devices
-
Do you want to manage your money on the move and around the clock with a secure mobile banking app from RT Bank? If yes, then you should try RT Bank APK, a mobile banking solution that enables you to use your smart phones and/or tablets to access your accounts. It is available in both Arabic and English. In this article, we will tell you everything you need to know about RT Bank APK, including its features, benefits, download, installation, usage, and security tips.
RT Bank APK is a mobile banking app from RT Bank for iOS and Android devices. It allows you to perform various banking transactions anytime, anywhere, with just a few taps on your screen. You can view your account balances, details, and history, inquire about your loans, request a cheque book, inquire about currency and exchange rates, locate the nearest ATM or branch, change your passwords, and more. You can also enjoy a user-friendly interface, fast performance, and high security with RT Bank APK.
-
Features and benefits of RT Bank APK
-
Some of the features and benefits of RT Bank APK are:
-
-
You can access your accounts 24/7 with your smart phones and/or tablets.
-
You can choose between Arabic and English languages.
-
You can view your account balances, details, and history.
-
You can inquire about your loans, details, and installments.
-
You can request a cheque book.
-
You can inquire about currency and exchange rates.
-
You can locate the nearest ATM or branch.
-
You can change your login and transfer passwords.
-
You can enjoy a user-friendly interface, fast performance, and high security.
-
-
How to download and install RT Bank APK
-
To download and install RT Bank APK on your device, follow these steps:
-
-
Go to the Google Play Store or the App Store on your device.
-
Search for "RTB Mobile" or scan the QR code below.
-
Tap on the app icon and then tap on "Install".
-
Wait for the app to download and install on your device.
-
Tap on "Open" to launch the app.
-
-
-
How to use RT Bank APK
-
To use RT Bank APK on your device, follow these steps:
-
-
How to log in and manage your accounts
-
-
Launch the app on your device.
-
Enter your user name and password. If you don't have an account yet, tap on "Register" and follow the instructions.
-
Tap on "Login" to access your accounts.
-
Swipe left or right to switch between accounts.
-
Tap on an account to view its balance, details, and history.
-
-
How to request a cheque book
-
-
Tap on the menu icon at the top left corner of the screen.
-
Tap on "Services".
-
Tap on "Cheque Book Request".
-
Select the account that you want to request a cheque book for.
-
Select the number of cheque books that you want to request.
-
Confirm and submit your request.
-
How to inquire about currency and exchange rates
-
-
Tap on the menu icon at the top left corner of the screen.
-
Tap on "Currency".
-
Select the currency that you want to inquire about.
-
Tap on "Convert" to see the exchange rate and the equivalent amount in your selected currency.
-
-
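At bottom, the conversion in step 4 is a multiplication by the quoted rate. A toy illustration (the two-decimal rounding is an assumption; real banks apply their own rounding rules and spreads):

```python
def convert(amount, rate, decimals=2):
    """Convert an amount at a quoted exchange rate (illustrative only)."""
    return round(amount * rate, decimals)

print(convert(100, 3.67))  # → 367.0
```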
How to locate the nearest ATM or branch
-
-
Tap on the menu icon at the top left corner of the screen.
-
Tap on "ATM/Branch Locator".
-
Allow the app to access your location or enter your city or area manually.
-
Select the type of service that you are looking for (ATM or branch).
-
Tap on "Search" to see the nearest ATM or branch on a map.
-
Tap on an ATM or branch icon to see its address, phone number, and working hours.
-
-
How to change your passwords
-
-
Tap on the menu icon at the top left corner of the screen.
-
Tap on "Settings".
-
Tap on "Change Passwords".
-
Select the type of password that you want to change (login or transfer).
-
Enter your current password and your new password twice.
-
Tap on "Change" to confirm your new password.
-
-
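A form like the one in steps 5 and 6 typically validates the change before accepting it. The sketch below shows plausible checks; the specific rules (minimum length, new password must differ from the current one) are assumptions, not RT Bank's actual policy:

```python
def valid_password_change(current, new, confirm, min_len=8):
    """Accept a password change only if the two new entries match,
    differ from the current password, and meet a minimum length."""
    return new == confirm and new != current and len(new) >= min_len

print(valid_password_change("old-pass", "n3w-secret", "n3w-secret"))  # → True
```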
How to stay safe and secure with RT Bank APK
-
RT Bank APK is designed to provide you with a secure and convenient mobile banking experience. However, you should also take some precautions to protect yourself and your money from any potential risks. Here are some tips on how to stay safe and secure with RT Bank APK:
-
How to avoid phishing and fraud attempts
-
-
Do not share your user name, password, or any other personal or financial information with anyone, even if they claim to be from RT Bank or any other authority.
-
Do not click on any links or attachments in suspicious emails, SMS, or social media messages that ask you to update your account details, verify your identity, or claim that you have won a prize.
-
Do not download any apps from unknown sources or third-party websites. Only download RT Bank APK from the official Google Play Store or App Store.
-
Do not use public or unsecured Wi-Fi networks to access RT Bank APK. Use your own mobile data or a trusted Wi-Fi network instead.
-
-
How to report an electronic fraud attempt
-
-
If you receive any suspicious emails, SMS, or social media messages that claim to be from RT Bank or any other authority, do not respond to them and delete them immediately.
-
If you suspect that someone has accessed your account without your authorization, change your passwords immediately and contact RT Bank customer service at 1800-123-4567.
-
If you notice any unauthorized transactions on your account, report them immediately through RT Bank APK by tapping on "Report Fraud" under "Services". You can also contact RT Bank customer service at 1800-123-4567.
-
-
How to protect your device and data
-
-
Lock your device with a PIN, password, pattern, fingerprint, or face recognition feature.
-
Update your device's operating system and apps regularly to fix any security vulnerabilities.
-
Delete any unused apps from your device and clear your browser's cache and history regularly.
-
Avoid rooting or jailbreaking your device as it may compromise its security and functionality.
-
-
Conclusion and FAQs
-
In conclusion, RT Bank APK is a mobile banking solution that allows you to access your accounts anytime, anywhere, with just a few taps on your screen. You can enjoy various features and benefits such as viewing your account balances, details, and history, inquiring about your loans, requesting a cheque book, inquiring about currency and exchange rates, locating the nearest ATM or branch, changing your passwords, and more. You can also stay safe and secure with RT Bank APK by following some simple tips such as avoiding phishing and fraud attempts, reporting any electronic fraud attempt, and protecting your device and data. If you have any questions about RT Bank APK, you can check out these FAQs:
-
-
Question
Answer
Is RT Bank APK free to use? Yes, the app itself is free to download and use, but you may incur some charges from your mobile network provider for using data.
-
Do I need to register for RT Bank APK?
Yes, you need to register for RT Bank APK before you can use it. You can register through the app by tapping on "Register" and following the instructions. You will need your account number, debit card number, and mobile phone number to register.
-
What are the login and transfer passwords?
The login password is the password that you use to log in to RT Bank APK. The transfer password is the password that you use to confirm any transfers or payments that you make through RT Bank APK. You can change both passwords through the app by tapping on "Settings" and then "Change Passwords".
-
What if I forget my passwords?
If you forget your login password, you can reset it through the app by tapping on "Forgot Password" and following the instructions. You will need your user name, account number, debit card number, and mobile phone number to reset your login password. If you forget your transfer password, you will need to contact RT Bank customer service at 1800-123-4567 to reset it.
-
What if I lose my device or it gets stolen?
If you lose your device or it gets stolen, you should contact RT Bank customer service at 1800-123-4567 as soon as possible to deactivate your RT Bank APK account. You should also report the loss or theft of your device to your mobile network provider and the police.
-
-
We hope that this article has helped you understand more about RT Bank APK and how to use it. If you have any feedback or suggestions, please feel free to contact us at feedback@rtbank.com. Thank you for choosing RT Bank as your banking partner.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md b/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md
deleted file mode 100644
index 9c86808d752547ac9f6400c7ee67630d1579568e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
Castle Clash China APK: How to Download and Play the Chinese Version of the Popular Strategy Game
-
Castle Clash is one of the most popular strategy games in the world, with over 100 million players worldwide. It is a game where you can build your own castle, recruit heroes, train troops, and fight against other players in various modes. But did you know that there is a Chinese version of Castle Clash that has some unique features and differences from other versions? In this article, we will tell you everything you need to know about Castle Clash China APK, how to download and install it on your Android device, how to play it on your PC or Mac, and some tips and tricks for playing it.
Castle Clash China APK is the Chinese version of Castle Clash, which is developed by IGG.com, a Singapore-based company. It is also known as 城堡争霸 in Chinese, which means "Castle Battle". It is an APK file, which stands for Android Package Kit, that contains all the files and data needed to run the game on an Android device. You can download it from various sources online, but you need to be careful about the security and quality of the file.
-
The features of Castle Clash China APK
-
Castle Clash China APK has many features that make it an exciting and addictive game. Some of these features are:
-
-
You can build your own castle with different types of buildings, such as town hall, barracks, warehouse, watchtower, walls, etc.
-
You can recruit over 100 different heroes with various skills and abilities, such as magic, healing, summoning, etc.
-
You can train various types of troops, such as archers, knights, griffins, dragons, etc.
-
You can fight against other players in real-time PvP battles, such as arena, raid, guild war, etc.
-
You can join or create a guild with other players and cooperate with them in guild events, such as boss battles, torch battles, fortress feud, etc.
-
You can participate in various game modes, such as dungeon, expedition, lost realm, labyrinth, etc.
-
You can collect and upgrade various resources, such as gold, mana, gems, honor badges, shards, etc.
-
You can enjoy stunning graphics and sound effects that create an immersive gaming experience.
-
-
The differences between Castle Clash China APK and other versions
-
Castle Clash China APK is not exactly the same as other versions of Castle Clash. There are some differences that you should be aware of before playing it. Some of these differences are:
-
-
-
The language of the game is Chinese. You may need to use a translator app or a guide to understand some of the texts and menus.
-
The game is not available on Google Play Store or App Store. You need to download it from other sources online.
-
The game may have some regional restrictions. You may need to use a VPN app or a proxy server to access some of the features or servers.
-
The game may have some exclusive content that is not available in other versions. For example, some heroes may have different names or appearances in Castle Clash China APK than in other versions.
-
The game may have some different updates or events than other versions. For example, some game modes or features may be added or removed in Castle Clash China APK at different times than in other versions.
-
-
How to download and install Castle Clash China APK on your Android device
-
If you want to play Castle Clash China APK on your Android device, you need to follow these steps:
-
Step 1: Enable unknown sources
-
Before you can install Castle Clash China APK on your Android device, you need to enable unknown sources in your settings. This will allow you to install apps that are not from Google Play Store or App Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You may see a warning message, but you can ignore it and proceed.
-
Step 2: Download the APK file from a trusted source
-
Next, you need to download the APK file of Castle Clash China APK from a trusted source online. You can search for it on Google or use a link from a reliable website. For example, you can use this link to download the latest version of Castle Clash China APK (version 1.8.9) as of June 2023. Make sure you have enough storage space on your device before downloading the file.
-
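Before installing any APK obtained outside an official store, it is a good idea to verify its integrity. If the download page publishes a SHA-256 checksum, you can compare it against the file on your PC; a minimal Python sketch (the file name and checksum in the usage comment are placeholders, not values for any real release):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (file name and checksum are placeholders):
#   if sha256_of_file("castle-clash-china.apk") != EXPECTED_SHA256:
#       raise SystemExit("Checksum mismatch - do not install this file")
```

If the two digests differ even by one character, the file was corrupted or tampered with and should not be installed.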
Step 3: Install the APK file and launch the game
-
After downloading the APK file, you need to install it on your device. To do this, locate the file in your downloads folder or wherever you saved it and tap on it. You may see a pop-up message asking for your permission to install the app. Tap on Install and wait for the installation to finish. Once the installation is done, you can launch the game by tapping on Open or by finding the app icon on your home screen or app drawer. You may need to agree to some terms and conditions before playing the game.
-
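If you prefer to install from a PC instead of tapping through the file manager, the same APK can be sideloaded over USB with adb (Android Debug Bridge), assuming USB debugging is enabled on your device and adb is on your PATH. A hedged Python sketch (the file name is a placeholder):

```python
import subprocess

def build_adb_install_command(apk_path: str, reinstall: bool = True) -> list[str]:
    """Build the adb command used to sideload an APK onto a connected device."""
    cmd = ["adb", "install"]
    if reinstall:
        cmd.append("-r")  # keep existing app data if the app is already installed
    cmd.append(apk_path)
    return cmd

def install_apk(apk_path: str) -> None:
    # Requires a connected device with USB debugging enabled and adb on PATH.
    subprocess.run(build_adb_install_command(apk_path), check=True)
```

For example, `install_apk("castle-clash-china.apk")` runs `adb install -r castle-clash-china.apk` and raises an error if the install fails.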
How to play Castle Clash China APK on your PC or Mac
-
If you want to play Castle Clash China APK on your PC or Mac, you need to use an Android emulator. An Android emulator is a software that simulates an Android device on your computer, allowing you to run Android apps and games on it. There are many Android emulators available online, but some of the most popular ones are BlueStacks, NoxPlayer, and LDPlayer. To play Castle Clash China APK on your PC or Mac, you need to follow these steps:
-
Step 1: Download and install an Android emulator
-
First, you need to download and install an Android emulator of your choice on your PC or Mac. You can visit the official website of the emulator and follow the instructions to download and install it. For example, if you want to use BlueStacks, you can go to this link and click on Download BlueStacks. After downloading the installer file, run it and follow the steps to install BlueStacks on your computer.
-
Step 2: Download the APK file from a trusted source
-
Next, you need to download the APK file of Castle Clash China APK from a trusted source online, just like you did for your Android device. You can use the same link as before or find another one that works for you. Save the file on your computer where you can easily access it.
-
Step 3: Install the APK file and launch the game on the emulator
-
After downloading the APK file, you need to install it on the emulator. To do this, open the emulator and drag and drop the APK file onto it. Alternatively, you can click on Install APK in the emulator and browse for the file on your computer. The emulator will automatically install the app and create a shortcut for it on its home screen. Once the installation is done, you can launch the game by clicking on its icon. You may need to agree to some terms and conditions before playing the game.
-
Tips and tricks for playing Castle Clash China APK
-
Now that you know how to download and play Castle Clash China APK, here are some tips and tricks that will help you enjoy the game more:
-
Tip 1: Choose your heroes wisely
-
Heroes are one of the most important aspects of Castle Clash China APK. They can make or break your battles with their skills and abilities. Therefore, you should choose your heroes wisely and use them strategically. Some of the factors that you should consider when choosing your heroes are:
-
-
Their rarity: Heroes are classified into ordinary, elite, rare, epic, and legendary, based on their color and stars. Generally, the higher the rarity, the better the hero.
-
Their skills: Heroes have different skills that can affect their performance in battle. Some skills are passive, meaning they are always active, while some skills are active, meaning they need to be triggered by certain conditions. You should check the description and level of each skill and see how it can benefit your team.
-
Their talents: Heroes have different talents that can enhance their attributes or abilities. Some talents are innate, meaning they are fixed and cannot be changed, while some talents are random, meaning they can be replaced by using talent cards or gems. You should try to get the best talents for your heroes according to their roles and preferences.
-
Their crests: Heroes can equip up to four crests that can give them additional effects or bonuses. Crests are classified into eight sets, each with four levels. You can combine four crests of the same set and level to form a crest insignia, which can be upgraded to a higher level. You should mix and match the best crests for your heroes according to their needs and synergies.
-
Their equipment: Heroes can equip one piece of equipment that can boost their stats or skills. Equipment can be obtained from the equipment shop or the equipment trial. Equipment can also be upgraded or evolved to increase its power. You should equip your heroes with the most suitable equipment for their roles and situations.
-
-
Tip 2: Upgrade your buildings and troops regularly
-
Buildings and troops are also essential for Castle Clash China APK. They can help you defend your castle, collect resources, and attack other players. Therefore, you should upgrade your buildings and troops regularly and keep them in good shape. Some of the factors that you should consider when upgrading your buildings and troops are:
-
-
Their level: Buildings and troops have different levels that indicate their strength and capacity. The higher the level, the better the building or troop. You can upgrade your buildings and troops by using gold, mana, or honor badges. You should prioritize upgrading your town hall, warehouse, vaults, and barracks first, as they affect your overall progress and performance.
-
Their type: Buildings and troops have different types that indicate their function and specialty. For example, some buildings are defensive, such as watchtower, hero base, hero altar, etc., while some buildings are offensive, such as army camp, relic hall, etc. Similarly, some troops are ranged, such as archers, hunters, etc., while some troops are melee, such as knights, griffins, etc. You should balance your building and troop types according to your strategy and preference.
-
Their placement: Buildings and troops have different placements that affect their effectiveness and efficiency. For example, some buildings are better placed near the center of your castle, such as town hall, hero altar, etc., while some buildings are better placed near the edge of your castle, such as watchtower, army camp, etc. Similarly, some troops are better placed near the front of your army, such as tanks, healers, etc., while some troops are better placed near the back of your army, such as snipers, bombers, etc. You should optimize your building and troop placement according to your defense and offense plans.
-
-
Tip 3: Join a guild and participate in events
-
Guilds and events are also important for Castle Clash China APK. They can help you socialize with other players, get rewards, and have fun. Therefore, you should join a guild and participate in events as much as possible. Some of the benefits of joining a guild and participating in events are:
-
-
You can chat with other players in your guild and share tips and strategies.
-
You can donate shards or honor badges to your guild and get guild credits in return.
-
You can use guild credits to buy items or services from the guild shop or the guild hall.
-
You can cooperate with your guild members in guild events, such as boss battles, torch battles, fortress feud, etc., and get rewards and rankings.
-
You can participate in various game events, such as daily quests, login rewards, lucky spin, etc., and get rewards and bonuses.
-
You can participate in special events, such as festivals, celebrations, contests, etc., and get exclusive rewards and prizes.
-
-
Conclusion
-
Castle Clash China APK is a great game for strategy lovers who want to experience a different version of Castle Clash. It has many features and differences that make it unique and exciting. However, it also has some challenges and limitations that you need to overcome. By following the steps and tips in this article, you can download and play Castle Clash China APK on your Android device or your PC or Mac easily and safely. You can also enjoy the game more by choosing your heroes wisely, upgrading your buildings and troops regularly, and joining a guild and participating in events. We hope you have fun playing Castle Clash China APK!
-
FAQs
-
Here are some frequently asked questions about Castle Clash China APK:
-
-
Is Castle Clash China APK safe to download and play?
-
Yes, Castle Clash China APK is safe to download and play if you use a trusted source and a secure device. However, you should always be careful about the security and quality of the APK file you download and the permissions you grant to the app. You should also avoid using any hacks or cheats that may harm your device or account.
-
Is Castle Clash China APK free to play?
-
Yes, Castle Clash China APK is free to play. You can download and play the game without paying any money. However, the game also has some optional in-app purchases that can enhance your gaming experience. You can buy gems or other items with real money if you want to support the developers or get some advantages in the game.
-
Can I play Castle Clash China APK with other players from other versions?
-
No, Castle Clash China APK is not compatible with other versions of Castle Clash. You can only play with other players who are using the same version as you. You cannot transfer your account or data from one version to another either. You need to create a new account and start from scratch if you want to switch versions.
-
Can I play Castle Clash China APK offline?
-
No, Castle Clash China APK is an online game that requires an internet connection to play. You cannot play the game offline or without a network connection. You need to have a stable and fast internet connection to enjoy the game smoothly and avoid any errors or glitches.
-
How can I contact the customer service of Castle Clash China APK?
-
If you have any questions or problems regarding Castle Clash China APK, you can contact the customer service of the game by using the following methods:
-
-
You can send an email to service@igg.com with your account ID, server name, device model, problem description, and screenshots if possible.
-
You can visit the official website of Castle Clash China APK at http://cc.igg.com/zh/ and click on the customer service button at the bottom right corner of the page.
-
You can visit the official Facebook page of Castle Clash China APK at https://www.facebook.com/CastleClashCN/ and send a message or leave a comment.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md b/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md
deleted file mode 100644
index f9c90326a13dc8aac0d59adbf6be3e7ed342e4a2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md
+++ /dev/null
@@ -1,239 +0,0 @@
-
-
Download Driver Realtek: How to Install and Update Realtek Audio Drivers on Windows 11/10
-
If you want to enjoy high-quality sound on your Windows PC, you need a reliable and compatible audio driver. One of the most popular and widely used audio drivers is the Realtek audio driver, which provides DTS, Dolby, and Surround Sound support for your audio card. In this article, we will show you how to download, install, and update Realtek audio drivers on Windows 11/10, as well as how to troubleshoot some common issues with them.
-
What is Realtek Audio Driver and Why Do You Need It?
-
Realtek Audio Driver is a software program that communicates between your Windows operating system and your audio card. It allows you to configure and control the sound output and input of your PC, such as speakers, headphones, microphones, etc. It also enables you to customize your audio settings, such as volume, balance, equalizer, effects, etc.
You need a Realtek Audio Driver if you have a Realtek audio card installed on your motherboard or as an external device. Without a proper driver, your audio card may not work properly or at all. You may experience sound quality problems, sound distortion, no sound, or other errors.
-
What are the Benefits of Using Realtek Audio Driver?
-
Using Realtek Audio Driver has several benefits for your PC and your sound experience. Some of them are:
-
-
It provides high-definition sound quality and supports various audio formats.
-
It supports DTS, Dolby, and Surround Sound technologies for immersive sound effects.
-
It allows you to adjust the volume for each speaker individually using the Room Correction feature.
-
It offers multiple sound tools and configuration options for your convenience.
-
It is easy to access and use from the system tray or the Start menu.
-
-
What are the Common Issues with Realtek Audio Driver?
-
Despite its advantages, Realtek Audio Driver may also cause some problems on your PC. Some of the common issues that users face are:
-
-
Outdated, corrupt, or incompatible Realtek Audio Driver.
-
Conflict between Microsoft and Realtek Audio Drivers.
-
Audio service not running or responding.
-
Misconfigured audio settings or output device.
-
Disabled audio service or enhancements.
-
-
To fix these issues, you need to update, reinstall, or troubleshoot your Realtek Audio Driver. We will show you how in the following sections.
-
-
How to Download Realtek Audio Driver
-
The first step to install or update your Realtek Audio Driver is to download it from a reliable source. There are two ways to do this: from the official Realtek website or from the motherboard manufacturer's website.
-
How to Download from the Official Realtek Website
-
To download the Realtek Audio Driver from the official Realtek website, follow these steps:
-
-
Go to the official Realtek website and open the High Definition Audio Codecs download page.
-
Select the High Definition Audio Codecs (Software) option from the list.
-
Read and accept the license agreement and click on the I Accept button.
-
Choose the appropriate driver for your Windows version and architecture (32-bit or 64-bit).
-
Click on the Global link to download the driver file to your PC.
-
-
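If you are not sure whether your Windows installation is 32-bit or 64-bit, you can check it programmatically. A small Python sketch (note that `struct.calcsize` reports the bitness of the running Python interpreter, which can differ from the OS if you run 32-bit Python on 64-bit Windows):

```python
import platform
import struct

def interpreter_bitness() -> int:
    """Bitness (32 or 64) of the running Python interpreter."""
    return struct.calcsize("P") * 8  # size of a pointer, in bits

def pick_driver_variant() -> str:
    """Suggest a driver package based on the machine architecture."""
    machine = platform.machine().lower()
    # AMD64/x86_64 and ARM64 machines take the 64-bit package; others the 32-bit one.
    return "64-bit" if machine in ("amd64", "x86_64", "arm64", "aarch64") else "32-bit"
```

You can also see the same information in Windows under Settings > System > About, which is the authoritative source if the two disagree.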
How to Download from the Motherboard Manufacturer's Website
-
To download the Realtek Audio Driver from the motherboard manufacturer's website, follow these steps:
-
-
Find out the model and brand of your motherboard. You can do this by checking the manual, the box, or the label on the motherboard itself. You can also use a third-party software like CPU-Z to get this information.
-
Go to the official website of your motherboard manufacturer and look for the Support or Drivers section.
-
Enter your motherboard model and select your Windows version and architecture (32-bit or 64-bit).
-
Look for the Realtek Audio Driver in the list of available drivers and click on the Download button.
-
Save the driver file to your PC.
-
-
How to Install Realtek Audio Driver
-
After downloading the Realtek Audio Driver, you need to install it on your PC. There are two ways to do this: using the setup file or using the device manager.
-
How to Install Using the Setup File
-
To install the Realtek Audio Driver using the setup file, follow these steps:
-
-
Navigate to the folder where you saved the driver file and double-click on it to launch the setup wizard.
-
Follow the on-screen instructions and choose the installation options that suit your preferences.
-
Wait for the installation process to complete and restart your PC if prompted.
-
You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.
-
-
How to Install Using the Device Manager
-
To install the Realtek Audio Driver using the device manager, follow these steps:
-
-
Press Windows + X keys on your keyboard and select Device Manager from the menu.
-
Expand the Sound, video and game controllers category and right-click on your audio device. Select Update driver.
-
Select Browse my computer for driver software.
-
Select Let me pick from a list of available drivers on my computer.
-
Select Have Disk.
-
Select Browse.
-
Navigate to the folder where you saved the driver file and select it. Click on Open.
-
Select OK.
-
Select Next.
-
Select Yes.
-
Select Close to finish the installation process and restart your PC if prompted.
-
You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.
-
-
How to Update Realtek Audio Driver
-
Updating your Realtek Audio Driver is important to keep it compatible with your Windows version and fix any bugs or errors. There are three ways to update your Realtek Audio Driver: using the device manager, using Windows update, or using a third-party software.
-
How to Update Using the Device Manager
-
To update the Realtek Audio Driver using the device manager, follow these steps:
-
-
Press Windows + X keys on your keyboard and select Device Manager from the menu.
-
Expand the Sound, video and game controllers category and right-click on your audio device. Select Update driver.
-
Select Search automatically for updated driver software.
-
Wait for Windows to search for and install the latest driver for your device.
-
Restart your PC if prompted.
-
-
How to Update Using Windows Update
-
To update the Realtek Audio Driver using Windows update, follow these steps:
-
-
Press Windows + I keys on your keyboard to open the Settings app.
-
Select Update & Security.
-
Select Windows Update.
-
Select Check for updates.
-
If there are any updates available for your Realtek Audio Driver, they will be downloaded and installed automatically.
-
Restart your PC if prompted.
-
-
How to Update Using a Third-Party Software
-
To update the Realtek Audio Driver using a third-party software, you need to download and install a reliable driver updater tool that can scan your PC for outdated drivers and update them automatically. Some of the popular driver updater tools are Driver Booster, Driver Easy, and Driver Genius. To use them, follow these steps:
-
-
Download and install the driver updater tool of your choice from its official website.
-
Launch the tool and click on the Scan button to scan your PC for outdated drivers.
-
If there are any updates available for your Realtek Audio Driver, they will be listed in the results. Click on the Update button next to the driver name to update it.
-
Wait for the tool to download and install the latest driver for your device.
-
Restart your PC if prompted.
-
-
How to Troubleshoot Realtek Audio Driver
-
If you still have problems with your Realtek Audio Driver after installing or updating it, you may need to troubleshoot it. Here are some common troubleshooting steps that you can try:
-
How to Check the Device and Cable Connections
-
Sometimes, the problem may be caused by a loose or faulty connection between your audio device and your PC. To check this, follow these steps:
-
-
Make sure that your audio device is plugged into the correct port on your PC or motherboard. For example, if you have a speaker, it should be plugged into the green port. If you have a microphone, it should be plugged into the pink port.
-
If you are using a USB audio device, make sure that it is plugged into a working USB port on your PC or motherboard.
-
If you are using a wireless audio device, make sure that it is paired with your PC and has enough battery power.
-
If you are using an external audio card, make sure that it is properly installed on your PC or motherboard and has enough power supply.
-
If possible, try using another audio device or cable to see if the problem persists.
-
-
How to Check the Audio Settings and Output Device
-
Sometimes, the problem may be caused by incorrect or incompatible audio settings or output device. To check this, follow these steps:
-
-
Right-click on the speaker icon in your system tray and select Sounds.
Select the Playback tab and make sure that your audio device is set as the default device. If not, right-click on it and select Set as Default Device.
-
Select your audio device and click on the Properties button.
-
Select the Advanced tab and make sure that the default format matches the sample rate and bit depth of your audio device. If not, change it to a compatible format.
-
Select the Enhancements tab and make sure that any enhancements that may interfere with your sound quality are disabled. For example, you may want to disable Loudness Equalization, Noise Suppression, or Acoustic Echo Cancellation.
-
Select the Levels tab and make sure that the volume and balance of your audio device are adjusted properly.
-
Select the Spatial sound tab and make sure that the spatial sound format is set to Off or a compatible format for your audio device.
-
Click on OK to save your changes and close the window.
-
Test your sound by playing a sample sound or a music file.
-
-
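The "default format" mentioned above is simply a sample rate and bit depth pair, and the resulting uncompressed data rate is sample rate times bytes per sample times channels. A quick Python illustration of that arithmetic:

```python
def pcm_bytes_per_second(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> int:
    """Uncompressed PCM data rate for a given audio format."""
    return sample_rate_hz * (bit_depth // 8) * channels

# CD-quality stereo: 44.1 kHz, 16-bit
print(pcm_bytes_per_second(44100, 16))   # 176400 bytes/s
# "Studio quality" stereo: 192 kHz, 24-bit
print(pcm_bytes_per_second(192000, 24))  # 1152000 bytes/s
```

Higher formats are not automatically better: if the format exceeds what your device supports, you may get silence or distortion, which is why matching the device's native format matters.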
How to Restart the Audio Service and Reinstall the Driver
-
Sometimes, the problem may be caused by a faulty or corrupted audio service or driver. To fix this, you need to restart the audio service and reinstall the driver. To do this, follow these steps:
-
-
Press Windows + R keys on your keyboard to open the Run dialog box.
-
Type services.msc and press Enter.
-
Look for the Windows Audio service and right-click on it. Select Restart.
-
If the service is not running, right-click on it and select Start.
-
If the service is not set to automatic, right-click on it and select Properties. Change the startup type to Automatic.
-
Press Windows + X keys on your keyboard and select Device Manager.
-
Expand the Sound, video and game controllers category and right-click on your audio device. Select Uninstall device.
-
Select Delete the driver software for this device and click on Uninstall.
-
Restart your PC.
-
Your PC will automatically detect and install the Realtek Audio Driver for your device.
-
You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.
-
-
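The Windows Audio service can also be restarted from an elevated command prompt with `net stop` and `net start` (its internal service name is Audiosrv). A small Python sketch that wraps those commands; it must be run as administrator to take effect:

```python
import subprocess

SERVICE = "Audiosrv"  # internal service name of "Windows Audio"

def restart_commands(service: str = SERVICE) -> list[list[str]]:
    """Commands an elevated prompt would run to restart a Windows service."""
    return [["net", "stop", service], ["net", "start", service]]

def restart_service(service: str = SERVICE) -> None:
    # Stops and then starts the service; raises if either step fails,
    # e.g. when the shell is not elevated.
    for cmd in restart_commands(service):
        subprocess.run(cmd, check=True)
```

This is equivalent to the Restart action in the Services console described above, just scriptable.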
Conclusion
-
In this article, we have shown you how to download, install, and update Realtek Audio Drivers on Windows 11/10, as well as how to troubleshoot some common issues with them. We hope that this guide has helped you to improve your sound quality and experience on your PC.
-
To summarize, here are some tips and recommendations for using Realtek Audio Drivers:
-
-
Always download Realtek Audio Drivers from a reliable source, such as the official Realtek website or the motherboard manufacturer's website.
-
Always update Realtek Audio Drivers regularly to keep them compatible with your Windows version and fix any bugs or errors.
-
If you encounter any problems with Realtek Audio Drivers, try checking the device and cable connections, checking the audio settings and output device, restarting the audio service, or reinstalling the driver.
-
If you need more help or support with Realtek Audio Drivers, you can visit their official website or contact their customer service.
- To open the Realtek HD Audio Manager, you can either click on the Realtek HD Audio Manager icon in your system tray or Start menu, or go to the Control Panel and select Realtek HD Audio Manager. You will see a user interface with various tabs and options that you can explore and adjust according to your preferences. If you want to uninstall the Realtek Audio Driver instead, follow these steps:
-
Press Windows + X keys on your keyboard and select Apps and Features.
-
Look for the Realtek Audio Driver in the list of installed programs and click on it.
-
Select Uninstall.
-
Follow the on-screen instructions and confirm your choice.
-
Restart your PC if prompted.
-
-
Note that uninstalling Realtek Audio Driver may cause your audio device to stop working or work improperly. You may need to install another compatible driver for your audio device.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md b/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md
deleted file mode 100644
index 73dce397ee73c45763fd26803d704d93fc6609e8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Garena Mod Menu Apk: What Is It and How to Use It?
-
If you are a fan of Garena Free Fire, a popular survival shooter game for mobile devices, you might have heard of Garena mod menu apk. This is a modified version of the original game apk that allows users to access various cheats and hacks. In this article, we will explain what Garena mod menu apk is, what features it offers, how to install it, and what risks it entails.
A popular survival shooter game for mobile devices
-
Garena Free Fire is a world-famous survival shooter game available on mobile. Each 10-minute game places you on a remote island where you are pitted against 49 other players, all seeking survival. Players freely choose their starting point with their parachute and aim to stay in the safe zone for as long as possible. Drive vehicles to explore the vast map, hide in the wild, or become invisible by proning under grass or rifts. Ambush, snipe, survive: there is only one goal, to survive and answer the call of duty.
-
Different game modes and features
-
Free Fire offers a variety of exciting game modes that you can play with all Free Fire players via exclusive Firelink technology. You can enjoy fast-paced 4v4 Clash Squad matches, classic 50-player Battle Royale matches, or special modes such as Rampage, Bomb Squad, or Zombie Invasion. You can customize your character with hundreds of outfits, accessories, weapons, vehicles, and pets. You can also create squads of up to 4 players and communicate with your team using in-game voice chat.
-
What is a mod menu apk?
-
A modified version of the original game apk
-
A mod menu apk is a modified version of the game's original apk that can be used to get free cheats. With this mod menu apk, you don't need any other programs to load cheats into the game, which makes it suitable for those who do not know how to hack Free Fire. The mod menu apk has a user-friendly interface that allows you to toggle different cheats on and off with a simple tap.
-
-
Allows users to access various cheats and hacks
-
The mod menu apk offers tons of cheats for its users. Some of the most popular ones are unlimited diamonds and coins, wallhack, aimbot, ESP hack, flying hack, unlock characters, and skins hack. These cheats can give you an edge over your enemies and help you win more matches. However, they also come with some risks that you should be aware of before using them.
-
What are the features of Garena mod menu apk?
-
Unlimited diamonds and coins
-
Diamonds and coins are the in-game currency in Free Fire, and without them you can't even purchase a skin in the game. With the mod menu apk, you can get unlimited diamonds and coins for free. You can use them to buy anything you want in the game, such as outfits, weapons, vehicles, pets, or elite passes.
-
Wallhack
Wallhack is a cheat that allows you to see through walls and other obstacles. You can spot your enemies easily and shoot them before they see you. You can also avoid ambushes and traps by knowing where your enemies are hiding. Wallhack can give you a huge advantage in Free Fire, especially in close-quarters combat.
-
Aimbot
-
Aimbot is a cheat that automatically aims and shoots your enemies for you. You don’t need to worry about your accuracy or reaction time. Just point your weapon in the general direction of your enemy and let the aimbot do the rest. You can kill your enemies with one shot and win every firefight. Aimbot is one of the most powerful cheats in Free Fire, but also one of the most risky ones.
-
ESP hack
-
ESP hack is a cheat that shows you extra information about your enemies on your screen. You can see their name, health, distance, weapon, and location. You can also see their footsteps, items, and vehicles. ESP hack can help you plan your strategy and avoid unnecessary fights. ESP hack can make you more aware of your surroundings and improve your survival chances.
-
Flying hack
-
Flying hack is a cheat that allows you to fly in the air like Superman. You can move faster and reach places that are normally inaccessible. You can also surprise your enemies from above and escape from danger easily. Flying hack can make you more mobile and unpredictable in Free Fire, but also more noticeable and vulnerable.
-
Unlock characters
-
Free Fire has a roster of over 40 characters, each with their own unique skills and abilities. However, not all of them are available for free. Some of them require diamonds or coins to unlock. With the mod menu apk, you can unlock all the characters for free and use them in the game. You can experiment with different combinations of characters and skills and find the ones that suit your playstyle.
-
Skins hack
-
Skins are cosmetic items that change the appearance of your character, weapons, vehicles, or pets. They have no effect on the gameplay, but they can make you look cooler and more stylish. Free Fire has a huge collection of skins, but most of them are expensive or rare. With the mod menu apk, you can get all the skins for free and use them in the game. You can customize your character and show off your personality with different skins.
-
How to install Garena mod menu apk?
-
Download the mod menu apk from a trusted source
-
The first step to install Garena mod menu apk is to download it from a trusted source. There are many websites that claim to offer the mod menu apk, but not all of them are safe or reliable. Some of them may contain malware or viruses that can harm your device or steal your data. To avoid this, you should only download the mod menu apk from a reputable source that has positive reviews and feedback from other users. You can also scan the mod menu apk file with an antivirus program before installing it.
-
Enable unknown sources in your device settings
-
The second step to install Garena mod menu apk is to enable unknown sources in your device settings. This is because the mod menu apk is not from the official Google Play Store or App Store, so your device may not allow you to install it by default. To enable unknown sources, you need to go to your device settings, then security or privacy, then toggle on the option that says "allow installation of apps from unknown sources" or something similar. This will allow you to install the mod menu apk without any problems.
-
Install the mod menu apk and launch the game
-
The third step to install Garena mod menu apk is to install it and launch the game. To install it, you need to locate the mod menu apk file on your device storage, then tap on it and follow the instructions on the screen. It may take a few minutes for the installation to complete. Once it is done, you can launch the game by tapping on its icon on your home screen or app drawer. You will see a mod menu icon on the top left corner of the game screen. Tap on it to access the cheats and hacks.
-
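As an alternative to tapping the APK file on the device, readers who already use Android's adb tool can sideload the file from a computer. This is a hedged sketch of that alternative, not part of the mod menu itself; the file name mod_menu.apk is a placeholder for whatever the downloaded file is called:

```python
import subprocess

def adb_install_command(apk_path: str) -> list:
    """Build the adb command that sideloads an APK onto a connected
    device; the -r flag replaces an existing installation if present."""
    return ["adb", "install", "-r", apk_path]

# With adb installed and a device connected over USB debugging,
# the install would be run like this:
# subprocess.run(adb_install_command("mod_menu.apk"), check=True)
```

This performs the same installation as tapping the file on the device, so the unknown-sources caveats above still apply.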
What are the risks of using Garena mod menu apk?
-
Possible detection and ban by the game developers
-
One of the biggest risks of using Garena mod menu apk is that you may get detected and banned by the game developers. The game developers have a strict anti-cheat system that monitors the game activity and detects any abnormal behavior. If you are caught using the mod menu apk, you may face consequences such as account suspension, permanent ban, or legal action. You may also lose your progress, achievements, and rewards in the game. Therefore, you should use the mod menu apk at your own risk and discretion.
-
Malware and viruses from unverified sources
-
Another risk of using Garena mod menu apk is that you may get malware and viruses from unverified sources. As mentioned earlier, not all websites that offer the mod menu apk are safe or reliable. Some of them may contain malicious code that can infect your device or steal your data. You may also get unwanted ads, pop-ups, or redirects that can annoy you or compromise your privacy. To avoid this, you should only download the mod menu apk from a trusted source and scan it with an antivirus program before installing it.
-
Loss of original account and data
-
A third risk of using Garena mod menu apk is that you may lose your original account and data. The mod menu apk is not compatible with the official version of the game, so you cannot use your existing account or data with it. You have to create a new account and start from scratch. You also cannot play with other players who are using the official version of the game, as they are on different servers. You may also face compatibility issues or errors while playing the game with the mod menu apk. Therefore, you should backup your original account and data before using the mod menu apk.
-
Conclusion
-
Garena mod menu apk is a modified version of the original game apk that allows users to access various cheats and hacks in Free Fire. It offers features such as unlimited diamonds and coins, wallhack, aimbot, ESP hack, flying hack, unlock characters, and skins hack. However, it also comes with some risks such as possible detection and ban by the game developers, malware and viruses from unverified sources, and loss of original account and data. Therefore, you should use it at your own risk and discretion.
-
FAQs
-
-
Question
Answer
-
Is Garena mod menu apk legal?
No, Garena mod menu apk is not legal. It violates the terms of service and policies of the game developers. It also infringes on their intellectual property rights. Using it may result in legal action from the game developers.
-
Is Garena mod menu apk safe?
Not necessarily. Garena mod menu apk may contain malware or viruses that can harm your device or steal your data. It may also get detected and banned by the game developers. It may also cause compatibility issues or errors while playing the game. Therefore, you should only download it from a trusted source and scan it with an antivirus program before installing it.
-
Can I use Garena mod menu apk with my existing account?
No, you cannot use Garena mod menu apk with your existing account. The mod menu apk is not compatible with the official version of the game, so you have to create a new account and start from scratch. You also cannot play with other players who are using the official version of the game, as they are on different servers.
-
How can I update Garena mod menu apk?
You can update Garena mod menu apk by downloading the latest version from a trusted source and installing it over the previous version. However, you should be careful as some updates may not work with the mod menu apk or may increase the chances of detection and ban by the game developers.
-
Are there any alternatives to Garena mod menu apk?
Yes, there are some alternatives to Garena mod menu apk such as scripts, injectors, or tools that can also provide cheats and hacks for Free Fire. However, they also have similar risks and drawbacks as Garena mod menu apk.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md b/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md
deleted file mode 100644
index 2ce24954456c7b94f39f027352c8c12078e516f8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
Download MiniStrike 3.7: A Fun and Fast-Paced Shooter Game for Android
-
If you are looking for a fun and fast-paced shooter game for your Android device, you should definitely check out MiniStrike. MiniStrike is a tribute to the popular Counter-Strike game, but with a cute and pixelated style. You can play online with other players, or offline with bots, in different modes and maps. You can also customize your character and your weapons with various skins and items. In this article, we will show you how to download MiniStrike 3.7, the latest version of the game, which has bug fixes and improvements, and no ads or in-app purchases.
MiniStrike is a shooter game developed by Malo The Toad, an independent game developer from France. The game was released in 2016 and has been updated regularly since then. The game is inspired by Counter-Strike, one of the most popular and influential shooter games of all time.
-
A tribute to Counter-Strike
-
MiniStrike pays homage to Counter-Strike by recreating some of its iconic features, such as the gameplay mechanics, the weapons, the sounds, and the maps. You can choose between two teams, terrorists or counter-terrorists, and complete different objectives, such as planting or defusing bombs, rescuing hostages, or eliminating the enemy team. You can also buy weapons and equipment at the beginning of each round, using the money you earn from killing enemies or completing objectives.
-
A multiplayer game with different modes and maps
-
MiniStrike is a multiplayer game that allows you to play online with other players from around the world, or offline with bots. You can join or create rooms with different settings, such as the number of players, the game mode, and the map. The game has four modes: deathmatch, team deathmatch, bomb defusal, and hostage rescue. The game also has 15 maps, some of which are based on Counter-Strike maps, such as de_dust2, cs_office, or de_nuke.
-
A customizable game with skins and weapons
-
MiniStrike is a customizable game that lets you personalize your character and your weapons with various skins and items. You can unlock skins by playing the game or by watching ads. You can also buy items with coins that you earn from playing or from daily rewards. You can equip different items for your head, body, hands, feet, and backpack. You can also change the skin of your weapons, such as pistols, rifles, shotguns, snipers, or knives.
-
Why download MiniStrike 3.7?
-
MiniStrike 3.7 is the latest version of the game that was released on June 14th, 2021. This version has several bug fixes and improvements that make the game more stable and enjoyable. Here are some of the reasons why you should download MiniStrike 3.7:
-
The latest version with bug fixes and improvements
-
MiniStrike 3.7 has fixed some of the issues that were reported by players in previous versions, such as crashes, glitches, lagging, or freezing. The developer has also improved some of the game's features, such as the graphics quality, the sound effects, the user interface, and the gameplay balance, and has added new content to the game, such as new skins, new weapons, and new maps.
-
-
The best way to enjoy the game without ads or in-app purchases
-
MiniStrike 3.7 is the best way to enjoy the game without any ads or in-app purchases. The game is completely free and does not require any registration or login. You can play the game without any interruptions or distractions from ads or pop-ups. You can also access all the features and content of the game without spending any real money. You can unlock skins and items by playing the game or by watching ads voluntarily. You can also earn coins by playing the game or by claiming daily rewards.
-
The easiest way to install the game on your device
-
MiniStrike 3.7 is the easiest way to install the game on your Android device. You do not need to download the game from the Google Play Store, which may not be compatible with your device or may not have the latest version of the game. You can download the game from the APKPure website, which is a trusted and reliable source of APK files for Android apps and games. You can install the game on your device in a few simple steps, which we will explain in the next section.
-
How to download MiniStrike 3.7?
-
Downloading MiniStrike 3.7 is very easy and fast. You just need to follow these steps:
-
Step 1: Go to the APKPure website
-
The first step is to go to the APKPure website, which is https://apkpure.com/ministrike/com.ministrike. This is where you can find the latest version of MiniStrike 3.7, as well as other versions of the game. You can also read more information about the game, such as its description, features, screenshots, reviews, and ratings.
-
Step 2: Click on the download button
-
The second step is to click on the download button, which is located at the top right corner of the website. This will start downloading the APK file of MiniStrike 3.7 on your device. The file size is about 35 MB, so it should not take too long to download.
-
Step 3: Allow unknown sources on your device
-
The third step is to allow unknown sources on your device, which means that you can install apps and games that are not from the Google Play Store. To do this, you need to go to your device settings, then security, then enable unknown sources. This will allow you to install MiniStrike 3.7 on your device.
-
Step 4: Install the APK file and launch the game
-
The fourth and final step is to install the APK file and launch the game. To do this, you need to locate the downloaded file on your device, then tap on it to start installing it. Once the installation is complete, you can tap on the open button to launch the game. Alternatively, you can find the game icon on your home screen or app drawer and tap on it to launch the game.
-
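Before installing any file obtained outside the Play Store, it is worth checking that the download is at least structurally an APK. An APK is a ZIP archive that contains an AndroidManifest.xml entry, so a quick sanity check can be sketched as below; note that this only catches truncated or mislabeled downloads, not malware:

```python
import zipfile

def looks_like_apk(path) -> bool:
    """Return True if the file is a ZIP archive containing an
    AndroidManifest.xml entry, as every valid APK does."""
    try:
        with zipfile.ZipFile(path) as z:
            return "AndroidManifest.xml" in z.namelist()
    except zipfile.BadZipFile:
        return False
```

If the check fails on a freshly downloaded file, re-download it before attempting the installation in Step 4.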
Conclusion
-
MiniStrike 3.7 is a fun and fast-paced shooter game for Android devices that pays tribute to Counter-Strike. You can play online with other players or offline with bots in different modes and maps. You can also customize your character and your weapons with various skins and items. MiniStrike 3.7 is the latest version of the game that has bug fixes and improvements, and no ads or in-app purchases. You can download MiniStrike 3.7 from the APKPure website in a few easy steps.
-
FAQs
-
Here are some of the frequently asked questions about MiniStrike 3.7:
-
Q: Is MiniStrike 3.7 safe to download and install?
-
A: Yes, MiniStrike 3.7 is safe to download and install from the APKPure website, which is a trusted and reliable source of APK files for Android apps and games. The website scans all the files for viruses and malware before uploading them.
-
Q: Is MiniStrike 3.7 compatible with my device?
-
A: MiniStrike 3.7 is compatible with most Android devices that have Android 4.1 or higher as their operating system. However, some devices may not be able to run the game smoothly due to their hardware specifications or performance issues.
-
Q: How can I update MiniStrike 3.7?
-
A: You can update MiniStrike 3.7 by downloading and installing the latest version of the game from the APKPure website, which will always have the newest version of the game. You can also enable the auto-update option on the website, which will notify you when a new version of the game is available and download it automatically.
-
Q: How can I contact the developer of MiniStrike 3.7?
-
A: You can contact the developer of MiniStrike 3.7 by sending an email to ministrikegame@gmail.com. You can also follow the developer on Twitter at @MaloTheToad, where he posts updates and news about the game.
-
Q: How can I support the developer of MiniStrike 3.7?
-
A: You can support the developer of MiniStrike 3.7 by rating and reviewing the game on the APKPure website, or by sharing the game with your friends and family. You can also donate to the developer via PayPal at https://www.paypal.me/malothetoad, or by watching ads voluntarily in the game.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md b/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md
deleted file mode 100644
index 6f088fc4b7fcda4677cb5c5b514da35795b2e308..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md
+++ /dev/null
@@ -1,281 +0,0 @@
-
-
NBA 2K20 APK: The Ultimate Basketball Game for Android
-
If you are a fan of basketball and want to experience the thrill of playing on your mobile device, then you should try NBA 2K20 APK. This is the latest version of the popular NBA 2K series, which is developed by 2K, Inc. and offers the most realistic and immersive basketball simulation ever. In this article, we will tell you everything you need to know about NBA 2K20 APK, including its features, how to download and install it, how to play it, and its pros and cons.
-
What is NBA 2K20 APK?
-
NBA 2K20 APK is an Android game that lets you play as your favorite NBA players and teams in various game modes and challenges. You can create your own custom player, join a team, compete in tournaments, or just enjoy a casual game with friends. You can also explore the NBA culture and lifestyle, with exclusive content from celebrities, influencers, and legends. NBA 2K20 APK is the ultimate basketball game for Android, with stunning graphics, realistic physics, smooth controls, and engaging gameplay.
NBA 2K20 APK has many features that make it stand out from other basketball games. Here are some of them:
-
- Realistic graphics and gameplay
-
NBA 2K20 APK uses advanced technology to deliver lifelike graphics and animations, with detailed player models, facial expressions, movements, and reactions. The game also features realistic sound effects, commentary, crowd noise, and music. The gameplay is smooth and responsive, with intuitive controls and mechanics. You can feel the impact of every shot, pass, dribble, steal, block, and dunk.
-
- Multiple game modes and challenges
-
NBA 2K20 APK offers a variety of game modes and challenges to suit your preferences and skills. You can play in the following modes:
-
-
MyCAREER: This is the main mode where you create your own custom player and follow his journey from rookie to legend. You can customize your player's appearance, attributes, skills, style, and equipment. You can also interact with other players, coaches, agents, fans, and media. You can earn coins, rewards, badges, and endorsements as you progress.
-
MyTEAM: This is the mode where you build your own dream team of NBA players from past and present. You can collect cards, trade players, upgrade your roster, and compete in various online and offline modes. You can also participate in special events, challenges, tournaments, and seasons.
-
Blacktop: This is the mode where you play street basketball in various locations around the world. You can choose from different formats, such as 1v1, 2v2, 3v3, or 5v5. You can also customize the rules, time limit, score limit, difficulty level, and court size.
-
Quick Game: This is the mode where you play a single game with any NBA team of your choice. You can choose from different settings, such as quarter length, difficulty level, camera angle, and uniforms.
-
Play Now Online: This is the mode where you play online against other players from around the world. You can choose from different tiers, leagues, and rankings. You can also chat with your opponents and view their stats and records.
-
2KTV: This is the mode where you watch the official NBA 2K TV show, hosted by Alexis Morgan and Chris Manning. You can learn tips and tricks, watch interviews, get updates, and participate in interactive quizzes and polls.
-
-
- Customization and personalization options
-
NBA 2K20 APK gives you the freedom to customize and personalize your game experience. You can change the settings, such as the language, subtitles, controls, camera, audio, and graphics. You can also edit the rosters, ratings, contracts, injuries, and transactions of any NBA team. You can also create your own custom teams, players, jerseys, courts, logos, and arenas.
-
- Online multiplayer and social features
-
NBA 2K20 APK allows you to play online with or against other players from around the world. You can join or create a crew, chat with your friends, send messages, invite players, join parties, and voice chat. You can also share your game highlights, screenshots, videos, and achievements on social media platforms, such as Facebook, Twitter, Instagram, and YouTube.
-
How to download and install NBA 2K20 APK?
-
If you want to download and install NBA 2K20 APK on your Android device, you need to follow these steps:
-
- Requirements and compatibility
-
Before you download and install NBA 2K20 APK, you need to make sure that your device meets the following requirements:
-
-
Your device must have Android 4.3 or higher operating system.
-
Your device must have at least 3 GB of free storage space.
-
Your device must have at least 2 GB of RAM.
-
Your device must have a stable internet connection.
-
Your device must support OpenGL ES 3.0 or higher.
-
-
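If the device is connected to a computer with adb, the Android version requirement above can be checked programmatically. A minimal sketch follows; the version comparison is the core logic, while the adb call that supplies the release string is left as a comment because it needs a connected device:

```python
MIN_VERSION = (4, 3)  # NBA 2K20 APK requires Android 4.3 or higher

def parse_release(release: str) -> tuple:
    """Turn a release string like '4.4.2' or '9' into a comparable tuple."""
    return tuple(int(part) for part in release.strip().split(".")[:2])

def meets_android_requirement(release: str) -> bool:
    """Compare the device's Android release against the game's minimum."""
    return parse_release(release) >= MIN_VERSION

# On a connected device, the release string would come from:
#   adb shell getprop ro.build.version.release
```

For example, a device reporting release "4.1" fails the check, while "4.3" or any later release passes.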
- Steps to download and install NBA 2K20 APK
-
After you check the requirements and compatibility of your device, you can proceed to download and install NBA 2K20 APK by following these steps:
-
-
-
Go to the official website of NBA 2K20 APK (https://www.nba2k.com/android) and click on the download button.
-
Wait for the download to finish and locate the NBA 2K20 APK file on your device.
-
Tap on the NBA 2K20 APK file and allow the installation from unknown sources if prompted.
-
Wait for the installation to complete and launch the game.
-
Enjoy playing NBA 2K20 APK on your Android device.
-
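The install steps above assume the downloaded file is an intact APK. Since an APK is just a ZIP archive that must contain an AndroidManifest.xml entry, a quick sanity check on a computer can catch a corrupted or truncated download before you copy it to your phone. The sketch below is purely illustrative — `looks_like_valid_apk` is a made-up helper for this article, not an official 2K or Android tool:

```python
import zipfile

def looks_like_valid_apk(path):
    """Rough sanity check before sideloading: an APK is a ZIP archive
    that must contain an AndroidManifest.xml entry."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as apk:
        return "AndroidManifest.xml" in apk.namelist()
```

If this returns False, re-download the file before trying to install it.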
-
How to play NBA 2K20 APK?
-
If you are new to NBA 2K20 APK or want to improve your skills, you might want to know some tips and tricks on how to play the game. Here are some of them:
-
- Tips and tricks for beginners
-
If you are a beginner in NBA 2K20 APK, you might want to follow these tips and tricks:
-
-
Start with the tutorial mode to learn the basic controls and mechanics of the game.
-
Play in the quick game mode to practice your skills and get familiar with the teams and players.
-
Adjust the difficulty level according to your preference and skill level. You can choose from rookie, pro, all-star, superstar, or hall of fame.
-
Use the auto-play feature if you want to let the game play for you. You can also switch between manual and auto-play anytime during the game.
-
Use the pause menu to access various options, such as settings, stats, replays, substitutions, and tips.
-
Use the virtual joystick and buttons to control your player and perform various actions, such as moving, shooting, passing, dribbling, stealing, blocking, and dunking.
-
Use the sprint button to run faster and the turbo button to boost your energy and performance.
-
Use the shot meter to time your shots and aim for the green zone for a perfect shot.
-
Use the pro stick to perform advanced moves and skills, such as spin moves, step backs, crossovers, fadeaways, and euro steps.
-
Use the icon pass to pass the ball to a specific teammate by tapping on his icon.
-
Use the pick and roll to set a screen for your teammate and create an open space for a shot or a drive.
-
Use the post up to back down your defender and create a favorable position for a shot or a pass.
-
Use the defensive assist to help you stay in front of your opponent and prevent him from scoring.
-
Use the swipe gestures to perform quick actions, such as stealing, blocking, rebounding, and switching players.
-
-
- Best players and teams to choose
-
If you want to have an edge over your opponents in NBA 2K20 APK, you might want to choose the best players and teams in the game. Here are some of them:
-
| Player | Team | Overall Rating |
| --- | --- | --- |
| LeBron James | Los Angeles Lakers | 97 |
| Kawhi Leonard | Los Angeles Clippers | 97 |
| Giannis Antetokounmpo | Milwaukee Bucks | 96 |
| James Harden | Houston Rockets | 96 |
| Kevin Durant | Brooklyn Nets | 96 |
| Stephen Curry | Golden State Warriors | 95 |
| Anthony Davis | Los Angeles Lakers | 94 |
| Luka Doncic | Dallas Mavericks | 94 |
| Damian Lillard | Portland Trail Blazers | 94 |
| Joel Embiid | Philadelphia 76ers | 91 |
| Kyrie Irving | Brooklyn Nets | 91 |
| Russell Westbrook | Houston Rockets | 90 |
As you can see, these players are the highest rated in the game and have the best skills, attributes, and abilities. They can dominate the game in any position and situation. You can also choose from the following teams, which are the best in the game based on their overall rating, roster, chemistry, and performance:
-
| Team | Overall Rating |
| --- | --- |
| Los Angeles Lakers | 97 |
| Los Angeles Clippers | 96 |
| Milwaukee Bucks | 95 |
| Brooklyn Nets | 94 |
| Houston Rockets | 93 |
| Golden State Warriors | 92 |
| Philadelphia 76ers | 91 |
| Dallas Mavericks | 90 |
| Portland Trail Blazers | 89 |
| Boston Celtics | 88 |
| Toronto Raptors | 87 |
-
These teams have the best combination of star players, depth, balance, and chemistry. They can compete with any other team in the game and have a high chance of winning the championship.
-
- How to earn coins and rewards
-
If you want to unlock more features, items, and content in NBA 2K20 APK, you need to earn coins and rewards. Here are some ways to do that:
-
-
Complete the daily, weekly, and monthly objectives and missions. You can find them in the main menu or the game modes. They will give you coins, cards, packs, badges, and other rewards.
-
Play in the MyTEAM mode and participate in the events, challenges, tournaments, and seasons. You can earn coins, cards, packs, badges, and other rewards based on your performance and ranking.
-
Play in the MyCAREER mode and progress through your career. You can earn coins, rewards, badges, and endorsements based on your performance and popularity.
-
Watch the 2KTV show and answer the interactive quizzes and polls. You can earn coins, cards, packs, badges, and other rewards based on your answers.
-
Use the locker codes feature to redeem free codes that give you coins, cards, packs, badges, and other rewards. You can find the codes on the official NBA 2K social media accounts or websites.
-
Use the spin the wheel feature to spin a wheel that gives you a random reward. You can access this feature once a day in the MyTEAM or MyCAREER mode.
-
-
Pros and cons of NBA 2K20 APK
-
NBA 2K20 APK is not a perfect game and has its pros and cons. Here are some of them:
-
- Pros
-
-
The game has amazing graphics and sound effects that make it look and feel like a real NBA game.
-
The game has multiple game modes and challenges that offer a lot of variety and replay value.
-
The game has a lot of customization and personalization options that allow you to create your own unique player and team.
-
The game has online multiplayer and social features that allow you to play with or against other players from around the world.
-
The game has exclusive content from celebrities, influencers, and legends that enhance the NBA culture and lifestyle.
-
-
- Cons
-
-
The game requires a lot of storage space and RAM to run smoothly on your device.
-
The game requires a stable internet connection to access some of the features and content.
-
The game has some bugs and glitches that affect the gameplay and performance.
-
The game has some ads and in-app purchases that can be annoying or expensive.
-
The game can be difficult or frustrating for some players due to the high level of competition and skill required.
Conclusion
-
NBA 2K20 APK is a great game for basketball fans and gamers who want to enjoy a realistic and immersive basketball simulation on their Android devices. The game has many features, modes, challenges, and content that make it fun and engaging. The game also has some drawbacks, such as the high requirements, the internet dependency, the bugs and glitches, the ads and in-app purchases, and the difficulty level. However, these cons do not outweigh the pros and do not ruin the overall experience of the game. NBA 2K20 APK is definitely worth downloading and playing if you love basketball and want to experience the ultimate basketball game for Android.
-
FAQs
-
Here are some frequently asked questions about NBA 2K20 APK:
-
- Is NBA 2K20 APK free?
-
Yes, NBA 2K20 APK is free to download and play. However, the game has some ads and in-app purchases that can enhance your game experience or unlock more features and content.
-
- Is NBA 2K20 APK safe?
-
Generally, yes, provided you download it from the official website or another trusted source. Copies from unknown sources can be bundled with viruses, malware, or spyware that can harm your device or data, so always check where a file comes from before installing it.
-
- Is NBA 2K20 APK offline?
-
No, NBA 2K20 APK is not offline. The game requires a stable internet connection to access some of the features and content, such as the online multiplayer, the social features, the updates, and the exclusive content. You can still play some of the modes and challenges offline, but you will miss out on some of the benefits and rewards of the online features.
-
- How to update NBA 2K20 APK?
-
To update NBA 2K20 APK, you need to follow these steps:
-
-
Go to the official website of NBA 2K20 APK (https://www.nba2k.com/android) and check if there is a new version available.
-
If there is a new version available, click on the download button and wait for the download to finish.
-
Locate the NBA 2K20 APK file on your device and tap on it to install the new version.
-
Wait for the installation to complete and launch the game.
-
Enjoy playing the updated version of NBA 2K20 APK.
-
-
- How to contact NBA 2K20 APK support?
-
If you have any issues, questions, feedback, or suggestions about NBA 2K20 APK, you can contact the NBA 2K20 APK support team by following these steps:
-
-
Go to the main menu of the game and tap on the settings icon.
-
Tap on the help button and choose the option that suits your issue or question.
-
Fill out the form with your details and message and submit it.
-
Wait for a response from the NBA 2K20 APK support team.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md b/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md
deleted file mode 100644
index 9b5f9485dd845a46c87c861f48b23c1eeaf791b8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
How to Download Cars for BeamNG.drive: A Complete Guide
-
If you are a fan of realistic driving games, you have probably heard of BeamNG.drive, a dynamic soft-body physics vehicle simulator that can do just about anything. Whether you want to crash cars, race them, or customize them, BeamNG.drive offers you a wide range of possibilities and options. But did you know that you can also download cars for BeamNG.drive from various sources and add them to your game? In this article, we will show you how to download cars for BeamNG.drive, why you should do it, and what tips and tricks you should know before installing them.
Before we get into how to download cars for BeamNG.drive, let's first understand what this game is all about. BeamNG.drive is a game that was released in 2015 as an early access title on Steam, and it has been constantly updated and improved ever since. It is developed by BeamNG, a small team of passionate programmers and artists who have created their own physics engine from scratch. The game has three main features that make it stand out from other driving games:
-
A realistic driving simulator with soft-body physics
-
The core of BeamNG.drive is its physics engine, which simulates every component of a vehicle in real time using nodes (mass points) and beams (springs). This means that every crash, collision, or deformation is calculated realistically and accurately, resulting in true-to-life behavior. You can see your car crumple, bend, break, or explode depending on how you drive it. You can also tweak every aspect of your car's performance, such as wheels, suspension, engines, brakes, steering, gears, etc. The game also features realistic sounds, graphics, lighting, weather, and damage effects.
-
A sandbox game with dozens of customizable vehicles and environments
-
BeamNG.drive offers you dozens of refined, totally customizable vehicles for you to experiment with. Whether it's a compact car or massive truck, you can tweak away at all the moving parts to create just about any driving experience you want. You can also choose from 12 sprawling open-world environments that range from tropical jungles to urban highways. Each environment has its own terrain, roads, and scenery to explore.
A modding-friendly game with a vibrant community
-
One of the best things about BeamNG.drive is that it is very modding-friendly. You can create your own vehicles, maps, scenarios, skins, sounds, and more using the game's built-in tools or external software. You can also download and install mods made by other players from various sources, such as the official BeamNG website, Steam Workshop, or other websites. The game has a very active and supportive community of modders and players who share their creations, feedback, and ideas. You can also join online multiplayer sessions and play with or against other people.
-
Why Download Cars for BeamNG.drive?
-
Now that you know what BeamNG.drive is, you might be wondering why you should download cars for it. After all, the game already has plenty of vehicles to choose from, right? Well, there are several reasons why downloading cars for BeamNG.drive can enhance your gameplay experience and make it more fun and diverse. Here are some of them:
-
To enhance your gameplay experience with new models, features, and styles
-
Downloading cars for BeamNG.drive can give you access to new models that are not available in the base game. These models can have different features, such as unique engines, transmissions, suspensions, body parts, etc. They can also have different styles, such as classic cars, sports cars, muscle cars, supercars, etc. You can find cars that suit your preferences and tastes, or try out something new and different. You can also mix and match different parts from different mods to create your own custom car.
-
-
To explore different types of vehicles and driving scenarios
-
Downloading cars for BeamNG.drive can also allow you to explore different types of vehicles and driving scenarios that you might not encounter in the base game. For example, you can download cars that are designed for off-road driving, drifting, racing, stunt driving, demolition derby, etc. You can also download cars that are based on real-life vehicles or fictional ones from movies, games, or other media. You can test your skills and challenge yourself with different vehicles and situations.
-
To support the modders and creators who make the game more diverse and fun
-
Another reason why you should download cars for BeamNG.drive is to support the modders and creators who make them. These people spend a lot of time and effort to create high-quality mods that add value and variety to the game. They also share their mods for free for everyone to enjoy. By downloading their mods, you are showing your appreciation and encouragement for their work. You are also helping them to improve their skills and create more mods in the future.
-
How to Download Cars for BeamNG.drive?
-
Now that you know why you should download cars for BeamNG.drive, let's get into how to do it. There are three main sources where you can download cars for BeamNG.drive: the official BeamNG website, Steam Workshop, and other sources. Each source has its own advantages and disadvantages, so you should choose the one that suits you best. Here is how to download cars from each source:
-
From the official BeamNG website
-
The official BeamNG website is the primary source where you can download cars for BeamNG.drive. It has a dedicated section called Vehicles, where you can find hundreds of car mods made by the developers or the community. The website has a simple and user-friendly interface where you can browse, search, filter, sort, and download car mods easily. Here is how to download cars from the official BeamNG website:
-
-
Browse the Vehicles category and find the car you want. You can use the filters on the left side to narrow down your search by type, style, rating, popularity, etc.
-
Click on the car mod you want to view its details page. You can see some screenshots, videos, descriptions, ratings, comments, and other information about the mod.
-
Click on the Download button on the top right corner of the page and save the file to your computer. The file will be in ZIP format.
-
Extract the file using a program like WinRAR or 7-Zip and copy the folder inside it to your BeamNG.drive mods folder. The default location of this folder is C:\Users\YourName\Documents\BeamNG.drive\mods.
-
Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
-
Enjoy your new car!
-
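The extract-and-copy steps above can also be scripted instead of done by hand. Below is a minimal Python sketch, assuming the default mods folder mentioned in step 4; `install_mod` is a hypothetical helper for this article, not an official BeamNG tool:

```python
import zipfile
from pathlib import Path

def install_mod(zip_path, mods_dir):
    """Extract a downloaded mod archive into the BeamNG.drive mods folder.

    Returns the sorted top-level entries that were extracted, so you can
    see which mod folder(s) the archive added.
    """
    mods_dir = Path(mods_dir)
    mods_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(mods_dir)
        return sorted({name.split("/")[0] for name in archive.namelist()})
```

For example, `install_mod("mod.zip", r"C:\Users\YourName\Documents\BeamNG.drive\mods")` would unpack the archive into the default mods folder. Only run this on archives from sources you trust, for the same reasons given in the next section.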
From Steam Workshop
-
Another source where you can download cars for BeamNG.drive is Steam Workshop, a platform where Steam users can create and share content for various games. Steam Workshop has a large and active community of modders and players who upload and download car mods for BeamNG.drive. Steam Workshop has some advantages over the official BeamNG website, such as automatic updates, easier installation, and integration with Steam. However, it also has some disadvantages, such as lower quality control, limited search options, and dependency on Steam. Here is how to download cars from Steam Workshop:
-
-
Subscribe to the car mod you want from the Steam Workshop page. You can access this page by launching Steam, going to your Library, right-clicking on BeamNG.drive, selecting Properties, and clicking on Browse the Workshop. You can also use this link to go directly to the BeamNG.drive workshop page.
-
Browse or search for the car mod you want. You can use the tabs on the right side to filter by categories, tags, ratings, etc. You can also use the search bar on the top right corner to enter keywords.
-
Click on the car mod you want to view its details page. You can see some screenshots, videos, descriptions, ratings, comments, and other information about the mod.
-
Click on the Subscribe button on the top right corner of the page. This will automatically download the mod to your computer and install it to your game.
-
Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
-
Enjoy your new car!
-
From other sources
-
The third source where you can download cars for BeamNG.drive is from other websites or forums that host car mods for the game. These sources can have some advantages over the official BeamNG website and Steam Workshop, such as more variety, exclusivity, or novelty. However, they also have some disadvantages, such as potential viruses, malware, or incompatible files. You should be careful and cautious when downloading car mods from other sources, and follow the instructions provided by the mod author or website. Here is how to download cars from other sources:
-
-
Be careful of potential viruses, malware, or incompatible files. Before downloading any car mod from an unknown source, you should scan it with an antivirus program and check its compatibility and quality. You should also read the reviews and comments from other users who have downloaded the mod.
-
Follow the instructions provided by the mod author or website. Different car mods may have different installation methods or requirements. You should follow the instructions carefully and make sure you have everything you need to run the mod. Some common steps are:
-
Download the car mod file from the website or forum. The file may be in ZIP, RAR, 7Z, or other formats.
-
Extract the file using a program like WinRAR or 7-Zip and copy the folder inside it to your BeamNG.drive mods folder. The default location of this folder is C:\Users\YourName\Documents\BeamNG.drive\mods.
-
Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
-
-
-
Enjoy your new car!
-
-
Tips and Tricks for Downloading Cars for BeamNG.drive
-
Downloading cars for BeamNG.drive can be a fun and rewarding experience, but it can also be a frustrating and disappointing one if you don't know what you are doing. To avoid any problems or issues with your car mods, you should follow some tips and tricks that will help you download, install, and use them properly. Here are some of them:
-
Read the mod description, reviews, and comments carefully before downloading
-
Before you download any car mod for BeamNG.drive, you should read its description, reviews, and comments carefully. This will help you understand what the mod does, how it works, what it requires, and what it offers. You can also learn about any bugs, glitches, or compatibility issues that the mod may have. You can also see what other users think about the mod and how they rate it. This will help you decide whether the mod is worth downloading or not.
-
Check for updates and patches for your mods regularly
-
After you download and install a car mod for BeamNG.drive, you should check for updates and patches for it regularly. This will help you keep your mod up to date and fix any errors or problems that it may have. You can check for updates and patches by visiting the source where you downloaded the mod from, such as the official BeamNG website, Steam Workshop, or other websites or forums. You can also use some tools or programs that can automatically update your mods for you.
-
Backup your game files and mods before installing new ones
-
Before you install any new car mod for BeamNG.drive, you should backup your game files and mods first. This will help you prevent any data loss or corruption that may occur due to installing a faulty or incompatible mod. You can backup your game files and mods by copying them to another location on your computer or an external drive. You can also use some tools or programs that can backup your game files and mods for you.
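As a rough illustration, a backup like this can be automated with a few lines of Python. The folder names below are placeholders and `backup_mods` is a made-up helper, not an official tool:

```python
import shutil
import time
from pathlib import Path

def backup_mods(mods_dir, backup_root):
    """Copy the whole mods folder to a timestamped snapshot,
    e.g. backups/mods-20240101-120000, and return the snapshot path."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / f"mods-{stamp}"
    shutil.copytree(mods_dir, target)  # fails if the snapshot already exists
    return target
```

Running this before installing a new mod gives you a known-good copy of your mods folder to restore if the new mod corrupts anything.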
-
Don't use too many mods at once to avoid performance issues or crashes
-
While using car mods for BeamNG.drive can be fun and exciting, it can also be taxing on your computer's resources and stability. If you use too many mods at once, you may experience performance issues such as lagging, stuttering, freezing, or crashing. To avoid this, you should limit the number of mods you use at a time and disable any unnecessary ones. You should also monitor your computer's CPU, RAM, GPU, and disk usage while playing the game with mods.
-
Conclusion
-
In conclusion, downloading cars for BeamNG.drive is a great way to make your gameplay experience more fun and diverse. You can download cars from various sources such as the official BeamNG website, Steam Workshop, or other websites and forums, or create your own using the game's tools or external software. Just be careful when downloading and installing car mods: backup your game files and mods, check for updates and patches, read the mod descriptions and reviews, and don't use too many mods at once. We hope this article has helped you learn how to download cars for BeamNG.drive, why you should do it, and what to know before installing them. If you are ready to try out some car mods, here are some popular and recommended ones you can download from the official BeamNG website or Steam Workshop:

- The CrashHard Dummy: a realistic crash test dummy that can be used in any vehicle.
- The ETK 800 Series: a series of luxury sedans with various configurations and features.
- The Hirochi Sunburst: a sporty hatchback with a rally-inspired design and performance.
- The Gavril D-Series: a versatile pickup truck with a lot of customization options and accessories.
- The Ibishu Pessima: a classic Japanese sedan with two generations and a lot of nostalgia.

Have fun downloading cars for BeamNG.drive and enjoy the game!
FAQs
-
Here are some frequently asked questions about downloading cars for BeamNG.drive:
-
-
Q: How do I uninstall a car mod for BeamNG.drive?
-
A: To uninstall a car mod for BeamNG.drive, you can either delete the mod folder from your BeamNG.drive mods folder, or disable the mod from the in-game mod manager. If you downloaded the mod from Steam Workshop, you can also unsubscribe from it on the Steam Workshop page.
-
Q: How do I update a car mod for BeamNG.drive?
-
A: To update a car mod for BeamNG.drive, you can either download the latest version of the mod from the source where you downloaded it from, or use a tool or program that can automatically update your mods for you. If you downloaded the mod from Steam Workshop, it will be updated automatically by Steam.
-
Q: How do I create my own car mod for BeamNG.drive?
-
A: To create your own car mod for BeamNG.drive, you can use the game's built-in tools or external software to design and model your car. You can also use existing car mods as a base or reference for your car. You can find more information and tutorials on how to create car mods on the official BeamNG website or forum.
-
Q: How do I share my car mod for BeamNG.drive?
-
A: To share your car mod for BeamNG.drive, you can upload it to the official BeamNG website, Steam Workshop, or other websites or forums that host car mods for the game. You should also provide a detailed description, screenshots, videos, and other information about your mod to attract more users and feedback.
-
Q: How do I find more car mods for BeamNG.drive?
-
A: To find more car mods for BeamNG.drive, you can visit the official BeamNG website, Steam Workshop, or other websites or forums that host car mods for the game. You can also use search engines, social media, YouTube, or other platforms to discover new or popular car mods.
-
-
-
\ No newline at end of file
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py
deleted file mode 100644
index 2f93cab80ded8e7239bb96eb6e364c3fd4fb46d9..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .ldm import LatentDiffusion
-from .utils import seed_everything
-from .pipeline import *
\ No newline at end of file
diff --git a/spaces/AIFILMS/speecht5-tts-demo/README.md b/spaces/AIFILMS/speecht5-tts-demo/README.md
deleted file mode 100644
index b00de1f0412a56568cc8b554a4ee8b880a8b7afb..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/speecht5-tts-demo/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: SpeechT5 Speech Synthesis Demo
-emoji: 👩🎤
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: Matthijs/speecht5-tts-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py
deleted file mode 100644
index db5d5ca1765928e4b047db04435a8a39b52592ca..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import librosa
-
-from utils.hparams import hparams
-import numpy as np
-
-
-def denoise(wav, v=0.1):
- spec = librosa.stft(y=wav, n_fft=hparams['fft_size'], hop_length=hparams['hop_size'],
- win_length=hparams['win_size'], pad_mode='constant')
- spec_m = np.abs(spec)
- spec_m = np.clip(spec_m - v, a_min=0, a_max=None)
- spec_a = np.angle(spec)
-
- return librosa.istft(spec_m * np.exp(1j * spec_a), hop_length=hparams['hop_size'],
- win_length=hparams['win_size'])
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py
deleted file mode 100644
index 7f1fa38eedd9e9cd2580143ceb92aba8f81becf3..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from sklearn import metrics
-
-from pytorch_utils import forward
-
-
-class Evaluator(object):
- def __init__(self, model):
- """Evaluator.
-
- Args:
- model: object
- """
- self.model = model
-
- def evaluate(self, data_loader):
- """Forward evaluation data and calculate statistics.
-
- Args:
- data_loader: object
-
- Returns:
- statistics: dict,
- {'average_precision': (classes_num,), 'auc': (classes_num,)}
- """
-
- # Forward
- output_dict = forward(
- model=self.model,
- generator=data_loader,
- return_target=True)
-
- clipwise_output = output_dict['clipwise_output'] # (audios_num, classes_num)
- target = output_dict['target'] # (audios_num, classes_num)
-
- average_precision = metrics.average_precision_score(
- target, clipwise_output, average=None)
-
- auc = metrics.roc_auc_score(target, clipwise_output, average=None)
-
- statistics = {'average_precision': average_precision, 'auc': auc}
-
- return statistics
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
deleted file mode 100644
index 071dd148c772f398e87ecbfc836dcfa4a3ae01af..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
+++ /dev/null
@@ -1,106 +0,0 @@
-""" timm model adapter
-
-Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model.
-"""
-from collections import OrderedDict
-
-import torch.nn as nn
-
-try:
- import timm
- from timm.models.layers import Mlp, to_2tuple
- from timm.models.layers.attention_pool2d import RotAttentionPool2d
- from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d
-except ImportError as e:
- timm = None
-
-from .utils import freeze_batch_norm_2d
-
-
-class TimmModel(nn.Module):
- """ timm model adapter
- # FIXME this adapter is a work in progress, may change in ways that break weight compat
- """
-
- def __init__(
- self,
- model_name,
- embed_dim,
- image_size=224,
- pool='avg',
- proj='linear',
- drop=0.,
- pretrained=False):
- super().__init__()
- if timm is None:
- raise RuntimeError("Please `pip install timm` to use timm models.")
-
- self.image_size = to_2tuple(image_size)
- self.trunk = timm.create_model(model_name, pretrained=pretrained)
- feat_size = self.trunk.default_cfg.get('pool_size', None)
- feature_ndim = 1 if not feat_size else 2
- if pool in ('abs_attn', 'rot_attn'):
- assert feature_ndim == 2
- # if attn pooling used, remove both classifier and default pool
- self.trunk.reset_classifier(0, global_pool='')
- else:
- # reset global pool if pool config set, otherwise leave as network default
- reset_kwargs = dict(global_pool=pool) if pool else {}
- self.trunk.reset_classifier(0, **reset_kwargs)
- prev_chs = self.trunk.num_features
-
- head_layers = OrderedDict()
- if pool == 'abs_attn':
- head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim)
- prev_chs = embed_dim
- elif pool == 'rot_attn':
- head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim)
- prev_chs = embed_dim
- else:
- assert proj, 'projection layer needed if non-attention pooling is used.'
-
- # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used
- if proj == 'linear':
- head_layers['drop'] = nn.Dropout(drop)
- head_layers['proj'] = nn.Linear(prev_chs, embed_dim)
- elif proj == 'mlp':
- head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop)
-
- self.head = nn.Sequential(head_layers)
-
- def lock(self, unlocked_groups=0, freeze_bn_stats=False):
- """ lock modules
- Args:
- unlocked_groups (int): leave last n layer groups unlocked (default: 0)
- """
- if not unlocked_groups:
- # lock full model
- for param in self.trunk.parameters():
- param.requires_grad = False
- if freeze_bn_stats:
- freeze_batch_norm_2d(self.trunk)
- else:
- # NOTE: partial freeze requires latest timm (master) branch and is subject to change
- try:
- # FIXME import here until API stable and in an official release
- from timm.models.helpers import group_parameters, group_modules
- except ImportError:
- raise RuntimeError(
- 'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`')
- matcher = self.trunk.group_matcher()
- gparams = group_parameters(self.trunk, matcher)
- max_layer_id = max(gparams.keys())
- max_layer_id = max_layer_id - unlocked_groups
- for group_idx in range(max_layer_id + 1):
- group = gparams[group_idx]
- for param in group:
- self.trunk.get_parameter(param).requires_grad = False
- if freeze_bn_stats:
- gmodules = group_modules(self.trunk, matcher, reverse=True)
- gmodules = {k for k, v in gmodules.items() if v <= max_layer_id}
- freeze_batch_norm_2d(self.trunk, gmodules)
-
- def forward(self, x):
- x = self.trunk(x)
- x = self.head(x)
- return x
diff --git a/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py b/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py
deleted file mode 100644
index dc590d4734e14cad93ab5560cb7b4f08bd45c416..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py
+++ /dev/null
@@ -1,133 +0,0 @@
-from abc import abstractmethod
-
-
-class PromptComponent:
- def __init__(self):
- pass
-
- @abstractmethod
- def get_prompt(self, agent):
- pass
-
-class TaskComponent(PromptComponent):
- def __init__(self, task):
- super().__init__()
- self.task = task
-
- def get_prompt(self, agent):
- return f"""The task you need to execute is: {self.task}.\n"""
-
-
-class OutputComponent(PromptComponent):
- def __init__(self, output):
- super().__init__()
- self.output = output
-
- def get_prompt(self, agent):
- return f"""Please refer to the above and extract the content between <{self.output}> and </{self.output}>; \
- do not produce any additional output, and output in strict accordance with the above format!\n"""
-
-
-class SystemComponent(PromptComponent):
- def __init__(self,system_prompt):
- super().__init__()
- self.system_prompt = system_prompt
-
- def get_prompt(self, agent):
- return self.system_prompt
-
-class LastComponent(PromptComponent):
- def __init__(self, last_prompt):
- super().__init__()
- self.last_prompt = last_prompt
-
- def get_prompt(self, agent):
- return self.last_prompt
-
-
-class StyleComponent(PromptComponent):
- """
- Role and style component
- """
-
- def __init__(self, role):
- super().__init__()
- self.role = role
-
- def get_prompt(self, agent):
- name = agent.name
- style = agent.style
- return f"""Now your role is:\n{self.role}, your name is:\n{name}. \
- You need to follow the output style:\n{style}.\n"""
-
-
-class RuleComponent(PromptComponent):
- def __init__(self, rule):
- super().__init__()
- self.rule = rule
-
- def get_prompt(self, agent):
- return f"""The rule you need to follow is:\n{self.rule}.\n"""
-
-
-class DemonstrationComponent(PromptComponent):
- """
- Input: a list of demonstration examples.
- """
-
- def __init__(self, demonstrations):
- super().__init__()
- self.demonstrations = demonstrations
-
- def add_demonstration(self, demonstration):
- self.demonstrations.append(demonstration)
-
- def get_prompt(self, agent):
- prompt = "Here are demonstrations you can refer to:\n"
- for demonstration in self.demonstrations:
- prompt += "\n" + demonstration
- prompt += "\n"
- return prompt
-
-
-class CoTComponent(PromptComponent):
- """
- Input: a list of chain-of-thought demonstration examples.
- """
-
- def __init__(self, demonstrations):
- super().__init__()
- self.demonstrations = demonstrations
-
- def add_demonstration(self, demonstration):
- self.demonstrations.append(demonstration)
-
- def get_prompt(self, agent):
- prompt = "You need to think in detail before outputting, the thinking case is as follows:\n"
- for demonstration in self.demonstrations:
- prompt += "\n" + demonstration
- prompt += "\n"
- return prompt
-
-
-class CustomizeComponent(PromptComponent):
- """
- Custom template component.
- template (str): example: "i am {name}"
- keywords (list): example: ["name"]
- example: agent.environment.shared_memory["name"] = "Lilong"
- The component looks up each keyword in the environment's shared memory and substitutes it into the template.
- Return: "i am Lilong"
- """
- def __init__(self, template, keywords) -> None:
- super().__init__()
- self.template = template
- self.keywords = keywords
-
- def get_prompt(self, agent):
- template_keyword = {}
- for keyword in self.keywords:
-
- current_keyword = agent.environment.shared_memory[keyword]
- template_keyword[keyword] = current_keyword
- return self.template.format(**template_keyword)
\ No newline at end of file
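The `CustomizeComponent` above looks up each keyword in the agent environment's shared memory and feeds the results to `str.format`. A minimal sketch of that lookup-and-format step, using hypothetical `Agent`/`Env` stand-ins for the real objects:

```python
# Hypothetical stand-ins for the agent/environment objects CustomizeComponent expects
class Env:
    shared_memory = {"name": "Lilong"}

class Agent:
    environment = Env()

template = "i am {name}"  # named placeholder, so format(**values) can fill it
keywords = ["name"]

# Mirrors CustomizeComponent.get_prompt: look up each keyword, then format
values = {k: Agent.environment.shared_memory[k] for k in keywords}
prompt = template.format(**values)
print(prompt)  # i am Lilong
```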
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md b/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is the version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive, transformer-based language model for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details:** See [our paper][arxiv].
-
-**License:** Code is released under MIT; model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
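The Fréchet Audio Distance above is, at its core, the Fréchet distance between two Gaussians fit to embedding statistics (reference set vs. generated set). A simplified sketch for the diagonal-covariance case — the real metric uses VGGish embeddings and full covariance matrices, and `frechet_distance_diag` is an illustrative name:

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    # Fréchet distance between Gaussians with diagonal covariances:
    # ||mu1 - mu2||^2 + sum_i (v1_i + v2_i - 2 * sqrt(v1_i * v2_i))
    d = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    d += sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2) for v1, v2 in zip(var1, var2))
    return d

# Identical statistics give 0; shifting one mean coordinate by 1 gives 1
print(frechet_distance_diag([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(frechet_distance_diag([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [1.0, 1.0]))  # 1.0
```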
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates the end of a song, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py b/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py
deleted file mode 100644
index e8ecf74d725f5e813426116f6a3df6d6aa1fa63c..0000000000000000000000000000000000000000
--- a/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-__all__ = ['is_Rabbit',"learn",'classify_image', 'categories','image','label','examples','intf']
-
-# Cell
-from fastai.vision.all import *
-import gradio as gr
-def is_Rabbit(x): return x[0].isupper()
-
-
-learn = load_learner('model.pkl')
-# Cell
-categories = ('Hare','Rabbit')
-def classify_image(img):
- pred, idx, probs = learn.predict(img)
- return dict(zip(categories, map(float,probs)))
-# Cell
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-examples=['Rabbit.jpg', 'TestRabbit.jpg','Hare.jpg']
-intf = gr.Interface(fn=classify_image,inputs=image, outputs=label, examples=examples)
-intf.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py b/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py
deleted file mode 100644
index 760788cf8cfd8f47ba64c4dbea5a5cb20838e9b6..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py
+++ /dev/null
@@ -1,613 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-PyTorch utils
-"""
-
-import math
-import os
-import platform
-import subprocess
-import time
-import warnings
-from contextlib import contextmanager
-from copy import deepcopy
-from pathlib import Path
-
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.parallel import DistributedDataParallel as DDP
-
-from utils.general import LOGGER, check_version, colorstr, file_date, git_describe
-
-LOCAL_RANK = int(
- os.getenv("LOCAL_RANK", -1)
-) # https://pytorch.org/docs/stable/elastic/run.html
-RANK = int(os.getenv("RANK", -1))
-WORLD_SIZE = int(os.getenv("WORLD_SIZE", 1))
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-# Suppress PyTorch warnings
-warnings.filterwarnings(
- "ignore",
- message="User provided device_type of 'cuda', but CUDA is not available. Disabling",
-)
-warnings.filterwarnings("ignore", category=UserWarning)
-
-
-def smart_inference_mode(torch_1_9=check_version(torch.__version__, "1.9.0")):
- # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator
- def decorate(fn):
- return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn)
-
- return decorate
-
-
-def smartCrossEntropyLoss(label_smoothing=0.0):
- # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0
- if check_version(torch.__version__, "1.10.0"):
- return nn.CrossEntropyLoss(label_smoothing=label_smoothing)
- if label_smoothing > 0:
- LOGGER.warning(
- f"WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0"
- )
- return nn.CrossEntropyLoss()
-
-
-def smart_DDP(model):
- # Model DDP creation with checks
- assert not check_version(torch.__version__, "1.12.0", pinned=True), (
- "torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. "
- "Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395"
- )
- if check_version(torch.__version__, "1.11.0"):
- return DDP(
- model,
- device_ids=[LOCAL_RANK],
- output_device=LOCAL_RANK,
- static_graph=True,
- )
- else:
- return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
-
-
-def reshape_classifier_output(model, n=1000):
- # Update a TorchVision classification model to class count 'n' if required
- from models.common import Classify
-
- name, m = list(
- (model.model if hasattr(model, "model") else model).named_children()
- )[
- -1
- ] # last module
- if isinstance(m, Classify): # YOLOv5 Classify() head
- if m.linear.out_features != n:
- m.linear = nn.Linear(m.linear.in_features, n)
- elif isinstance(m, nn.Linear): # ResNet, EfficientNet
- if m.out_features != n:
- setattr(model, name, nn.Linear(m.in_features, n))
- elif isinstance(m, nn.Sequential):
- types = [type(x) for x in m]
- if nn.Linear in types:
- i = types.index(nn.Linear) # nn.Linear index
- if m[i].out_features != n:
- m[i] = nn.Linear(m[i].in_features, n)
- elif nn.Conv2d in types:
- i = types.index(nn.Conv2d) # nn.Conv2d index
- if m[i].out_channels != n:
- m[i] = nn.Conv2d(
- m[i].in_channels,
- n,
- m[i].kernel_size,
- m[i].stride,
- bias=m[i].bias is not None,
- )
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- # Decorator to make all processes in distributed training wait for each local_master to do something
- if local_rank not in [-1, 0]:
- dist.barrier(device_ids=[local_rank])
- yield
- if local_rank == 0:
- dist.barrier(device_ids=[0])
-
-
-def device_count():
- # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows
- assert platform.system() in (
- "Linux",
- "Windows",
- ), "device_count() only supported on Linux or Windows"
- try:
- cmd = (
- "nvidia-smi -L | wc -l"
- if platform.system() == "Linux"
- else 'nvidia-smi -L | find /c /v ""'
- ) # Windows
- return int(
- subprocess.run(cmd, shell=True, capture_output=True, check=True)
- .stdout.decode()
- .split()[-1]
- )
- except Exception:
- return 0
-
-
-def select_device(device="", batch_size=0, newline=True):
- # device = None or 'cpu' or 0 or '0' or '0,1,2,3'
- s = f"YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} "
- device = (
- str(device).strip().lower().replace("cuda:", "").replace("none", "")
- ) # to string, 'cuda:0' to '0'
- cpu = device == "cpu"
- mps = device == "mps" # Apple Metal Performance Shaders (MPS)
- if cpu or mps:
- os.environ[
- "CUDA_VISIBLE_DEVICES"
- ] = "-1" # force torch.cuda.is_available() = False
- elif device: # non-cpu device requested
- os.environ[
- "CUDA_VISIBLE_DEVICES"
- ] = device # set environment variable - must be before assert is_available()
- assert torch.cuda.is_available() and torch.cuda.device_count() >= len(
- device.replace(",", "")
- ), f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
-
- if (
- not cpu and not mps and torch.cuda.is_available()
- ): # prefer GPU if available
- devices = (
- device.split(",") if device else "0"
- ) # range(torch.cuda.device_count()) # i.e. 0,1,6,7
- n = len(devices) # device count
- if (
- n > 1 and batch_size > 0
- ): # check batch_size is divisible by device_count
- assert (
- batch_size % n == 0
- ), f"batch-size {batch_size} not multiple of GPU count {n}"
- space = " " * (len(s) + 1)
- for i, d in enumerate(devices):
- p = torch.cuda.get_device_properties(i)
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB
- arg = "cuda:0"
- elif (
- mps
- and getattr(torch, "has_mps", False)
- and torch.backends.mps.is_available()
- ): # prefer MPS if available
- s += "MPS\n"
- arg = "mps"
- else: # revert to CPU
- s += "CPU\n"
- arg = "cpu"
-
- if not newline:
- s = s.rstrip()
- LOGGER.info(s)
- return torch.device(arg)
-
-
-def time_sync():
- # PyTorch-accurate time
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
-
-
-def profile(input, ops, n=10, device=None):
- """YOLOv5 speed/memory/FLOPs profiler
- Usage:
- input = torch.randn(16, 3, 640, 640)
- m1 = lambda x: x * torch.sigmoid(x)
- m2 = nn.SiLU()
- profile(input, [m1, m2], n=100) # profile over 100 iterations
- """
- results = []
- if not isinstance(device, torch.device):
- device = select_device(device)
- print(
- f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}"
- f"{'input':>24s}{'output':>24s}"
- )
-
- for x in input if isinstance(input, list) else [input]:
- x = x.to(device)
- x.requires_grad = True
- for m in ops if isinstance(ops, list) else [ops]:
- m = m.to(device) if hasattr(m, "to") else m # device
- m = (
- m.half()
- if hasattr(m, "half")
- and isinstance(x, torch.Tensor)
- and x.dtype is torch.float16
- else m
- )
- tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward
- try:
- flops = (
- thop.profile(m, inputs=(x,), verbose=False)[0] / 1e9 * 2
- ) # GFLOPs
- except Exception:
- flops = 0
-
- try:
- for _ in range(n):
- t[0] = time_sync()
- y = m(x)
- t[1] = time_sync()
- try:
- _ = (
- (
- sum(yi.sum() for yi in y)
- if isinstance(y, list)
- else y
- )
- .sum()
- .backward()
- )
- t[2] = time_sync()
- except Exception: # no backward method
- # print(e) # for debug
- t[2] = float("nan")
- tf += (t[1] - t[0]) * 1000 / n # ms per op forward
- tb += (t[2] - t[1]) * 1000 / n # ms per op backward
- mem = (
- torch.cuda.memory_reserved() / 1e9
- if torch.cuda.is_available()
- else 0
- ) # (GB)
- s_in, s_out = (
- tuple(x.shape) if isinstance(x, torch.Tensor) else "list"
- for x in (x, y)
- ) # shapes
- p = (
- sum(x.numel() for x in m.parameters())
- if isinstance(m, nn.Module)
- else 0
- ) # parameters
- print(
- f"{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}"
- )
- results.append([p, flops, mem, tf, tb, s_in, s_out])
- except Exception as e:
- print(e)
- results.append(None)
- torch.cuda.empty_cache()
- return results
-
-
-def is_parallel(model):
- # Returns True if model is of type DP or DDP
- return type(model) in (
- nn.parallel.DataParallel,
- nn.parallel.DistributedDataParallel,
- )
-
-
-def de_parallel(model):
- # De-parallelize a model: returns single-GPU model if model is of type DP or DDP
- return model.module if is_parallel(model) else model
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
- m.inplace = True
-
-
-def find_modules(model, mclass=nn.Conv2d):
- # Finds layer indices matching module class 'mclass'
- return [
- i for i, m in enumerate(model.module_list) if isinstance(m, mclass)
- ]
-
-
-def sparsity(model):
- # Return global model sparsity
- a, b = 0, 0
- for p in model.parameters():
- a += p.numel()
- b += (p == 0).sum()
- return b / a
-
-
-def prune(model, amount=0.3):
- # Prune model to requested global sparsity
- import torch.nn.utils.prune as prune
-
- for name, m in model.named_modules():
- if isinstance(m, nn.Conv2d):
- prune.l1_unstructured(m, name="weight", amount=amount) # prune
- prune.remove(m, "weight") # make permanent
- LOGGER.info(f"Model pruned to {sparsity(model):.3g} global sparsity")
-
-
-def fuse_conv_and_bn(conv, bn):
- # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
- fusedconv = (
- nn.Conv2d(
- conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- dilation=conv.dilation,
- groups=conv.groups,
- bias=True,
- )
- .requires_grad_(False)
- .to(conv.weight.device)
- )
-
- # Prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # Prepare spatial bias
- b_conv = (
- torch.zeros(conv.weight.size(0), device=conv.weight.device)
- if conv.bias is None
- else conv.bias
- )
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(
- torch.sqrt(bn.running_var + bn.eps)
- )
- fusedconv.bias.copy_(
- torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn
- )
-
- return fusedconv
-
-
-def model_info(model, verbose=False, imgsz=640):
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
- n_g = sum(
- x.numel() for x in model.parameters() if x.requires_grad
- ) # number gradients
- if verbose:
- print(
- f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}"
- )
- for i, (name, p) in enumerate(model.named_parameters()):
- name = name.replace("module_list.", "")
- print(
- "%5g %40s %9s %12g %20s %10.3g %10.3g"
- % (
- i,
- name,
- p.requires_grad,
- p.numel(),
- list(p.shape),
- p.mean(),
- p.std(),
- )
- )
-
- try: # FLOPs
- p = next(model.parameters())
- stride = (
- max(int(model.stride.max()), 32)
- if hasattr(model, "stride")
- else 32
- ) # max stride
- im = torch.empty(
- (1, p.shape[1], stride, stride), device=p.device
- ) # input image in BCHW format
- flops = (
- thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0]
- / 1e9
- * 2
- ) # stride GFLOPs
- imgsz = (
- imgsz if isinstance(imgsz, list) else [imgsz, imgsz]
- ) # expand if int/float
- fs = f", {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs" # 640x640 GFLOPs
- except Exception:
- fs = ""
-
- name = (
- Path(model.yaml_file).stem.replace("yolov5", "YOLOv5")
- if hasattr(model, "yaml_file")
- else "Model"
- )
- LOGGER.info(
- f"{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}"
- )
-
-
-def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
- # Scales img(bs,3,y,x) by ratio constrained to gs-multiple
- if ratio == 1.0:
- return img
- h, w = img.shape[2:]
- s = (int(h * ratio), int(w * ratio)) # new size
- img = F.interpolate(
- img, size=s, mode="bilinear", align_corners=False
- ) # resize
- if not same_shape: # pad/crop img
- h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w))
- return F.pad(
- img, [0, w - s[1], 0, h - s[0]], value=0.447
- ) # value = imagenet mean
-
-
-def copy_attr(a, b, include=(), exclude=()):
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
- for k, v in b.__dict__.items():
- if (
- (len(include) and k not in include)
- or k.startswith("_")
- or k in exclude
- ):
- continue
- else:
- setattr(a, k, v)
-
-
-def smart_optimizer(model, name="Adam", lr=0.001, momentum=0.9, decay=1e-5):
- # YOLOv5 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay
- g = [], [], [] # optimizer parameter groups
- bn = tuple(
- v for k, v in nn.__dict__.items() if "Norm" in k
- ) # normalization layers, i.e. BatchNorm2d()
- for v in model.modules():
- for p_name, p in v.named_parameters(recurse=0):
- if p_name == "bias": # bias (no decay)
- g[2].append(p)
- elif p_name == "weight" and isinstance(v, bn): # weight (no decay)
- g[1].append(p)
- else:
- g[0].append(p) # weight (with decay)
-
- if name == "Adam":
- optimizer = torch.optim.Adam(
- g[2], lr=lr, betas=(momentum, 0.999)
- ) # adjust beta1 to momentum
- elif name == "AdamW":
- optimizer = torch.optim.AdamW(
- g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0
- )
- elif name == "RMSProp":
- optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum)
- elif name == "SGD":
- optimizer = torch.optim.SGD(
- g[2], lr=lr, momentum=momentum, nesterov=True
- )
- else:
- raise NotImplementedError(f"Optimizer {name} not implemented.")
-
- optimizer.add_param_group(
- {"params": g[0], "weight_decay": decay}
- ) # add g0 with weight_decay
- optimizer.add_param_group(
- {"params": g[1], "weight_decay": 0.0}
- ) # add g1 (BatchNorm2d weights)
- LOGGER.info(
- f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups "
- f"{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias"
- )
- return optimizer
-
-
-def smart_hub_load(repo="ultralytics/yolov5", model="yolov5s", **kwargs):
- # YOLOv5 torch.hub.load() wrapper with smart error/issue handling
- if check_version(torch.__version__, "1.9.1"):
- kwargs[
- "skip_validation"
- ] = True # validation causes GitHub API rate limit errors
- if check_version(torch.__version__, "1.12.0"):
- kwargs["trust_repo"] = True # argument required starting in torch 1.12
- try:
- return torch.hub.load(repo, model, **kwargs)
- except Exception:
- return torch.hub.load(repo, model, force_reload=True, **kwargs)
-
-
-def smart_resume(
- ckpt, optimizer, ema=None, weights="yolov5s.pt", epochs=300, resume=True
-):
- # Resume training from a partially trained checkpoint
- best_fitness = 0.0
- start_epoch = ckpt["epoch"] + 1
- if ckpt["optimizer"] is not None:
- optimizer.load_state_dict(ckpt["optimizer"]) # optimizer
- best_fitness = ckpt["best_fitness"]
- if ema and ckpt.get("ema"):
- ema.ema.load_state_dict(ckpt["ema"].float().state_dict()) # EMA
- ema.updates = ckpt["updates"]
- if resume:
- assert start_epoch > 0, (
- f"{weights} training to {epochs} epochs is finished, nothing to resume.\n"
- f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'"
- )
- LOGGER.info(
- f"Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs"
- )
- if epochs < start_epoch:
- LOGGER.info(
- f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs."
- )
- epochs += ckpt["epoch"] # finetune additional epochs
- return best_fitness, start_epoch, epochs
-
-
-class EarlyStopping:
- # YOLOv5 simple early stopper
- def __init__(self, patience=30):
- self.best_fitness = 0.0 # i.e. mAP
- self.best_epoch = 0
- self.patience = patience or float(
- "inf"
- ) # epochs to wait after fitness stops improving to stop
- self.possible_stop = False # possible stop may occur next epoch
-
- def __call__(self, epoch, fitness):
- if (
- fitness >= self.best_fitness
- ): # >= 0 to allow for early zero-fitness stage of training
- self.best_epoch = epoch
- self.best_fitness = fitness
- delta = epoch - self.best_epoch # epochs without improvement
- self.possible_stop = delta >= (
- self.patience - 1
- ) # possible stop may occur next epoch
- stop = delta >= self.patience # stop training if patience exceeded
- if stop:
- LOGGER.info(
- f"Stopping training early as no improvement observed in last {self.patience} epochs. "
- f"Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n"
- f"To update EarlyStopping(patience={self.patience}) pass a new patience value, "
- f"i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping."
- )
- return stop
-
-
-class ModelEMA:
- """Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models
- Keeps a moving average of everything in the model state_dict (parameters and buffers)
- For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- """
-
- def __init__(self, model, decay=0.9999, tau=2000, updates=0):
- # Create EMA
- self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA
- self.updates = updates # number of EMA updates
- self.decay = lambda x: decay * (
- 1 - math.exp(-x / tau)
- ) # decay exponential ramp (to help early epochs)
- for p in self.ema.parameters():
- p.requires_grad_(False)
-
- def update(self, model):
- # Update EMA parameters
- self.updates += 1
- d = self.decay(self.updates)
-
- msd = de_parallel(model).state_dict() # model state_dict
- for k, v in self.ema.state_dict().items():
- if v.dtype.is_floating_point: # true for FP16 and FP32
- v *= d
- v += (1 - d) * msd[k].detach()
- # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32'
-
- def update_attr(
- self, model, include=(), exclude=("process_group", "reducer")
- ):
- # Update EMA attributes
- copy_attr(self.ema, model, include, exclude)
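The `ModelEMA` class above ramps its effective decay with the update count so the EMA tracks the raw weights closely early in training. A small sketch of that schedule (same formula as the `self.decay` lambda; `ema_decay` is an illustrative name):

```python
import math

def ema_decay(updates, decay=0.9999, tau=2000):
    # Exponential ramp used by ModelEMA: decay * (1 - exp(-updates / tau))
    return decay * (1.0 - math.exp(-updates / tau))

# Effective decay starts near 0 and approaches 0.9999 as updates grow:
# ema_decay(1) ≈ 0.0005, ema_decay(2000) ≈ 0.6321, ema_decay(20000) ≈ 0.9999
for n in (1, 2000, 20000):
    print(n, round(ema_decay(n), 4))
```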
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts
deleted file mode 100644
index 29f5f4dfa623ada8e806d11e23fd9aec08a2694f..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts
+++ /dev/null
@@ -1,9 +0,0 @@
-import type * as Kit from '@sveltejs/kit';
-
-type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
-type RouteParams = { id: string; messageId: string }
-type RouteId = '/conversation/[id]/message/[messageId]/prompt';
-
-export type EntryGenerator = () => Promise<Array<RouteParams>> | Array<RouteParams>;
-export type RequestHandler = Kit.RequestHandler<RouteParams, RouteId>;
-export type RequestEvent = Kit.RequestEvent<RouteParams, RouteId>;
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts
deleted file mode 100644
index 2998f79e6939f16f6d5c6ff2967bead5729470e7..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts
+++ /dev/null
@@ -1,39 +0,0 @@
-import { HF_ACCESS_TOKEN } from "$env/static/private";
-import { HfInference } from "@huggingface/inference";
-import { defaultModel } from "$lib/server/models";
-import type { BackendModel } from "../models";
-import { generateFromDefaultEndpoint } from "../generateFromDefaultEndpoint";
-
-export async function summarizeWeb(content: string, query: string, model: BackendModel) {
- // if HF_ACCESS_TOKEN is set, we use a HF dedicated endpoint for summarization
- try {
- if (HF_ACCESS_TOKEN) {
- const summary = (
- await new HfInference(HF_ACCESS_TOKEN).summarization({
- model: "facebook/bart-large-cnn",
- inputs: content,
- parameters: {
- max_length: 512,
- },
- })
- ).summary_text;
- return summary;
- }
- } catch (e) {
- console.log(e);
- }
-
- // else we use the LLM to generate a summary
- const summaryPrompt = defaultModel.webSearchSummaryPromptRender({
- answer: content
- .split(" ")
- .slice(0, model.parameters?.truncate ?? 0)
- .join(" "),
- query: query,
- });
- const summary = await generateFromDefaultEndpoint(summaryPrompt).then((txt: string) =>
- txt.trim()
- );
-
- return summary;
-}
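The control flow in `summarizeWeb` (try the dedicated summarization endpoint, log and fall back to the LLM on any failure) can be sketched language-agnostically. A minimal Python version with stub callables — both stand-ins, not real API clients:

```python
def summarize_with_fallback(content, primary, fallback):
    # Mirrors summarizeWeb above: prefer the dedicated endpoint, but never
    # let its failure surface - log it and use the fallback path instead.
    try:
        return primary(content)
    except Exception as err:
        print(err)  # the original logs the error and continues
    return fallback(content)

def broken_endpoint(text):
    # Stand-in for an unreachable HF inference endpoint
    raise RuntimeError("endpoint unavailable")

summary = summarize_with_fallback("long article text", broken_endpoint, lambda t: t[:12])
```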
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js
deleted file mode 100644
index 85a00df87a18e3c8e01e12bdbe1d863895bb2340..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js
+++ /dev/null
@@ -1,13 +0,0 @@
-var SetSwatchColor = function (swatch, color) {
- if (!swatch) {
- return;
- }
-
- if (swatch.setTint) {
- swatch.setTint(color);
- } else if (swatch.setFillStyle) {
- swatch.setFillStyle(color);
- }
-}
-
-export default SetSwatchColor;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts
deleted file mode 100644
index 7d9d3770293b1e04a4956c7ccfbfd7ed11806e2c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { ModalBehavoir, Modal, ModalPromise, ModalClose } from '../../../plugins/modal';
-export { ModalBehavoir, Modal, ModalPromise, ModalClose };
\ No newline at end of file
diff --git a/spaces/Alesmikes/Elvirespeak/app.py b/spaces/Alesmikes/Elvirespeak/app.py
deleted file mode 100644
index 4aa96bb395ac671f23ee99a6151b613c2f7051fa..0000000000000000000000000000000000000000
--- a/spaces/Alesmikes/Elvirespeak/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-"""
-This model only supports English, since the text-to-speech model is English-only.
-"""
-from google.cloud import texttospeech
-import os
-import openai
-import gradio as gr
-from dotenv import load_dotenv
-import pinecone
-
-"""
-login to gcp
-"""
-os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "gcp_access_key.json"
-# Instantiates a client
-client = texttospeech.TextToSpeechClient()
-
-"""
-Connecting to Open AI API
-"""
-load_dotenv()
-openai.organization = os.getenv("OPENAI_ORG")
-openai.api_key = os.getenv("OPENAI_API_KEY")
-EMBEDDING_MODEL = "text-embedding-ada-002"
-"""
-Connecting to the Pinecone API and assigning an index
-"""
-index_name = 'economic-forecast'
-pinecone.init(
- api_key=os.getenv("Pinecone_KEY"),
- environment=os.getenv("Pinecone_ENV")
-)
-
-## initialize a first message to define GPT's role
-
-
-"""
-define the text -> speech function
-"""
-def text2speech(text):
-
- # Set the text input to be synthesized
- synthesis_input = texttospeech.SynthesisInput(text=text)
-
- # Build the voice request, select the language code ("en-US") and the ssml
- # voice gender ("neutral")
- voice = texttospeech.VoiceSelectionParams(
- language_code="en-US", name="en-US-News-K", ssml_gender=texttospeech.SsmlVoiceGender.FEMALE
- )
-
- # Select the type of audio file you want returned
- audio_config = texttospeech.AudioConfig(
- audio_encoding=texttospeech.AudioEncoding.MP3
- )
-
- # Perform the text-to-speech request on the text input with the selected
- # voice parameters and audio file type
- response = client.synthesize_speech(
- input=synthesis_input, voice=voice, audio_config=audio_config
- )
- # The response's audio_content is binary.
- with open("output.mp3", "wb") as out:
- # Write the response to the output file.
- out.write(response.audio_content)
- print('Audio content written to file "output.mp3"')
-
-"""
-define voice -> gpt -> text -> voice workflow
-"""
-def transcribe(audio):
-
- """
- gradio output file doesn't have .wav so rename the file to the correct format
- """
- extension = ".wav"
- audioformatted = f"{audio}{extension}"
- os.rename(audio, audioformatted)
-
- """
- pass the audio file to whisper to transcribe
- """
- audio_file = open(audioformatted, "rb")
- transcript = openai.Audio.transcribe("whisper-1", audio_file)
-
-
- """
- run cosine similarity to find context
- """
- ### Input the question and search for the relevant text
- index = pinecone.Index(index_name)
- query = openai.Embedding.create(input=transcript["text"], model=EMBEDDING_MODEL)["data"][0]["embedding"] # embed the user query into an embedding vector
- res = index.query(query, top_k=3, include_metadata=True) # run cosine similarity to search the most relevant embedded content; this is done in Pinecone only
- contexts = [
- x['metadata']['text'] for x in res['matches']
- ]
- merged_context = "".join(contexts)
- contextwithQuestion = "Context: " + "\n"+ merged_context + "*End of the context*" + "\n\n" + "Question: " + transcript["text"]
-
-
- """
- pass the transcripted text to GPT
- """
- messages = [
- {"role": "system",
- "content":
- "You are Elvire, a forest oracle dedicated to sharing her knowledge with accidental strangers.\
- "}
-]
- messages.append({"role": "user", "content":contextwithQuestion}) ## add user input to the list of message
-
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=messages
- ) ## pass the list of message to GPT
-
- messages.append({"role": "assistant", "content":response["choices"][0]["message"]["content"]}) ## add GPT response to the list of message
- text2speech(response["choices"][0]["message"]["content"]) ## create mp3 voice output
-
- voice_path = os.path.abspath("output.mp3")
-
- return voice_path, "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages])
-
-
-audio_input = gr.inputs.Audio(source="microphone", type="filepath", label="Speak here...")
-chat_output = gr.outputs.Textbox(label="Chat Messages")
-audio_output = gr.outputs.Audio(type="filepath", label="Synthesized Voice")
-
-gr.Interface(fn=transcribe,
- inputs=audio_input,
- outputs=[audio_output, chat_output],
- live=True,
- allow_flagging=False).launch()
\ No newline at end of file
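The retrieval step above delegates cosine similarity to Pinecone. As a self-contained illustration of what that search computes, here is a toy version with hand-made 2-D vectors standing in for the ada-002 embeddings (the corpus texts and vectors are invented):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=3):
    # corpus: list of (text, vector) pairs; mirrors index.query(..., top_k=3)
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

corpus = [
    ("GDP grew 2% in Q3", [1.0, 0.1]),
    ("Rates held steady", [0.2, 1.0]),
    ("Inflation eased", [0.9, 0.3]),
]
contexts = top_k([1.0, 0.0], corpus, k=2)
merged_context = "".join(contexts)
```

The top matches are then concatenated into `merged_context`, just as the app does before prompting GPT.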
diff --git a/spaces/Amrrs/pdf-table-extractor/app.py b/spaces/Amrrs/pdf-table-extractor/app.py
deleted file mode 100644
index 7f439d37fa694685b129aee76553768f81f5af24..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/pdf-table-extractor/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import streamlit as st # data app development
-import subprocess # process in the os
-from subprocess import STDOUT, check_call # os process manipulation
-import os # os process manipulation
-import base64 # byte object into a pdf file
-import camelot as cam # extracting tables from PDFs
-
-# to run this only once and it's cached
-@st.cache
-def gh():
- """install ghostscript on the linux machine"""
- proc = subprocess.Popen('apt-get install -y ghostscript', shell=True, stdin=None, stdout=open(os.devnull,"wb"), stderr=STDOUT, executable="/bin/bash")
- proc.wait()
-
-gh()
-
-
-
-st.title("PDF Table Extractor")
-st.subheader("with `Camelot` Python library")
-
-st.image("https://raw.githubusercontent.com/camelot-dev/camelot/master/docs/_static/camelot.png", width=200)
-
-
-# file uploader on streamlit
-
-input_pdf = st.file_uploader(label = "upload your pdf here", type = 'pdf')
-
-st.markdown("### Page Number")
-
-page_number = st.text_input("Enter the page number you want to extract tables from, e.g. 3", value = "1")
-
-# run this only when a PDF is uploaded
-
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
-     # write the uploaded bytes straight to disk; no base64 round trip is needed
-     f.write(input_pdf.read())
-
- # read the pdf and parse it using stream
- table = cam.read_pdf("input.pdf", pages = page_number, flavor = 'stream')
-
- st.markdown("### Number of Tables")
-
- # display the output after parsing
- st.write(table)
-
- # display the table
-
- if len(table) > 0:
-
- # extract the index value of the table
-
- option = st.selectbox(label = "Select the Table to be displayed", options = range(1, len(table) + 1))
-
- st.markdown('### Output Table')
-
- # display the dataframe
-
- st.dataframe(table[int(option)-1].df)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py
deleted file mode 100644
index 488e3b615318787751cab3211e38dd9471c666be..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py
+++ /dev/null
@@ -1,335 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-def scale_boxes(bboxes, scale):
- """Expand an array of boxes by a given scale.
-
- Args:
- bboxes (Tensor): Shape (m, 4)
- scale (float): The scale factor of bboxes
-
- Returns:
- (Tensor): Shape (m, 4). Scaled bboxes
- """
- assert bboxes.size(1) == 4
- w_half = (bboxes[:, 2] - bboxes[:, 0]) * .5
- h_half = (bboxes[:, 3] - bboxes[:, 1]) * .5
- x_c = (bboxes[:, 2] + bboxes[:, 0]) * .5
- y_c = (bboxes[:, 3] + bboxes[:, 1]) * .5
-
- w_half *= scale
- h_half *= scale
-
- boxes_scaled = torch.zeros_like(bboxes)
- boxes_scaled[:, 0] = x_c - w_half
- boxes_scaled[:, 2] = x_c + w_half
- boxes_scaled[:, 1] = y_c - h_half
- boxes_scaled[:, 3] = y_c + h_half
- return boxes_scaled
-
-
-def is_located_in(points, bboxes):
- """Are points located in bboxes.
-
- Args:
- points (Tensor): Points, shape: (m, 2).
- bboxes (Tensor): Bounding boxes, shape: (n, 4).
-
- Return:
- Tensor: Flags indicating if points are located in bboxes, shape: (m, n).
- """
- assert points.size(1) == 2
- assert bboxes.size(1) == 4
- return (points[:, 0].unsqueeze(1) > bboxes[:, 0].unsqueeze(0)) & \
- (points[:, 0].unsqueeze(1) < bboxes[:, 2].unsqueeze(0)) & \
- (points[:, 1].unsqueeze(1) > bboxes[:, 1].unsqueeze(0)) & \
- (points[:, 1].unsqueeze(1) < bboxes[:, 3].unsqueeze(0))
-
-
-def bboxes_area(bboxes):
- """Compute the area of an array of bboxes.
-
- Args:
- bboxes (Tensor): The coordinates of bboxes. Shape: (m, 4)
-
- Returns:
- Tensor: Area of the bboxes. Shape: (m, )
- """
- assert bboxes.size(1) == 4
- w = (bboxes[:, 2] - bboxes[:, 0])
- h = (bboxes[:, 3] - bboxes[:, 1])
- areas = w * h
- return areas
-
-
-@BBOX_ASSIGNERS.register_module()
-class CenterRegionAssigner(BaseAssigner):
- """Assign pixels at the center region of a bbox as positive.
-
- Each proposal will be assigned with `-1`, `0`, or a positive integer
- indicating the ground truth index.
- - -1: negative samples
- - semi-positive numbers: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_scale (float): Threshold within which pixels are
- labelled as positive.
- neg_scale (float): Threshold above which pixels are
- labelled as negative.
- min_pos_iof (float): Minimum iof of a pixel with a gt to be
- labelled as positive. Default: 1e-2
- ignore_gt_scale (float): Threshold within which the pixels
- are ignored when the gt is labelled as shadowed. Default: 0.5
- foreground_dominate (bool): If True, the bbox will be assigned as
- positive when a gt's kernel region overlaps with another's shadowed
- (ignored) region, otherwise it is set as ignored. Default to False.
- """
-
- def __init__(self,
- pos_scale,
- neg_scale,
- min_pos_iof=1e-2,
- ignore_gt_scale=0.5,
- foreground_dominate=False,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_scale = pos_scale
- self.neg_scale = neg_scale
- self.min_pos_iof = min_pos_iof
- self.ignore_gt_scale = ignore_gt_scale
- self.foreground_dominate = foreground_dominate
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def get_gt_priorities(self, gt_bboxes):
- """Get gt priorities according to their areas.
-
- Smaller gt has higher priority.
-
- Args:
- gt_bboxes (Tensor): Ground truth boxes, shape (k, 4).
-
- Returns:
- Tensor: The priority of gts so that gts with larger priority are \
- more likely to be assigned. Shape (k, )
- """
- gt_areas = bboxes_area(gt_bboxes)
- # Rank all gt bbox areas. Smaller objects have larger priority
- _, sort_idx = gt_areas.sort(descending=True)
- sort_idx = sort_idx.argsort()
- return sort_idx
-
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign gt to bboxes.
-
- This method assigns gts to every bbox (proposal/anchor), each bbox \
- will be assigned with -1, or a semi-positive number. -1 means \
- negative sample, semi-positive number is the index (0-based) of \
- assigned gt.
-
- Args:
- bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4).
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (tensor, optional): Label of gt_bboxes, shape (num_gts,).
-
- Returns:
- :obj:`AssignResult`: The assigned result. Note that \
- shadowed_labels of shape (N, 2) is also added as an \
- `assign_result` attribute. `shadowed_labels` is a tensor \
- composed of N pairs of [anchor_ind, class_label], where N \
- is the number of anchors that lie in the outer region of a \
- gt, anchor_ind is the shadowed anchor index and class_label \
- is the shadowed class label.
-
- Example:
- >>> self = CenterRegionAssigner(0.2, 0.2)
- >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
- >>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]])
- >>> assign_result = self.assign(bboxes, gt_bboxes)
- >>> expected_gt_inds = torch.LongTensor([1, 0])
- >>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- """
- # There are in total 5 steps in the pixel assignment
- # 1. Find core (the center region, say inner 0.2)
- # and shadow (the relatively outer part, say inner 0.2-0.5)
- # regions of every gt.
- # 2. Find all prior bboxes that lie in gt_core and gt_shadow regions
- # 3. Assign prior bboxes in gt_core with a one-hot id of the gt in
- # the image.
- # 3.1. For overlapping objects, the prior bboxes in gt_core is
- # assigned with the object with smallest area
- # 4. Assign prior bboxes with class label according to its gt id.
- # 4.1. Assign -1 to prior bboxes lying in shadowed gts
- # 4.2. Assign positive prior boxes with the corresponding label
- # 5. Find pixels lying in the shadow of an object and assign them with
- # background label, but set the loss weight of its corresponding
- # gt to zero.
- assert bboxes.size(1) == 4, 'bboxes must have size of 4'
- # 1. Find core positive and shadow region of every gt
- gt_core = scale_boxes(gt_bboxes, self.pos_scale)
- gt_shadow = scale_boxes(gt_bboxes, self.neg_scale)
-
- # 2. Find prior bboxes that lie in gt_core and gt_shadow regions
- bbox_centers = (bboxes[:, 2:4] + bboxes[:, 0:2]) / 2
- # The center points lie within the gt boxes
- is_bbox_in_gt = is_located_in(bbox_centers, gt_bboxes)
- # Only calculate bbox and gt_core IoF. This enables small prior bboxes
- # to match large gts
- bbox_and_gt_core_overlaps = self.iou_calculator(
- bboxes, gt_core, mode='iof')
- # The center point of effective priors should be within the gt box
- is_bbox_in_gt_core = is_bbox_in_gt & (
- bbox_and_gt_core_overlaps > self.min_pos_iof) # shape (n, k)
-
- is_bbox_in_gt_shadow = (
- self.iou_calculator(bboxes, gt_shadow, mode='iof') >
- self.min_pos_iof)
- # Rule out center effective positive pixels
- is_bbox_in_gt_shadow &= (~is_bbox_in_gt_core)
-
- num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0)
- if num_gts == 0 or num_bboxes == 0:
- # If no gts exist, assign all pixels to negative
- assigned_gt_ids = \
- is_bbox_in_gt_core.new_zeros((num_bboxes,),
- dtype=torch.long)
- pixels_in_gt_shadow = assigned_gt_ids.new_empty((0, 2))
- else:
- # Step 3: assign a one-hot gt id to each pixel, and smaller objects
- # have high priority to assign the pixel.
- sort_idx = self.get_gt_priorities(gt_bboxes)
- assigned_gt_ids, pixels_in_gt_shadow = \
- self.assign_one_hot_gt_indices(is_bbox_in_gt_core,
- is_bbox_in_gt_shadow,
- gt_priority=sort_idx)
-
- if gt_bboxes_ignore is not None and gt_bboxes_ignore.numel() > 0:
- # Mark bboxes whose centers fall in scaled ignored-gt regions
- gt_bboxes_ignore = scale_boxes(
- gt_bboxes_ignore, scale=self.ignore_gt_scale)
- is_bbox_in_ignored_gts = is_located_in(bbox_centers,
- gt_bboxes_ignore)
- is_bbox_in_ignored_gts = is_bbox_in_ignored_gts.any(dim=1)
- assigned_gt_ids[is_bbox_in_ignored_gts] = -1
-
- # 4. Assign prior bboxes with class label according to its gt id.
- assigned_labels = None
- shadowed_pixel_labels = None
- if gt_labels is not None:
- # Default assigned label is the background (-1)
- assigned_labels = assigned_gt_ids.new_full((num_bboxes, ), -1)
- pos_inds = torch.nonzero(
- assigned_gt_ids > 0, as_tuple=False).squeeze()
- if pos_inds.numel() > 0:
- assigned_labels[pos_inds] = gt_labels[assigned_gt_ids[pos_inds]
- - 1]
- # 5. Find pixels lying in the shadow of an object
- shadowed_pixel_labels = pixels_in_gt_shadow.clone()
- if pixels_in_gt_shadow.numel() > 0:
- pixel_idx, gt_idx =\
- pixels_in_gt_shadow[:, 0], pixels_in_gt_shadow[:, 1]
- assert (assigned_gt_ids[pixel_idx] != gt_idx).all(), \
- 'Some pixels are dually assigned to ignore and gt!'
- shadowed_pixel_labels[:, 1] = gt_labels[gt_idx - 1]
- override = (
- assigned_labels[pixel_idx] == shadowed_pixel_labels[:, 1])
- if self.foreground_dominate:
- # When a pixel is both positive and shadowed, set it as pos
- shadowed_pixel_labels = shadowed_pixel_labels[~override]
- else:
- # When a pixel is both pos and shadowed, set it as shadowed
- assigned_labels[pixel_idx[override]] = -1
- assigned_gt_ids[pixel_idx[override]] = 0
-
- assign_result = AssignResult(
- num_gts, assigned_gt_ids, None, labels=assigned_labels)
- # Add shadowed_labels as assign_result property. Shape: (num_shadow, 2)
- assign_result.set_extra_property('shadowed_labels',
- shadowed_pixel_labels)
- return assign_result
-
- def assign_one_hot_gt_indices(self,
- is_bbox_in_gt_core,
- is_bbox_in_gt_shadow,
- gt_priority=None):
- """Assign only one gt index to each prior box.
-
- Gts with large gt_priority are more likely to be assigned.
-
- Args:
- is_bbox_in_gt_core (Tensor): Bool tensor indicating the bbox center
- is in the core area of a gt (e.g. 0-0.2).
- Shape: (num_prior, num_gt).
- is_bbox_in_gt_shadow (Tensor): Bool tensor indicating the bbox
- center is in the shadowed area of a gt (e.g. 0.2-0.5).
- Shape: (num_prior, num_gt).
- gt_priority (Tensor): Priorities of gts. The gt with a higher
- priority is more likely to be assigned to the bbox when the bbox
- match with multiple gts. Shape: (num_gt, ).
-
- Returns:
- tuple: Returns (assigned_gt_inds, shadowed_gt_inds).
-
- - assigned_gt_inds: The assigned gt index of each prior bbox \
- (i.e. index from 1 to num_gts). Shape: (num_prior, ).
- - shadowed_gt_inds: shadowed gt indices. It is a tensor of \
- shape (num_ignore, 2) with first column being the \
- shadowed prior bbox indices and the second column the \
- shadowed gt indices (1-based).
- """
- num_bboxes, num_gts = is_bbox_in_gt_core.shape
-
- if gt_priority is None:
- gt_priority = torch.arange(
- num_gts, device=is_bbox_in_gt_core.device)
- assert gt_priority.size(0) == num_gts
- # The bigger gt_priority, the more preferable to be assigned
- # The assigned inds are by default 0 (background)
- assigned_gt_inds = is_bbox_in_gt_core.new_zeros((num_bboxes, ),
- dtype=torch.long)
- # Shadowed bboxes are assigned to be background. But the corresponding
- # label is ignored during loss calculation, which is done through
- # shadowed_gt_inds
- shadowed_gt_inds = torch.nonzero(is_bbox_in_gt_shadow, as_tuple=False)
- if is_bbox_in_gt_core.sum() == 0: # No gt match
- shadowed_gt_inds[:, 1] += 1 # 1-based. For consistency issue
- return assigned_gt_inds, shadowed_gt_inds
-
- # The priority of each prior box and gt pair. If one prior box
- # matches multiple gts, only the pair with the highest priority
- # is saved
- pair_priority = is_bbox_in_gt_core.new_full((num_bboxes, num_gts),
- -1,
- dtype=torch.long)
-
- # Each bbox could match with multiple gts.
- # The following codes deal with this situation
- # Matched bboxes (to any gt). Shape: (num_pos_anchor, )
- inds_of_match = torch.any(is_bbox_in_gt_core, dim=1)
- # The matched gt index of each positive bbox. Length >= num_pos_anchor
- # , since one bbox could match multiple gts
- matched_bbox_gt_inds = torch.nonzero(
- is_bbox_in_gt_core, as_tuple=False)[:, 1]
- # Assign priority to each bbox-gt pair.
- pair_priority[is_bbox_in_gt_core] = gt_priority[matched_bbox_gt_inds]
- _, argmax_priority = pair_priority[inds_of_match].max(dim=1)
- assigned_gt_inds[inds_of_match] = argmax_priority + 1 # 1-based
- # Zero-out the assigned anchor box to filter the shadowed gt indices
- is_bbox_in_gt_core[inds_of_match, argmax_priority] = 0
- # Concat the shadowed indices due to overlap with regions outside
- # the effective scale. shape: (total_num_ignore, 2)
- shadowed_gt_inds = torch.cat(
- (shadowed_gt_inds, torch.nonzero(
- is_bbox_in_gt_core, as_tuple=False)),
- dim=0)
- # `is_bbox_in_gt_core` should be changed back to keep arguments intact.
- is_bbox_in_gt_core[inds_of_match, argmax_priority] = 1
- # 1-based shadowed gt indices, to be consistent with `assigned_gt_inds`
- if shadowed_gt_inds.numel() > 0:
- shadowed_gt_inds[:, 1] += 1
- return assigned_gt_inds, shadowed_gt_inds
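`scale_boxes` shrinks or grows boxes about their centers, which is how the core (`pos_scale`) and shadow (`neg_scale`) regions are derived from each gt box. The same arithmetic for a single box, in dependency-free Python:

```python
def scale_box(box, scale):
    # box: (x1, y1, x2, y2); scales width and height about the center,
    # the same arithmetic as scale_boxes above.
    x1, y1, x2, y2 = box
    w_half = (x2 - x1) * 0.5 * scale
    h_half = (y2 - y1) * 0.5 * scale
    x_c = (x2 + x1) * 0.5
    y_c = (y2 + y1) * 0.5
    return (x_c - w_half, y_c - h_half, x_c + w_half, y_c + h_half)

# With pos_scale=0.2, only the central 20%-sized region of the gt box
# counts as the positive "core".
core = scale_box((0.0, 0.0, 10.0, 10.0), 0.2)
```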
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 155e28f42194112703bb21473e5e3dd0fca40d49..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/gcnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index c6e7e58508f31627766b8ab748bd81cd51c77eca..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './pspnet_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py
deleted file mode 100644
index bfa5d4f580b65d40c0dfa3b32ec6b5d940783f03..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import asyncio
-import html
-import json
-import sys
-
-try:
- import websockets
-except ImportError:
-    print("Websockets package not found. Make sure it's installed.")
-    sys.exit(1)
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/chat-stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream'
-
-
-async def run(user_input, history):
- # Note: the selected defaults change from time to time.
- request = {
- 'user_input': user_input,
- 'max_new_tokens': 250,
- 'auto_max_new_tokens': False,
- 'max_tokens_second': 0,
- 'history': history,
- 'mode': 'instruct', # Valid options: 'chat', 'chat-instruct', 'instruct'
- 'character': 'Example',
- 'instruction_template': 'Vicuna-v1.1', # Will get autodetected if unset
- 'your_name': 'You',
- # 'name1': 'name of user', # Optional
- # 'name2': 'name of character', # Optional
- # 'context': 'character context', # Optional
- # 'greeting': 'greeting', # Optional
- # 'name1_instruct': 'You', # Optional
- # 'name2_instruct': 'Assistant', # Optional
- # 'context_instruct': 'context_instruct', # Optional
- # 'turn_template': 'turn_template', # Optional
- 'regenerate': False,
- '_continue': False,
- 'chat_instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
-
- # Generation params. If 'preset' is set to different than 'None', the values
- # in presets/preset-name.yaml are used instead of the individual numbers.
- 'preset': 'None',
- 'do_sample': True,
- 'temperature': 0.7,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'epsilon_cutoff': 0, # In units of 1e-4
- 'eta_cutoff': 0, # In units of 1e-4
- 'tfs': 1,
- 'top_a': 0,
- 'repetition_penalty': 1.18,
- 'repetition_penalty_range': 0,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'mirostat_mode': 0,
- 'mirostat_tau': 5,
- 'mirostat_eta': 0.1,
- 'grammar_string': '',
- 'guidance_scale': 1,
- 'negative_prompt': '',
-
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'custom_token_bans': '',
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- async with websockets.connect(URI, ping_interval=None) as websocket:
- await websocket.send(json.dumps(request))
-
- while True:
- incoming_data = await websocket.recv()
- incoming_data = json.loads(incoming_data)
-
- match incoming_data['event']:
- case 'text_stream':
- yield incoming_data['history']
- case 'stream_end':
- return
-
-
-async def print_response_stream(user_input, history):
- cur_len = 0
- async for new_history in run(user_input, history):
- cur_message = new_history['visible'][-1][1][cur_len:]
- cur_len += len(cur_message)
- print(html.unescape(cur_message), end='')
- sys.stdout.flush() # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
- user_input = "Please give me a step-by-step guide on how to plant a tree in my backyard."
-
- # Basic example
- history = {'internal': [], 'visible': []}
-
- # "Continue" example. Make sure to set '_continue' to True above
- # arr = [user_input, 'Surely, here is']
- # history = {'internal': [arr], 'visible': [arr]}
-
- asyncio.run(print_response_stream(user_input, history))
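`print_response_stream` keeps a running `cur_len` and prints only the unseen suffix of each streamed history snapshot. The slicing trick in isolation, with toy string snapshots standing in for the websocket events:

```python
def stream_deltas(snapshots):
    # Each snapshot is the full message so far; yield only the unseen tail,
    # like cur_message = new_history['visible'][-1][1][cur_len:] above.
    cur_len = 0
    for snap in snapshots:
        delta = snap[cur_len:]
        cur_len += len(delta)
        yield delta

chunks = list(stream_deltas(["Hel", "Hello", "Hello, world"]))
```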
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py
deleted file mode 100644
index 306ab227d093c29dd9fb62b49b7cbd140b143788..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import time
-from abc import abstractmethod
-from typing import List, Tuple
-
-import torch
-from huggingface_hub import hf_hub_download
-from PIL import Image
-from transformers import CLIPImageProcessor, CLIPVisionModel
-
-from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline
-from modules import shared
-from modules.logging_colors import logger
-from modules.text_generation import encode
-
-
-class LLaVA_v0_Pipeline(AbstractMultimodalPipeline):
- CLIP_REPO = "openai/clip-vit-large-patch14"
-
- def __init__(self, params: dict) -> None:
- super().__init__()
- self.clip_device = self._get_device("vision_device", params)
- self.clip_dtype = self._get_dtype("vision_bits", params)
- self.projector_device = self._get_device("projector_device", params)
- self.projector_dtype = self._get_dtype("projector_bits", params)
- self.image_processor, self.vision_tower, self.mm_projector = self._load_models()
-
- def _load_models(self):
- start_ts = time.time()
-
- logger.info(f"LLaVA - Loading CLIP from {LLaVA_v0_Pipeline.CLIP_REPO} as {self.clip_dtype} on {self.clip_device}...")
- image_processor = CLIPImageProcessor.from_pretrained(LLaVA_v0_Pipeline.CLIP_REPO, torch_dtype=self.clip_dtype)
- vision_tower = CLIPVisionModel.from_pretrained(LLaVA_v0_Pipeline.CLIP_REPO, torch_dtype=self.clip_dtype).to(self.clip_device)
-
- logger.info(f"LLaVA - Loading projector from {self.llava_projector_repo()} as {self.projector_dtype} on {self.projector_device}...")
- projector_path = hf_hub_download(self.llava_projector_repo(), self.llava_projector_filename())
- mm_projector = torch.nn.Linear(*self.llava_projector_shape())
- projector_data = torch.load(projector_path)
- mm_projector.weight = torch.nn.Parameter(projector_data['model.mm_projector.weight'].to(dtype=self.projector_dtype), False)
- mm_projector.bias = torch.nn.Parameter(projector_data['model.mm_projector.bias'].to(dtype=self.projector_dtype), False)
- mm_projector = mm_projector.to(self.projector_device)
-
- logger.info(f"LLaVA supporting models loaded, took {time.time() - start_ts:.2f} seconds")
- return image_processor, vision_tower, mm_projector
-
- @staticmethod
- def image_start() -> str:
- return "<im_start>"
-
- @staticmethod
- def image_end() -> str:
- return "<im_end>"
-
- @staticmethod
- def num_image_embeds() -> int:
- return 256
-
- @staticmethod
- def embed_tokens(input_ids: torch.Tensor) -> torch.Tensor:
- for attr in ['', 'model', 'model.model', 'model.model.model']:
- tmp = getattr(shared.model, attr, None) if attr != '' else shared.model
- if tmp is not None and hasattr(tmp, 'embed_tokens'):
- func = tmp.embed_tokens
- break
- else:
- raise ValueError('The embed_tokens method has not been found for this loader.')
-
- return func(input_ids).to(shared.model.device, dtype=shared.model.dtype)
-
- @staticmethod
- def placeholder_embeddings() -> torch.Tensor:
- return LLaVA_v0_Pipeline.embed_tokens(encode("<im_patch>"*256, add_bos_token=False)[0])
-
- def embed_images(self, images: List[Image.Image]) -> torch.Tensor:
- images = self.image_processor(images, return_tensors='pt')['pixel_values']
- images = images.to(self.clip_device, dtype=self.clip_dtype)
-
- with torch.no_grad():
- image_forward_outs = self.vision_tower(images, output_hidden_states=True)
- select_hidden_state_layer = -2
- select_hidden_state = image_forward_outs.hidden_states[select_hidden_state_layer]
- image_features = select_hidden_state[:, 1:].to(self.projector_device, dtype=self.projector_dtype)
- image_features = self.mm_projector(image_features)
- return image_features.to(shared.model.device, dtype=shared.model.dtype)
-
- @staticmethod
- @abstractmethod
- def llava_projector_repo() -> str:
- pass
-
- @staticmethod
- @abstractmethod
- def llava_projector_filename() -> str:
- pass
-
- @staticmethod
- @abstractmethod
- def llava_projector_shape() -> Tuple[int, int]:
- pass
-
-
-class LLaVA_v0_13B_Pipeline(LLaVA_v0_Pipeline):
- def __init__(self, params: dict) -> None:
- super().__init__(params)
-
- @staticmethod
- def name() -> str:
- return "llava-13b"
-
- @staticmethod
- def placeholder_token_id() -> int:
- return 32000
-
- @staticmethod
- def llava_projector_shape() -> Tuple[int, int]:
- return (1024, 5120)
-
- @staticmethod
- def llava_projector_filename() -> str:
- return "mm_projector.bin"
-
- @staticmethod
- def llava_projector_repo() -> str:
- return "liuhaotian/LLaVA-13b-delta-v0"
-
-
-class LLaVA_v0_7B_Pipeline(LLaVA_v0_Pipeline):
- def __init__(self, params: dict) -> None:
- super().__init__(params)
-
- @staticmethod
- def name() -> str:
- return "llava-7b"
-
- @staticmethod
- def placeholder_token_id() -> int:
- return 32001
-
- @staticmethod
- def llava_projector_shape() -> Tuple[int, int]:
- return (1024, 4096)
-
- @staticmethod
- def llava_projector_filename() -> str:
- return "mm_projector.bin"
-
- @staticmethod
- def llava_projector_repo() -> str:
- return "liuhaotian/LLaVA-7b-delta-v0"
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py
deleted file mode 100644
index c9ea7d0d2f3d2fcf66d6f6e2aa0eb1a97a524bb6..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-
-import pkg_resources
-from setuptools import setup, find_packages
-
-setup(
- name="clip",
- py_modules=["clip"],
- version="1.0",
- description="",
- author="OpenAI",
- packages=find_packages(exclude=["tests*"]),
- install_requires=[
- str(r)
- for r in pkg_resources.parse_requirements(
- open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
- )
- ],
- include_package_data=True,
- extras_require={'dev': ['pytest']},
-)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py
deleted file mode 100644
index a6a8fc712fba02b033dea13bfe33204b8d3c9139..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# This is an improved version and model of HED edge detection with Apache License, Version 2.0.
-# Please use this implementation in your products
-# This implementation may produce slightly different results from Saining Xie's official implementation,
-# but it generates smoother edges and is more suitable for ControlNet and other image-to-image translations.
-# Unlike the official models and other implementations, this is an RGB-input model (rather than BGR),
-# so it works better with gradio's RGB protocol.
-
-import os
-import cv2
-import torch
-import numpy as np
-
-from einops import rearrange
-from annotator.util import annotator_ckpts_path
-
-
-class DoubleConvBlock(torch.nn.Module):
- def __init__(self, input_channel, output_channel, layer_number):
- super().__init__()
- self.convs = torch.nn.Sequential()
- self.convs.append(torch.nn.Conv2d(in_channels=input_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1))
- for i in range(1, layer_number):
- self.convs.append(torch.nn.Conv2d(in_channels=output_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1))
- self.projection = torch.nn.Conv2d(in_channels=output_channel, out_channels=1, kernel_size=(1, 1), stride=(1, 1), padding=0)
-
- def __call__(self, x, down_sampling=False):
- h = x
- if down_sampling:
- h = torch.nn.functional.max_pool2d(h, kernel_size=(2, 2), stride=(2, 2))
- for conv in self.convs:
- h = conv(h)
- h = torch.nn.functional.relu(h)
- return h, self.projection(h)
-
-
-class ControlNetHED_Apache2(torch.nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = torch.nn.Parameter(torch.zeros(size=(1, 3, 1, 1)))
- self.block1 = DoubleConvBlock(input_channel=3, output_channel=64, layer_number=2)
- self.block2 = DoubleConvBlock(input_channel=64, output_channel=128, layer_number=2)
- self.block3 = DoubleConvBlock(input_channel=128, output_channel=256, layer_number=3)
- self.block4 = DoubleConvBlock(input_channel=256, output_channel=512, layer_number=3)
- self.block5 = DoubleConvBlock(input_channel=512, output_channel=512, layer_number=3)
-
- def __call__(self, x):
- h = x - self.norm
- h, projection1 = self.block1(h)
- h, projection2 = self.block2(h, down_sampling=True)
- h, projection3 = self.block3(h, down_sampling=True)
- h, projection4 = self.block4(h, down_sampling=True)
- h, projection5 = self.block5(h, down_sampling=True)
- return projection1, projection2, projection3, projection4, projection5
-
-
-class HEDdetector:
- def __init__(self):
- remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth"
- modelpath = os.path.join(annotator_ckpts_path, "ControlNetHED.pth")
- if not os.path.exists(modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path)
- self.netNetwork = ControlNetHED_Apache2().float().cuda().eval()
- self.netNetwork.load_state_dict(torch.load(modelpath))
-
- def __call__(self, input_image):
- assert input_image.ndim == 3
- H, W, C = input_image.shape
- with torch.no_grad():
- image_hed = torch.from_numpy(input_image.copy()).float().cuda()
- image_hed = rearrange(image_hed, 'h w c -> 1 c h w')
- edges = self.netNetwork(image_hed)
- edges = [e.detach().cpu().numpy().astype(np.float32)[0, 0] for e in edges]
- edges = [cv2.resize(e, (W, H), interpolation=cv2.INTER_LINEAR) for e in edges]
- edges = np.stack(edges, axis=2)
- edge = 1 / (1 + np.exp(-np.mean(edges, axis=2).astype(np.float64)))
- edge = (edge * 255.0).clip(0, 255).astype(np.uint8)
- return edge
-
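The fusion at the end of `HEDdetector.__call__` resizes each of the five projection maps to the input size, averages them, and squashes the mean logit through a sigmoid before scaling to uint8. A numpy-only sketch of that fusion step (resizing omitted; maps are assumed already at a common resolution):

```python
import numpy as np

def fuse_edge_maps(edge_maps):
    # edge_maps: list of [H, W] float logit maps at the same resolution
    stacked = np.stack(edge_maps, axis=2)
    fused = 1.0 / (1.0 + np.exp(-np.mean(stacked, axis=2).astype(np.float64)))
    return (fused * 255.0).clip(0, 255).astype(np.uint8)

edge = fuse_edge_maps([np.zeros((2, 2)), np.zeros((2, 2))])  # sigmoid(0) = 0.5
```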
-
-def nms(x, t, s):
- x = cv2.GaussianBlur(x.astype(np.float32), (0, 0), s)
-
- f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8)
- f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
- f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8)
- f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8)
-
- y = np.zeros_like(x)
-
- for f in [f1, f2, f3, f4]:
- np.putmask(y, cv2.dilate(x, kernel=f) == x, x)
-
- z = np.zeros_like(y, dtype=np.uint8)
- z[y > t] = 255
- return z
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py
deleted file mode 100644
index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py
+++ /dev/null
@@ -1,362 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using MusicGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import os
-import typing as tp
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes, WavCondition
-from ..utils.autocast import TorchAutocast
-
-
-MelodyList = tp.List[tp.Optional[torch.Tensor]]
-MelodyType = tp.Union[torch.Tensor, MelodyList]
-
-
-class MusicGen:
- """MusicGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: float = 30):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- self.max_duration = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=15) # 15 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> int:
- """Roughly the number of AR steps per seconds."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'melody', device=None):
- """Return pretrained model, we provide four models:
- - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small
- - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium
- - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody
- - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large
- """
-
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device)
- lm = get_debug_lm_model(device)
- return MusicGen(name, compression_model, lm)
-
- if name not in HF_MODEL_CHECKPOINTS_MAP:
- if not os.path.isfile(name) and not os.path.isdir(name):
- raise ValueError(
- f"{name} is not a valid checkpoint name. "
- f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}"
- )
-
- cache_dir = os.environ.get('MUSICGEN_ROOT', None)
- compression_model = load_compression_model(name, device=device, cache_dir=cache_dir)
- lm = load_lm_model(name, device=device, cache_dir=cache_dir)
- if name == 'melody':
- lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True
-
- return MusicGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 30.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 18):
- """Set the generation parameters for MusicGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 30.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward passes for Classifier Free Guidance
- instead of batching the two together. This changes how things are padded slightly
- but seems to have little effect in practice.
- extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much
- should we extend the audio each time. Larger values mean less context is
- preserved, and shorter values require extra computation.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor:
- """Generate samples in an unconditional manner.
-
- Args:
- num_samples (int): Number of samples to be generated.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- descriptions: tp.List[tp.Optional[str]] = [None] * num_samples
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType,
- melody_sample_rate: int, progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text and melody.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as
- melody conditioning. Should have shape [B, C, T] with B matching the description length,
- C=1 or 2. It can be [C, T] if there is a single description. It can also be
- a list of [C, T] tensors.
- melody_sample_rate: (int): Sample rate of the melody waveforms.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if isinstance(melody_wavs, torch.Tensor):
- if melody_wavs.dim() == 2:
- melody_wavs = melody_wavs[None]
- if melody_wavs.dim() != 3:
- raise ValueError("Melody wavs should have a shape [B, C, T].")
- melody_wavs = list(melody_wavs)
- else:
- for melody in melody_wavs:
- if melody is not None:
- assert melody.dim() == 2, "One melody in the list has the wrong number of dims."
-
- melody_wavs = [
- convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels)
- if wav is not None else None
- for wav in melody_wavs]
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None,
- melody_wavs=melody_wavs)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- melody_wavs: tp.Optional[MelodyList] = None,
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms
- used as melody conditioning. Defaults to None.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if melody_wavs is None:
- for attr in attributes:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- if self.name != "melody":
- raise RuntimeError("This model doesn't support melody conditioning. "
- "Use the `melody` model.")
- assert len(melody_wavs) == len(descriptions), \
- f"number of melody wavs must match number of descriptions! " \
- f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}"
- for attr, melody in zip(attributes, melody_wavs):
- if melody is None:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- attr.wav['self_wav'] = WavCondition(
- melody.to(device=self.device),
- torch.tensor([melody.shape[-1]], device=self.device))
-
- if prompt is not None:
- if descriptions is not None:
- assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody).
- prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- # now this gets a bit messier, we need to handle prompts,
- # melody conditioning etc.
- ref_wavs = [attr.wav['self_wav'] for attr in attributes]
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- for attr, ref_wav in zip(attributes, ref_wavs):
- wav_length = ref_wav.length.item()
- if wav_length == 0:
- continue
- # We will extend the wav periodically if it is not long enough.
- # we have to do it here rather than in conditioners.py as otherwise
- # we wouldn't have the full wav.
- initial_position = int(time_offset * self.sample_rate)
- wav_target_length = int(self.max_duration * self.sample_rate)
- positions = torch.arange(initial_position,
- initial_position + wav_target_length, device=self.device)
- attr.wav['self_wav'] = WavCondition(
- ref_wav[0][:, positions % wav_length],
- torch.full_like(ref_wav[1], wav_target_length))
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
-
- # generate audio
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
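The long-generation branch of `_generate_tokens` above advances in strides of `extend_stride` seconds, re-feeding the tail of each generated chunk as the next prompt. A small pure-Python sketch of just that window arithmetic (the helper name and return shape are mine, not audiocraft's):

```python
def plan_chunks(duration, max_duration, extend_stride, frame_rate):
    # Returns (token_offset, chunk_token_len) pairs mirroring the while-loop
    # in _generate_tokens for the duration > max_duration case.
    total = int(duration * frame_rate)
    stride = int(frame_rate * extend_stride)
    offset, prompt_len, chunks = 0, 0, []
    while offset + prompt_len < total:
        time_offset = offset / frame_rate
        chunk_len = int(min(duration - time_offset, max_duration) * frame_rate)
        chunks.append((offset, chunk_len))
        prompt_len = chunk_len - stride  # tail kept as the next prompt
        offset += stride
    return chunks

windows = plan_chunks(duration=60, max_duration=30, extend_stride=18, frame_rate=50)
```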
diff --git a/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py b/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py
deleted file mode 100644
index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
-
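`maximum_path` defers the dynamic program to a numba-jitted kernel (`maximum_path_jit`). For reference, here is a pure-Python version of the same monotonic alignment DP for a single `[t_t, t_s]` score matrix; this is my reconstruction of the usual VITS-style kernel, not the jitted code itself:

```python
def maximum_path_ref(neg_cent):
    # neg_cent: [t_t][t_s] scores; returns a 0/1 path of the same shape that
    # maximizes the summed score under the monotonic alignment constraint.
    t_t, t_s = len(neg_cent), len(neg_cent[0])
    NEG_INF = float("-inf")
    v = [[NEG_INF] * t_s for _ in range(t_t)]
    for y in range(t_t):
        for x in range(min(y + 1, t_s)):  # at row y, at most x = y is reachable
            if y == 0:
                prev = 0.0 if x == 0 else NEG_INF
            else:
                prev = max(v[y - 1][x],
                           v[y - 1][x - 1] if x > 0 else NEG_INF)
            v[y][x] = prev + neg_cent[y][x]
    path = [[0] * t_s for _ in range(t_t)]
    x = t_s - 1
    for y in range(t_t - 1, -1, -1):  # backtrack from the last column
        path[y][x] = 1
        if y > 0 and x != 0 and (x == y or v[y - 1][x - 1] >= v[y - 1][x]):
            x -= 1
    return path
```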
diff --git a/spaces/Bart92/RVC_HF/demucs/tasnet.py b/spaces/Bart92/RVC_HF/demucs/tasnet.py
deleted file mode 100644
index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/demucs/tasnet.py
+++ /dev/null
@@ -1,452 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-# Created on 2018/12
-# Author: Kaituo XU
-# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels
-# Here is the original license:
-# The MIT License (MIT)
-#
-# Copyright (c) 2018 Kaituo XU
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .utils import capture_init
-
-EPS = 1e-8
-
-
-def overlap_and_add(signal, frame_step):
- outer_dimensions = signal.size()[:-2]
- frames, frame_length = signal.size()[-2:]
-
- subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor
- subframe_step = frame_step // subframe_length
- subframes_per_frame = frame_length // subframe_length
- output_size = frame_step * (frames - 1) + frame_length
- output_subframes = output_size // subframe_length
-
- subframe_signal = signal.view(*outer_dimensions, -1, subframe_length)
-
- frame = torch.arange(0, output_subframes,
- device=signal.device).unfold(0, subframes_per_frame, subframe_step)
- frame = frame.long() # signal may be on GPU or CPU
- frame = frame.contiguous().view(-1)
-
- result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length)
- result.index_add_(-2, frame, subframe_signal)
- result = result.view(*outer_dimensions, -1)
- return result
-
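`overlap_and_add` above reconstructs a signal from overlapping frames via a gcd-subframe trick plus `index_add_`. The net effect is plain overlap-add, shown below as an illustrative pure-Python reference without the subframe optimization:

```python
def overlap_and_add_ref(frames, frame_step):
    # frames: list of equal-length frames; hop between consecutive frames is frame_step
    frame_length = len(frames[0])
    out = [0.0] * (frame_step * (len(frames) - 1) + frame_length)
    for i, frame in enumerate(frames):
        for j, v in enumerate(frame):
            out[i * frame_step + j] += v  # overlapping samples accumulate
    return out

signal = overlap_and_add_ref([[1, 1, 1, 1], [1, 1, 1, 1]], frame_step=2)
```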
-
-class ConvTasNet(nn.Module):
- @capture_init
- def __init__(self,
- sources,
- N=256,
- L=20,
- B=256,
- H=512,
- P=3,
- X=8,
- R=4,
- audio_channels=2,
- norm_type="gLN",
- causal=False,
- mask_nonlinear='relu',
- samplerate=44100,
- segment_length=44100 * 2 * 4):
- """
- Args:
- sources: list of sources
- N: Number of filters in autoencoder
- L: Length of the filters (in samples)
- B: Number of channels in bottleneck 1 × 1-conv block
- H: Number of channels in convolutional blocks
- P: Kernel size in convolutional blocks
- X: Number of convolutional blocks in each repeat
- R: Number of repeats
- norm_type: BN, gLN, cLN
- causal: causal or non-causal
- mask_nonlinear: use which non-linear function to generate mask
- """
- super(ConvTasNet, self).__init__()
- # Hyper-parameter
- self.sources = sources
- self.C = len(sources)
- self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R
- self.norm_type = norm_type
- self.causal = causal
- self.mask_nonlinear = mask_nonlinear
- self.audio_channels = audio_channels
- self.samplerate = samplerate
- self.segment_length = segment_length
- # Components
- self.encoder = Encoder(L, N, audio_channels)
- self.separator = TemporalConvNet(
- N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear)
- self.decoder = Decoder(N, L, audio_channels)
- # init
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_normal_(p)
-
- def valid_length(self, length):
- return length
-
- def forward(self, mixture):
- """
- Args:
- mixture: [M, audio_channels, T], M is batch size, T is #samples
- Returns:
- est_source: [M, C, audio_channels, T]
- """
- mixture_w = self.encoder(mixture)
- est_mask = self.separator(mixture_w)
- est_source = self.decoder(mixture_w, est_mask)
-
- # T changed after conv1d in encoder, fix it here
- T_origin = mixture.size(-1)
- T_conv = est_source.size(-1)
- est_source = F.pad(est_source, (0, T_origin - T_conv))
- return est_source
-
-
-class Encoder(nn.Module):
- """Estimation of the nonnegative mixture weight by a 1-D conv layer.
- """
- def __init__(self, L, N, audio_channels):
- super(Encoder, self).__init__()
- # Hyper-parameter
- self.L, self.N = L, N
- # Components
- # 50% overlap
- self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False)
-
- def forward(self, mixture):
- """
- Args:
- mixture: [M, audio_channels, T], M is batch size, T is #samples
- Returns:
- mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1
- """
- mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K]
- return mixture_w
-
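The Encoder docstring's relation `K = (T-L)/(L/2)+1 = 2T/L-1` follows from a kernel of length `L` sliding at stride `L // 2` (50% overlap) with no padding. As a quick sanity check (helper name is mine):

```python
def num_frames(T, L):
    # Frames produced by a length-L kernel at stride L // 2, no padding.
    return (T - L) // (L // 2) + 1

k = num_frames(T=100, L=20)  # 2*100/20 - 1 = 9
```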
-
-class Decoder(nn.Module):
- def __init__(self, N, L, audio_channels):
- super(Decoder, self).__init__()
- # Hyper-parameter
- self.N, self.L = N, L
- self.audio_channels = audio_channels
- # Components
- self.basis_signals = nn.Linear(N, audio_channels * L, bias=False)
-
- def forward(self, mixture_w, est_mask):
- """
- Args:
- mixture_w: [M, N, K]
- est_mask: [M, C, N, K]
- Returns:
- est_source: [M, C, T]
- """
- # D = W * M
- source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K]
- source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N]
- # S = DV
- est_source = self.basis_signals(source_w) # [M, C, K, ac * L]
- m, c, k, _ = est_source.size()
- est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous()
- est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T
- return est_source
-
-
-class TemporalConvNet(nn.Module):
- def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'):
- """
- Args:
- N: Number of filters in autoencoder
- B: Number of channels in bottleneck 1 × 1-conv block
- H: Number of channels in convolutional blocks
- P: Kernel size in convolutional blocks
- X: Number of convolutional blocks in each repeat
- R: Number of repeats
- C: Number of speakers
- norm_type: BN, gLN, cLN
- causal: causal or non-causal
- mask_nonlinear: use which non-linear function to generate mask
- """
- super(TemporalConvNet, self).__init__()
- # Hyper-parameter
- self.C = C
- self.mask_nonlinear = mask_nonlinear
- # Components
- # [M, N, K] -> [M, N, K]
- layer_norm = ChannelwiseLayerNorm(N)
- # [M, N, K] -> [M, B, K]
- bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False)
- # [M, B, K] -> [M, B, K]
- repeats = []
- for r in range(R):
- blocks = []
- for x in range(X):
- dilation = 2**x
- padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2
- blocks += [
- TemporalBlock(B,
- H,
- P,
- stride=1,
- padding=padding,
- dilation=dilation,
- norm_type=norm_type,
- causal=causal)
- ]
- repeats += [nn.Sequential(*blocks)]
- temporal_conv_net = nn.Sequential(*repeats)
- # [M, B, K] -> [M, C*N, K]
- mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False)
- # Put together
- self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net,
- mask_conv1x1)
-
- def forward(self, mixture_w):
- """
- Keep this API same with TasNet
- Args:
- mixture_w: [M, N, K], M is batch size
- returns:
- est_mask: [M, C, N, K]
- """
- M, N, K = mixture_w.size()
- score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K]
- score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K]
- if self.mask_nonlinear == 'softmax':
- est_mask = F.softmax(score, dim=1)
- elif self.mask_nonlinear == 'relu':
- est_mask = F.relu(score)
- else:
- raise ValueError("Unsupported mask non-linear function")
- return est_mask
-
-
-class TemporalBlock(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- dilation,
- norm_type="gLN",
- causal=False):
- super(TemporalBlock, self).__init__()
- # [M, B, K] -> [M, H, K]
- conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False)
- prelu = nn.PReLU()
- norm = chose_norm(norm_type, out_channels)
- # [M, H, K] -> [M, B, K]
- dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding,
- dilation, norm_type, causal)
- # Put together
- self.net = nn.Sequential(conv1x1, prelu, norm, dsconv)
-
- def forward(self, x):
- """
- Args:
- x: [M, B, K]
- Returns:
- [M, B, K]
- """
- residual = x
- out = self.net(x)
- # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad?
- return out + residual  # empirically, skipping the final F.relu here works better than applying it
- # return F.relu(out + residual)
-
-
-class DepthwiseSeparableConv(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- dilation,
- norm_type="gLN",
- causal=False):
- super(DepthwiseSeparableConv, self).__init__()
- # Use `groups` option to implement depthwise convolution
- # [M, H, K] -> [M, H, K]
- depthwise_conv = nn.Conv1d(in_channels,
- in_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=in_channels,
- bias=False)
- if causal:
- chomp = Chomp1d(padding)
- prelu = nn.PReLU()
- norm = chose_norm(norm_type, in_channels)
- # [M, H, K] -> [M, B, K]
- pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False)
- # Put together
- if causal:
- self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv)
- else:
- self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv)
-
- def forward(self, x):
- """
- Args:
- x: [M, H, K]
- Returns:
- result: [M, B, K]
- """
- return self.net(x)
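The point of the depthwise/pointwise split above is parameter (and FLOP) savings over a dense `Conv1d`: roughly H·P + H·B weights instead of H·B·P. A quick check with illustrative sizes (H, B, P here are made-up values in the spirit of the Conv-TasNet paper, not read from this file):

```python
H, B, P = 512, 128, 3            # illustrative channel counts and kernel size

full = H * B * P                 # dense Conv1d(H, B, P) with bias=False
separable = H * P + H * B        # depthwise (groups=H, kernel P) + 1x1 pointwise

assert separable < full
print(f"separable conv uses {separable}/{full} weights "
      f"({separable / full:.1%} of the dense layer)")
```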
-
-
-class Chomp1d(nn.Module):
- """To ensure the output length is the same as the input.
- """
- def __init__(self, chomp_size):
- super(Chomp1d, self).__init__()
- self.chomp_size = chomp_size
-
- def forward(self, x):
- """
- Args:
- x: [M, H, Kpad]
- Returns:
- [M, H, K]
- """
- return x[:, :, :-self.chomp_size].contiguous()
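The interplay between the `(P - 1) * dilation` padding computed in `TemporalConvNet` and this `Chomp1d` trim can be checked without torch, using the output-length formula from the `nn.Conv1d` documentation (the helper name and sizes below are illustrative):

```python
def conv1d_out_len(length, kernel_size, stride=1, padding=0, dilation=1):
    """Output length of nn.Conv1d per the PyTorch docs."""
    return (length + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

K, P = 100, 3                     # sequence length and (odd) kernel size
for x in range(3):                # blocks with exponentially growing dilation
    d = 2 ** x
    causal_pad = (P - 1) * d
    noncausal_pad = (P - 1) * d // 2
    # Causal: symmetric padding overshoots; Chomp1d cuts `causal_pad` off the right.
    assert conv1d_out_len(K, P, padding=causal_pad, dilation=d) - causal_pad == K
    # Non-causal: half padding already preserves the length (odd P only, which
    # matches the "P = 2 maybe need to pad?" TODO in TemporalBlock).
    assert conv1d_out_len(K, P, padding=noncausal_pad, dilation=d) == K
```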
-
-
-def chose_norm(norm_type, channel_size):
- """The input of normlization will be (M, C, K), where M is batch size,
- C is channel size and K is sequence length.
- """
- if norm_type == "gLN":
- return GlobalLayerNorm(channel_size)
- elif norm_type == "cLN":
- return ChannelwiseLayerNorm(channel_size)
- elif norm_type == "id":
- return nn.Identity()
- else: # norm_type == "BN":
- # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statistics
- # along M and K, so this BN usage is correct.
- return nn.BatchNorm1d(channel_size)
-
-
-# TODO: Use nn.LayerNorm to impl cLN to speed up
-class ChannelwiseLayerNorm(nn.Module):
- """Channel-wise Layer Normalization (cLN)"""
- def __init__(self, channel_size):
- super(ChannelwiseLayerNorm, self).__init__()
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.reset_parameters()
-
- def reset_parameters(self):
- self.gamma.data.fill_(1)
- self.beta.data.zero_()
-
- def forward(self, y):
- """
- Args:
- y: [M, N, K], M is batch size, N is channel size, K is length
- Returns:
- cLN_y: [M, N, K]
- """
- mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K]
- var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K]
- cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
- return cLN_y
-
-
-class GlobalLayerNorm(nn.Module):
- """Global Layer Normalization (gLN)"""
- def __init__(self, channel_size):
- super(GlobalLayerNorm, self).__init__()
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.reset_parameters()
-
- def reset_parameters(self):
- self.gamma.data.fill_(1)
- self.beta.data.zero_()
-
- def forward(self, y):
- """
- Args:
- y: [M, N, K], M is batch size, N is channel size, K is length
- Returns:
- gLN_y: [M, N, K]
- """
- # TODO: torch.mean() supports a dim list since torch 1.0; the two-step mean could be collapsed
- mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1]
- var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True)
- gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
- return gLN_y
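The only difference between the two norms is which axes the statistics run over: cLN averages across channels per time step, gLN over channels and time jointly. A tiny torch-free illustration (the values are made up):

```python
y = [[1.0, 2.0, 3.0],   # one sample: N = 2 channels ...
     [3.0, 4.0, 5.0]]   # ... of K = 3 time steps each

# cLN: one mean per time step, computed across channels (dim=1 above).
cln_means = [sum(col) / len(col) for col in zip(*y)]
# gLN: a single mean over all channels and time steps.
gln_mean = sum(sum(channel) for channel in y) / (len(y) * len(y[0]))

assert cln_means == [2.0, 3.0, 4.0]   # varies along time
assert gln_mean == 3.0                # one scalar per sample
```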
-
-
-if __name__ == "__main__":
- torch.manual_seed(123)
- M, N, L, T = 2, 3, 4, 12
- K = 2 * T // L - 1
- B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False
- mixture = torch.randint(3, (M, T))
- # test Encoder
- encoder = Encoder(L, N)
- encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size())
- mixture_w = encoder(mixture)
- print('mixture', mixture)
- print('U', encoder.conv1d_U.weight)
- print('mixture_w', mixture_w)
- print('mixture_w size', mixture_w.size())
-
- # test TemporalConvNet
- separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal)
- est_mask = separator(mixture_w)
- print('est_mask', est_mask)
-
- # test Decoder
- decoder = Decoder(N, L)
- est_mask = torch.randint(2, (B, K, C, N))
- est_source = decoder(mixture_w, est_mask)
- print('est_source', est_source)
-
- # test Conv-TasNet
- conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type)
- est_source = conv_tasnet(mixture)
- print('est_source', est_source)
- print('est_source size', est_source.size())
diff --git a/spaces/CVPR/CVPR2022_papers/paper_list.py b/spaces/CVPR/CVPR2022_papers/paper_list.py
deleted file mode 100644
index e242466fa3d25d428ea8d52f0765474374c6c652..0000000000000000000000000000000000000000
--- a/spaces/CVPR/CVPR2022_papers/paper_list.py
+++ /dev/null
@@ -1,102 +0,0 @@
-from __future__ import annotations
-
-import pandas as pd
-
-
-class PaperList:
- def __init__(self):
- self.table = pd.read_csv('papers.csv')
- self._preprocess_table()
-
- self.table_header = '''
- <tr>
- <td>Paper</td>
- <td>Authors</td>
- <td>pdf</td>
- <td>Supp</td>
- <td>arXiv</td>
- <td>GitHub</td>
- <td>HF Spaces</td>
- <td>HF Models</td>
- <td>HF Datasets</td>
- </tr>'''
-
- def _preprocess_table(self) -> None:
- self.table['title_lowercase'] = self.table.title.str.lower()
-
- rows = []
- for row in self.table.itertuples():
- paper = f'{row.title}'
- pdf = f'pdf'
- supp = f'supp' if isinstance(
- row.supp, str) else ''
- arxiv = f'arXiv' if isinstance(
- row.arxiv, str) else ''
- github = f'GitHub' if isinstance(
- row.github, str) else ''
- hf_space = f'Space' if isinstance(
- row.hf_space, str) else ''
- hf_model = f'Model' if isinstance(
- row.hf_model, str) else ''
- hf_dataset = f'Dataset' if isinstance(
- row.hf_dataset, str) else ''
- row = f'''
-
Domino's Base APK is a file that lets you download and install the app for Domino's Pizza on your Android device without using the Google Play Store. It has some advantages over the regular app, such as access to exclusive features, deals, and discounts. It also has some amazing features, such as integration with voice assistant, smartwatch, and car, and option to pay with cash, card, PayPal, or Apple Pay. It is easy to download and install Domino's Base APK, as well as to use it to order pizza from Domino's. However, it is not the only app that lets you order pizza or food online. There are some alternatives that you might want to try if you want to compare prices, quality, or variety.
-
FAQs
-
Here are some frequently asked questions about Domino's Base APK:
-
Is Domino's Base APK safe?
-
Domino's Base APK is generally safe to download and install on your device. However, you have to be careful about the source of the APK file, as some websites might contain malware or viruses that can harm your device or steal your data. To avoid this, you should look for websites that have positive reviews, ratings, and feedback from other users. You can also use antivirus software or VPN services to protect your device and your privacy.
-
Is Domino's Base APK legal?
-
Domino's Base APK is legal to download and install on your device. However, it might violate some terms and conditions of Domino's Pizza or Google Play Store. For example, it might bypass some restrictions or policies set by these parties. Therefore, you should use Domino's Base APK at your own risk and discretion.
-
Is Domino's Base APK free?
-
Domino's Base APK is free to download and install on your device. However, you might have to pay for some items or services within the app. For example, you have to pay for your pizza order, delivery fee, tip, or taxes. You might also have to pay for some premium features or subscriptions within the app.
-
How do I update Domino's Base APK?
-
To update Domino's Base APK, you have to follow the same steps as downloading and installing it. You have to find a website that offers the latest version of the APK file and download and install it on your device. You might also have to uninstall the previous version of the app before installing the new one.
-
How do I uninstall Domino's Base APK?
-
To uninstall Domino's Base APK, you have to follow these steps:
-
-
Go to your device's settings and look for apps or applications options.
-
Find Domino's Base APK and tap on it.
-
Tap on "Uninstall" or "Delete" and confirm your action.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md b/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md
deleted file mode 100644
index 76ea8ac7c8aa49a4922b173069e8f086dd37aaac..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md
+++ /dev/null
@@ -1,187 +0,0 @@
-
-
PES ISO File Download 2023: How to Get the Latest Version of eFootball
-
If you are a fan of soccer games, you might have heard of PES, or Pro Evolution Soccer, one of the most popular franchises in the genre. PES is known for its realistic graphics, gameplay, and licensed teams and players. However, if you want to enjoy the latest version of PES, which is now called eFootball 2023, you might need to download a PES ISO file.
A PES ISO file is a compressed file that contains all the data and files needed to run the game on different devices, such as PC, PlayStation, Xbox, or mobile phones. By downloading a PES ISO file, you can play eFootball 2023 without having to buy the game or install it from a disc.
-
In this article, we will explain what a PES ISO file is, what are its benefits, what is eFootball 2023, how to download a PES ISO file 2023, and how to play eFootball 2023 with a PES ISO file. Let's get started!
-
What is PES ISO File?
-
An ISO file is a disc image: a single file that contains an exact copy of an optical disc, such as a CD or DVD. An ISO file can be used to back up a disc or to transfer its contents to another device.
-
A PES ISO file is an ISO file that contains all the data and files needed to run a PES game on different devices. A PES ISO file can be created by ripping a PES disc, or by downloading it from a trusted source online.
-
Benefits of PES ISO File
-
There are several benefits of using a PES ISO file to play eFootball 2023, such as:
-
-
-
You can play eFootball 2023 without having to buy the game or install it from a disc.
-
You can play eFootball 2023 on any device that supports an emulator, such as PC, PlayStation, Xbox, or mobile phones.
-
You can play eFootball 2023 with improved graphics and performance, as well as custom patches and mods.
-
You can play eFootball 2023 offline or online with other players who use a PES ISO file.
-
-
What is eFootball 2023?
-
eFootball 2023 is the latest installment in the PES series, developed and published by Konami. The eFootball platform launched on September 30, 2021 for PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, and mobile devices, and the 2023 season update followed in 2022.
-
eFootball 2023 is a free-to-play game that offers a new football experience with unparalleled realism and gameplay. It features full national team squads of Euro 2023, more realistic animations, player models, enhanced physics, photorealistic visuals, and improved artificial intelligence.
-
eFootball 2023 also has a large eSports platform for football fans around the world to enjoy the best head-to-head experience, no matter their device of choice. It has various modes and features, such as:
-
Features of eFootball 2023
-
-
eFootball - eFootball is the main mode of eFootball 2023, where you can play online matches with other players around the world. You can choose from various match types, such as 1v1, 2v2, 3v3, or 11v11. You can also join or create a clan and compete in clan battles and tournaments. You can earn eFootball points by playing eFootball matches, which you can use to unlock new players, kits, stadiums, and more.
-
Master League - Master League is the classic single-player mode of PES, where you can create your own club and manage it from the ground up. You can sign players, hire staff, set tactics, train your team, and compete in various leagues and cups. You can also experience a realistic transfer market, where players have their own personalities, preferences, and values. You can also customize your club's logo, kit, stadium, and sponsors.
-
Matchday - Matchday is a special mode that reflects the real-life events and matches of the football world. You can choose a side and play online matches with other players who support the same team. You can earn points for your side by winning matches and scoring goals. The points are accumulated throughout the week and determine the winner of the Matchday event. You can also watch the grand final match between the top players of each side and cheer for your team.
-
Edit Mode - Edit Mode is a mode that allows you to customize various aspects of the game, such as players, teams, leagues, kits, stadiums, balls, and more. You can create your own original content or download content created by other users. You can also apply custom patches and mods to enhance your game experience.
-
-
System Requirements for eFootball 2023
-
To play eFootball 2023 on PC, you need to meet the following system requirements:
-
-
-
| Component | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows 10 64-bit | Windows 10 64-bit |
| CPU | Intel Core i5-3470 or AMD FX-4350 | Intel Core i7-3770 or AMD FX-8350 |
| RAM | 8 GB | 16 GB |
| GPU | NVIDIA GeForce GTX 670 or AMD Radeon HD 7870 | NVIDIA GeForce GTX 760 or AMD Radeon R9 270X |
| DirectX | Version 11 | Version 11 |
| Storage | 40 GB available space | 40 GB available space |
| Network | Broadband Internet connection | Broadband Internet connection |
| Sound Card | DirectX compatible soundcard or onboard chipset | DirectX compatible soundcard or onboard chipset |
-
How to Download PES ISO File 2023
-
To play eFootball 2023 with a PES ISO file, you need to follow these steps:
-
Step 1: Download the PES ISO File from a Trusted Source
-
The first step is to download the PES ISO file from a trusted source online. There are many websites that offer PES ISO files for download, but you need to be careful and avoid any malicious or fake links. Some of the trusted sources that we recommend are:
-
-
[PES Patch]: This website offers various PES patches, mods, updates, and ISO files for download. You can find the latest PES ISO file 2023 here.
-
[PES Universe]: This website is a community of PES fans who create and share custom content for the game. You can find the latest PES ISO file 2023 here.
-
[PES Mobile]: This website is dedicated to PES mobile games. You can find the latest PES ISO file 2023 here.
-
[PES Futebol]: This website is another source of PES patches, mods, updates, and ISO files for download. You can find the latest PES ISO file 2023 here.
-
Step 2: Extract the PES ISO File Using a Zip Extractor
-
The second step is to extract the PES ISO file using a zip extractor. A zip extractor is software that decompresses and extracts files from a compressed archive, such as a zip file. Some of the zip extractors that we recommend are:
-
-
[WinRAR]: This is a popular and powerful zip extractor that can handle various types of compressed files, such as rar, zip, 7z, iso, and more. You can download WinRAR here.
-
[7-Zip]: This is a free and open-source zip extractor that can also handle various types of compressed files, such as zip, rar, 7z, iso, and more. You can download 7-Zip here.
-
[ZArchiver]: This is a zip extractor for mobile devices that can also handle various types of compressed files, such as zip, rar, 7z, iso, and more. You can download ZArchiver here.
-
-
To extract the PES ISO file using a zip extractor, you need to follow these steps:
-
-
Locate the PES ISO file that you have downloaded on your device.
-
Right-click on the PES ISO file and select "Extract Here" or "Extract to" depending on your zip extractor.
-
Wait for the extraction process to finish. You should see a folder with the same name as the PES ISO file.
-
Open the folder and you should see the PES ISO file inside.
-
-
Step 3: Transfer the PES ISO File to Your Device
-
The third step is to transfer the PES ISO file to your device. Depending on what device you want to play eFootball 2023 on, you need to transfer the PES ISO file to a specific location on your device. Here are some examples:
-
-
If you want to play eFootball 2023 on PC, you need to transfer the PES ISO file to a folder where you have installed an emulator, such as C:\Program Files\PCSX2\isos.
-
If you want to play eFootball 2023 on PlayStation, you need to transfer the PES ISO file to a USB flash drive or an external hard drive that is formatted in FAT32 or exFAT.
-
If you want to play eFootball 2023 on Xbox, you need to transfer the PES ISO file to a USB flash drive or an external hard drive that is formatted in NTFS or exFAT.
-
If you want to play eFootball 2023 on mobile phones, you need to transfer the PES ISO file to a folder on your internal storage or SD card, such as Android\data\com.pesmobile\files\isos.
-
-
To transfer the PES ISO file to your device, you need to follow these steps:
-
-
Connect your device to your PC using a USB cable or a wireless connection.
-
Open your device's storage on your PC and locate the folder where you want to transfer the PES ISO file.
-
Drag and drop the PES ISO file from your PC to your device's folder.
-
Wait for the transfer process to finish. You should see the PES ISO file on your device's folder.
-
-
Step 4: Install an Emulator to Run the PES ISO File
-
The fourth step is to install an emulator to run the PES ISO file. An emulator is software that simulates another device's hardware and software on your own device, letting you run games and applications that are not natively compatible with it.
-
To play eFootball 2023 with a PES ISO file, you need to install an emulator that matches the console version of your ISO file, such as:
-
-
[PCSX2]: This is a popular and powerful emulator for PC that can run PlayStation 2 games with high compatibility and performance. You can download PCSX2 here.
-
[PPSSPP]: This is a popular and powerful emulator for mobile devices that can run PlayStation Portable games with high compatibility and performance. You can download PPSSPP here.
-
-
To install an emulator on your device, you need to follow these steps:
-
-
Download the emulator from its official website or app store.
-
Run the installer or open the app and follow the instructions on the screen.
-
Configure the settings and controls of the emulator according to your preference.
-
Make sure that you have installed the necessary BIOS files and plugins for the emulator to run the PES ISO file. You can find the BIOS files and plugins here.
-
-
How to Play eFootball 2023 with PES ISO File
-
The final step is to play eFootball 2023 with a PES ISO file. To do this, you need to follow these steps:
-
Step 1: Launch the Emulator and Locate the PES ISO File
-
Open the emulator that you have installed on your device and locate the PES ISO file that you have transferred to your device. You can use the file browser or the game library of the emulator to find the PES ISO file.
-
Select the PES ISO file and press the play button or double-click on it to launch the game. You should see the game loading screen and then the main menu of eFootball 2023.
-
Step 2: Adjust the Settings and Controls According to Your Preference
-
Before you start playing eFootball 2023, you might want to adjust the settings and controls of the game according to your preference. You can access the settings and controls menu from the main menu of eFootball 2023 or from the emulator's menu.
-
You can change various aspects of the game, such as the language, difficulty, camera angle, sound volume, graphics quality, and more. You can also customize the controls of the game, such as the buttons, joysticks, keyboard, mouse, or touch screen.
-
Make sure that you save your settings and controls before you exit the menu.
-
Step 3: Enjoy the Game with Realistic Graphics and Gameplay
-
Now you are ready to enjoy eFootball 2023 with a PES ISO file. You can choose from various modes and features of the game, such as eFootball, Master League, Matchday, Edit Mode, and more.
-
You can also play offline or online with other players who use a PES ISO file. You can join or create a clan and compete in clan battles and tournaments. You can also earn eFootball points by playing eFootball matches, which you can use to unlock new players, kits, stadiums, and more.
-
You can also enjoy the realistic graphics and gameplay of eFootball 2023, which are enhanced by using a PES ISO file. You can see the full national team squads of Euro 2023, more realistic animations, player models, enhanced physics, photorealistic visuals, and improved artificial intelligence.
-
Conclusion
-
In this article, we have explained what a PES ISO file is, what are its benefits, what is eFootball 2023, how to download a PES ISO file 2023, and how to play eFootball 2023 with a PES ISO file.
-
We hope that this article has helped you to understand how to get the latest version of eFootball 2023 by using a PES ISO file. By following these steps, you can enjoy eFootball 2023 without having to buy the game or install it from a disc.
-
If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
Here are some of the frequently asked questions about PES ISO file download 2023:
-
-
Q: Is it legal to download a PES ISO file? - A: It depends on your country's laws and regulations regarding intellectual property rights and piracy. Generally speaking, it is not legal to download a PES ISO file if you do not own a copy of the original game or if you do not have permission from the game developer or publisher. However, some countries may allow downloading a PES ISO file for personal use or backup purposes only.
-
Q: Is it safe to download a PES ISO file? - A: It depends on where you download it from. There are many websites that offer PES ISO files for download, but some of them may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should always download a PES ISO file from a trusted source online, such as the ones we have recommended in this article.
-
Q: What is the difference between PES and eFootball? - A: PES and eFootball are both names of the same game series developed and published by Konami. However, starting from 2021, Konami decided to rebrand PES as eFootball to reflect its focus on online gaming and eSports. Therefore, eFootball is the new name of PES from 2021 onwards.
-
Q: What is the size of the PES ISO file 2023? - A: The size of the PES ISO file 2023 may vary depending on the source and the version of the file. However, the average size of the PES ISO file 2023 is around 4 GB. You should make sure that you have enough storage space on your device before downloading the PES ISO file 2023.
-
Q: How can I update the PES ISO file 2023? - A: You can update the PES ISO file 2023 by downloading and applying the latest patches, mods, and updates from the trusted sources that we have recommended in this article. You can also check the official website or social media accounts of Konami for any news or announcements regarding eFootball 2023 updates.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md b/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md
deleted file mode 100644
index c47c0db0f3f1173a318c2cd9ccc514efceecb81a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Google Play Store APK 6: What You Need to Know
-
If you are an Android user, you probably know what Google Play Store is. It is the official app store for Android devices, where you can find millions of apps, games, books, and more. But did you know that there is a new version of Google Play Store available? It is called Google Play Store APK 6, and it comes with some new features and improvements. In this article, we will tell you everything you need to know about Google Play Store APK 6, including what it is, why you need it, how to download and install it, and how to use it.
-
What is Google Play Store APK 6?
-
The official app store for Android devices
-
Google Play Store is the official app store for Android devices. It is developed and maintained by Google, and it offers a variety of apps, games, books, and more for Android users. You can use Google Play Store to browse and search for apps, games, books, and more that suit your needs and preferences. You can also use Google Play Store to download and update apps, games, books, and more on your device. You can also use Google Play Store to manage your account and settings, such as payment methods, parental controls, subscriptions, etc.
Google Play Store APK 6 is the latest version of Google Play Store. It was released in June 2023, and it comes with some new features and improvements. Some of the new features and improvements include:
-
-
A new design that makes browsing and searching easier and faster
-
A new section that shows personalized recommendations based on your interests and behavior
-
A new feature that lets you pre-register for upcoming apps and games
-
A new feature that lets you share apps and games with your friends via Nearby Share
-
A new feature that lets you see app ratings and reviews from trusted sources
-
Improved performance and stability
-
-
Why do you need Google Play Store APK 6?
-
To access millions of apps, games, books, and more
-
One of the main reasons why you need Google Play Store APK 6 is to access millions of apps, games, books, and more on your Android device. Google Play Store has over 3 million apps, over 1 million games, over 5 million books, and more for you to choose from. You can find apps, games, books, and more for every category, genre, interest, purpose, and occasion. Whether you want to play games, read books, watch movies, listen to music, learn something new, or do anything else on your device, you can find what you need on Google Play Store.
-
To enjoy new features and improvements
-
Another reason to install Google Play Store APK 6 is to enjoy the new features mentioned earlier: faster, easier browsing and search, personalized recommendations based on your interests and behavior, pre-registration for upcoming apps and games, app sharing with friends via Nearby Share, and app ratings and reviews from trusted sources.
All these new features and improvements make Google Play Store APK 6 more user-friendly, convenient, and fun. You can enjoy a better app store experience with Google Play Store APK 6.
-
How to download and install Google Play Store APK 6?
-
Check your device compatibility and settings
-
Before you download and install Google Play Store APK 6, you need to check your device compatibility and settings. Google Play Store APK 6 is compatible with Android devices running Android 4.1 or higher. You also need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the official Google Play Store.
-
Download the APK file from a trusted source
-
After you check your device compatibility and settings, you need to download the APK file from a trusted source. An APK file is an Android application package file that contains the app's code, resources, and manifest. You can download the Google Play Store APK 6 file from various websites that offer APK downloads, such as APKMirror, APKPure, or Uptodown. Make sure you download the latest version of the file, which is 6.0.5 as of June 2023. You can also scan the file with an antivirus software before installing it to ensure its safety.
-
Install the APK file on your device
-
Once you download the APK file, you need to install it on your device. To do this, locate the file on your device's storage and tap on it. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to complete. You may also see a prompt asking you to grant permissions to the app. Tap on Accept and continue. After the installation is done, you will see a message saying that the app is installed. You can then open the app and start using it.
-
-
How to use Google Play Store APK 6?
-
Browse and search for apps, games, books, and more
-
With Google Play Store APK 6 you can browse and search for apps, games, books, and more on your device. Use the navigation bar at the bottom of the screen to switch between categories such as Apps, Games, and Books, or type keywords into the search bar at the top to find what you are looking for. Filters and sorting options help you narrow down the results, and you can swipe left or right to move between sections such as Top Charts, Editors' Choice, and For You.
-
Download and update apps, games, books, and more
-
Google Play Store APK 6 also handles downloads and updates. To download an app, game, or book, tap on its icon or name; you will see a page with more information about it, such as the description, screenshots, ratings, and reviews. Tap on Install or Buy (for a paid item) and follow the instructions to complete the download. To update your items, tap on the Menu icon (three horizontal lines) at the top left corner of the screen, tap on My Apps & Games, and then tap on Update All or on Update next to each item that needs it.
-
Manage your account and settings
-
You can also manage your account and settings from within Google Play Store APK 6. Tap on the Menu icon (three horizontal lines) at the top left corner of the screen. Tap on Account to see your payment methods, subscriptions, rewards, order history, and more. Tap on Settings to adjust your preferences, notifications, parental controls, security, and other options. You can also tap on Help & Feedback to get support or send feedback to Google Play Store.
-
Conclusion
-
Google Play Store APK 6 is the latest version of Google Play Store, the official app store for Android devices. It offers millions of apps, games, books, and more for Android users. It also comes with new features and improvements that make browsing and searching easier and faster. You can download and install Google Play Store APK 6 on your device by following the steps we explained in this article. You can also use Google Play Store APK 6 to browse and search for apps, games, books, and more, download and update them, and manage your account and settings. We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.
-
FAQs
-
-
What is the difference between Google Play Store and Google Play Services?
-
Google Play Store is the app store for Android devices, where you can find and download apps, games, books, and more. Google Play Services is a background service that provides core functionality for Android devices, such as authentication, location, synchronization, etc. You need both Google Play Store and Google Play Services to use your Android device properly.
-
How can I update Google Play Store APK 6?
-
You can update Google Play Store APK 6 by downloading and installing the latest version of the APK file from a trusted source. You can also check for updates on your device by going to Settings > Apps > Google Play Store > App Details > Update.
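Deciding whether a downloaded APK is actually newer than the installed one comes down to comparing dotted version strings such as "6.0.5" numerically rather than alphabetically. A small sketch, assuming plain numeric dotted versions:

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '6.0.5' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def needs_update(installed: str, available: str) -> bool:
    """Return True if the available version is newer than the installed one."""
    return parse_version(available) > parse_version(installed)
```

Tuple comparison handles multi-digit components correctly, which naive string comparison would not (e.g. "6.0.10" vs "6.0.9").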
-
Is Google Play Store APK 6 safe to use?
-
Google Play Store APK 6 is safe to use if you download it from a trusted source and scan it with antivirus software before installing it. However, you should be careful when downloading and installing apps from unknown sources, as they may contain malware or viruses that can harm your device or data.
-
How can I uninstall Google Play Store APK 6?
-
You can uninstall Google Play Store APK 6 by going to Settings > Apps > Google Play Store > Uninstall. However, we do not recommend uninstalling Google Play Store APK 6, as it may cause problems with your device or other apps. If you have any issues with Google Play Store APK 6, you can try clearing its cache and data, or contacting its support team.
-
How can I contact Google Play Store support team?
-
You can contact Google Play Store support team by going to Menu > Help & Feedback on the app. You can also visit the official website of Google Play Store or call the toll-free number 1-855-466-4438.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md
deleted file mode 100644
index ce9c6a44ddcf64ea79a364a69bd835dbaceb26df..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md
+++ /dev/null
@@ -1,158 +0,0 @@
-
-
-
-
-
How to Download Messenger on Your PC or Mac
-
Do you want to stay connected with your friends and family on Facebook without using your phone or browser? If so, you might be interested in downloading Messenger on your PC or Mac. Messenger is a free all-in-one communication app that lets you send text, voice and video messages, make group calls, share files and photos, watch videos together, and more. In this article, we will show you how to download Messenger on your desktop device, how to use it, what are its benefits and drawbacks, and what are some alternatives you can try.
Features of Messenger Desktop App
-
Messenger desktop app has many features that make it a great choice for staying in touch with your loved ones. Here are some of them:
-
-
Text, audio and video calls: You can send unlimited text messages, make high-quality voice and video calls, and even record and send voice and video messages.
-
Group chats: You can create group chats with up to 250 people, add group admins, change group names and photos, and use @mentions to get someone's attention.
-
Privacy settings: You can control who can contact you, block unwanted messages and calls, report abusive behavior, and manage your active status.
-
Custom reactions: You can express yourself with more than just a thumbs up. You can choose from a variety of emojis and stickers to react to messages.
-
Chat themes: You can customize your chat background with different colors, gradients, and images to suit your mood or personality.
-
Watch together: You can watch videos from Facebook Watch, IGTV, Reels, TV shows, movies, and more with your friends and family in real time.
-
Stickers, GIFs and emojis: You can spice up your conversations with thousands of stickers, GIFs and emojis from the Messenger library or create your own.
-
Files, photos and videos: You can share files, photos and videos of any size and format with your contacts. You can also use the built-in camera to take selfies or capture moments.
-
Plans and polls: You can create plans and polls to organize events, get opinions, or make decisions with your group.
-
Location sharing: You can share your live location with your friends and family for a specified period of time or request their location.
-
Money transfer: You can send and receive money securely and easily with Facebook Pay (available in select countries).
-
Business chat: You can connect with businesses to get customer support, make reservations, shop online, and more.
-
-
How to Download Messenger Desktop App from Messenger.com
-
If you want to download Messenger desktop app from the official website, here are the steps you need to follow:
Go to Messenger.com in your web browser.
-
Click on Download for Windows or Download for Mac depending on your device.
-
Open the installer file and follow the instructions.
-
-
You will need to have Windows 10 or macOS 10.10 or higher to run the app. The app will automatically update itself when a new version is available.
-
How to Download Messenger Desktop App from Microsoft Store or App Store
-
If you prefer to download Messenger desktop app from the Microsoft Store or the App Store, here are the steps you need to follow:
-
link download messenger app for pc
-link download messenger lite apk
-link download messenger for mac
-link download messenger desktop app
-link download messenger video call
-link download messenger for windows 10
-link download messenger for android
-link download messenger for iphone
-link download messenger beta version
-link download messenger dark mode
-link download messenger stickers free
-link download messenger chat history
-link download messenger group chat
-link download messenger voice messages
-link download messenger games online
-link download messenger qr code scanner
-link download messenger rooms app
-link download messenger kids app
-link download messenger business account
-link download messenger new update
-link download messenger old version
-link download messenger without facebook
-link download messenger offline installer
-link download messenger web app
-link download messenger themes and colors
-link download messenger reactions and emojis
-link download messenger watch together feature
-link download messenger send money feature
-link download messenger chat with businesses feature
-link download messenger cross-app messaging feature
-link download messenger privacy settings feature
-link download messenger custom reactions feature
-link download messenger chat themes feature
-link download messenger record and send feature
-link download messenger express yourself feature
-link download messenger send files feature
-link download messenger plan and make it happen feature
-link download messenger send location feature
-link download messenger compatible across platforms feature
-how to get the link to download the Messenger app on Google Play Store?
-how to get the link to download the Messenger app on Apple App Store?
-how to get the direct link to download the Messenger app on your phone?
-how to get the latest version of the Messenger app by using the link to download it?
-how to get the best experience of using the Messenger app by following the instructions on the link to download it?
-how to get access to all the features of the Messenger app by clicking on the link to download it?
-how to get in touch with your friends and family on the Messenger app by using the link to download it?
-how to get more information about the Messenger app by visiting the official website on the link to download it?
-how to get help and support for the Messenger app by contacting the developer on the link to download it?
-
-
Go to Microsoft Store or App Store on your device.
-
Search for Messenger in the search bar.
-
Click on Get or Install and wait for the app to download.
-
-
You will need to have Windows 10 or macOS 10.12 or higher to run the app. The app will automatically update itself when a new version is available.
-
How to Use Messenger Desktop App
-
Once you have downloaded Messenger desktop app on your PC or Mac, you can start using it right away. Here are the steps you need to follow:
-
-
Launch the app from your desktop.
-
Log in with your Facebook account or create a new one if you don't have one already.
-
Start chatting with your friends and family by clicking on their names or searching for them in the search bar.
-
-
You can also access other features of the app by clicking on the icons at the top or bottom of the screen. For example, you can click on the video camera icon to start a video call, the phone icon to start a voice call, the plus icon to create a group chat, the settings icon to change your preferences, and so on.
-
Benefits of Using Messenger Desktop App
-
Messenger desktop app has many benefits that make it a convenient and enjoyable way to communicate with your loved ones. Here are some of them:
-
-
Larger screen: You can enjoy a bigger and clearer view of your conversations, photos, videos, and other content on your desktop screen. You can also resize the app window according to your preference.
-
Keyboard shortcuts: You can use your keyboard to perform various actions on the app, such as sending messages, starting calls, switching chats, and more. You can find the list of keyboard shortcuts by clicking on the settings icon and then on Keyboard Shortcuts.
-
Notifications: You can get notified of new messages and calls on your desktop, even when the app is minimized or closed. You can also customize your notification settings by clicking on the settings icon and then on Notifications.
-
Synced messages: You can access all your messages and chats across your devices, whether you use Messenger on your phone, tablet, browser, or desktop. You can also sync your contacts and preferences across your devices.
-
Dark mode: You can switch to dark mode to reduce eye strain and save battery life. You can toggle dark mode on or off by clicking on the settings icon and then on Dark Mode.
-
-
Drawbacks of Using Messenger Desktop App
-
Messenger desktop app also has some drawbacks that you should be aware of before using it. Here are some of them:
-
-
Requires internet connection: You need to have a stable internet connection to use the app. If you lose connection or have a slow network, you might experience delays, glitches, or errors.
-
Limited features compared to mobile app: The desktop app does not have some features that are available on the mobile app, such as stories, camera effects, games, and discover tab. You also cannot make group video calls with more than 50 people on the desktop app.
-
Data usage: The app uses data to send and receive messages and calls. Depending on your data plan and usage, you might incur additional charges from your internet service provider or carrier.
-
-
Tips and Tricks for Using Messenger Desktop App
-
To make the most out of Messenger desktop app, here are some tips and tricks you can try:
-
-
Change chat settings: You can change various chat settings by clicking on the info icon at the top right corner of any chat. For example, you can change the chat name, photo, color, emoji, or theme. You can also mute notifications, ignore messages, or block contacts from there.
-
Mute notifications: If you want to silence all notifications from the app, you can click on the settings icon and then on Notifications. You can choose to mute notifications for a specific period of time or until you turn them back on.
-
Archive or delete conversations: If you want to clean up your chat list, you can archive or delete conversations by right-clicking on them. Archiving a conversation will hide it from your chat list until you search for it or receive a new message from it. Deleting a conversation will remove it from your chat list permanently.
-
Use keyboard shortcuts: As mentioned earlier, you can use keyboard shortcuts to perform various actions on the app faster and easier. You can find the list of keyboard shortcuts by clicking on the settings icon and then on Keyboard Shortcuts.
-
-
Alternatives to Messenger Desktop App
-
If you are looking for other options to communicate with your friends and family on your desktop device, here are some alternatives you can try:
-
-
WhatsApp Desktop: WhatsApp is another popular messaging app owned by Facebook that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download WhatsApp Desktop from WhatsApp.com/download.
-
Skype: Skype is a well-known video calling app that also lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Skype from Skype.com/en/get-skype.
-
Telegram Desktop: Telegram is a secure and fast messaging app that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Telegram Desktop from Desktop.Telegram.org.
-
Signal Desktop: Signal is a privacy-focused messaging app that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Signal Desktop from Signal.org/download.
-
-
Conclusion
-
Messenger desktop app is a great way to communicate with your friends and family on your PC or Mac. It has many features that make it fun and easy to use, such as text, audio and video calls, group chats, custom reactions, chat themes, watch together, stickers, GIFs and emojis, files, photos and videos, plans and polls, location sharing, money transfer, and business chat. It also has some benefits over the mobile app, such as larger screen, keyboard shortcuts, notifications, synced messages, and dark mode. However, it also has some drawbacks, such as requiring internet connection, having limited features compared to the mobile app, and using data. You can download Messenger desktop app from Messenger.com, Microsoft Store, or App Store. You can also try some alternatives to Messenger desktop app, such as WhatsApp Desktop, Skype, Telegram Desktop, or Signal Desktop.
-
We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Messenger desktop app:
-
-
What are the system requirements for Messenger Desktop App?
-
The system requirements for Messenger desktop app are:
-
-
Windows 10 or macOS 10.10 or higher
-
At least 512 MB of RAM
-
At least 150 MB of free disk space
-
A stable internet connection
-
-
How can I update Messenger Desktop App?
-
Messenger desktop app will automatically update itself when a new version is available. You can also check for updates manually by clicking on the settings icon and then on About Messenger. If there is an update available, you will see a button to download and install it.
-
How can I log out of Messenger Desktop App?
-
To log out of Messenger desktop app, you can click on the settings icon and then on Log Out. You can also switch accounts by clicking on the settings icon and then on Switch Account.
-
How can I report a problem with Messenger Desktop App?
-
To report a problem with Messenger desktop app, you can click on the settings icon and then on Report a Problem. You can describe the issue you are facing and attach screenshots if possible. You can also send feedback or suggestions by clicking on the settings icon and then on Send Feedback.
-
How can I delete Messenger Desktop App?
-
To delete Messenger desktop app from your device, you can follow these steps:
-
-
For Windows: Go to Control Panel > Programs > Uninstall a Program. Find Messenger in the list and click on Uninstall.
-
For Mac: Go to Finder > Applications. Find Messenger in the list and drag it to the Trash.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md b/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md
deleted file mode 100644
index 82fe33c4685bf5f685de690af2dc73978a2cd675..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Knife Hit - Shooting Master: A Fun and Addictive Game for Android
-
If you are looking for a simple yet exciting game to play on your Android device, you might want to try Knife Hit - Shooting Master. This is a game where you have to tap to throw knives and hit the target. The more you hit, the more points you can get. But be careful, don't hit the other knives or you will lose. Sounds easy, right? Well, not so fast. The target will rotate, move, and change shape, making it harder to hit. And there are also boss levels where you have to defeat a giant fruit or a monster with your knives. Are you ready to test your skills and reflexes in this game? Let's find out more about it.
-
What is Knife Hit - Shooting Master?
-
Knife Hit - Shooting Master is a game developed by BlueGame Studio, a small indie team based in Vietnam. The game was released in 2022 and has been downloaded over 10 million times on Google Play Store. The game is rated 4.3 out of 5 stars by more than 100 thousand users who have enjoyed its gameplay, graphics, and sound effects.
The gameplay of Knife Hit - Shooting Master is very simple and intuitive. You just have to tap the screen to throw a knife at the target. The target can be a wooden log, a fruit, a cake, a pizza, or even a dinosaur. You have to hit the target as many times as possible without hitting the other knives that are already stuck on it. If you hit another knife, you will lose one life and have to start over. You have three lives in total, so be careful.
-
As you progress through the game, the target will rotate faster, move around, or change shape, making it harder to hit. You will also encounter boss levels where you have to defeat a giant fruit or a monster with your knives. These levels are more challenging and require more accuracy and timing. You will also get bonus points for hitting the center of the target or for hitting multiple targets in a row.
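The losing rule above, where a new knife strikes one already stuck in the target, is commonly modeled by comparing angular positions on the target's rim. A hedged sketch of that idea in Python (the 12° minimum gap is an assumed value for illustration, not taken from the actual game):

```python
MIN_GAP_DEG = 12.0  # assumed minimum angular gap between stuck knives


def angular_distance(a: float, b: float) -> float:
    """Smallest angle between two positions on the rim, in degrees."""
    diff = abs(a - b) % 360.0
    return min(diff, 360.0 - diff)


def throw(stuck_angles: list, hit_angle: float) -> bool:
    """Record the knife and return True if the throw lands safely."""
    if any(angular_distance(hit_angle, a) < MIN_GAP_DEG for a in stuck_angles):
        return False  # hit an existing knife: lose a life
    stuck_angles.append(hit_angle)
    return True
```

Because the target rotates, a real implementation would store angles relative to the target and add its current rotation at the moment of impact.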
-
Features of Knife Hit - Shooting Master
-
Knife Hit - Shooting Master is not just a simple tapping game. It also has many features that make it more fun and addictive. Here are some of them:
-
Different knives and targets
-
The game has over 100 different knives that you can collect and use in the game. Each knife has its own design, color, and shape. Some of them are realistic, like kitchen knives or daggers, while others are more creative, like pencils, scissors, or swords. You can unlock new knives by completing levels, earning coins, or watching ads.
-
The game also has over 50 different targets that you can hit with your knives. Each target has its own theme, like food, animals, or objects. Some of them are easy to hit, while others are tricky and require more skill. You can unlock new targets by completing levels or earning coins.
-
knife hit shooting master game download
-knife hit shooting master mod apk
-knife hit shooting master online
-knife hit shooting master free
-knife hit shooting master android
-knife hit shooting master hack
-knife hit shooting master cheats
-knife hit shooting master tips
-knife hit shooting master review
-knife hit shooting master gameplay
-knife hit shooting master app store
-knife hit shooting master ios
-knife hit shooting master pc
-knife hit shooting master windows
-knife hit shooting master mac
-knife hit shooting master linux
-knife hit shooting master chromebook
-knife hit shooting master bluestacks
-knife hit shooting master nox player
-knife hit shooting master emulator
-knife hit shooting master apk pure
-knife hit shooting master apk mirror
-knife hit shooting master apk combo
-knife hit shooting master apk online
-knife hit shooting master apk offline
-knife hit shooting master apk latest version
-knife hit shooting master apk update
-knife hit shooting master apk old version
-knife hit shooting master apk file download
-knife hit shooting master apk install
-knife hit shooting master apk uninstall
-knife hit shooting master apk size
-knife hit shooting master apk requirements
-knife hit shooting master apk features
-knife hit shooting master apk bugs
-knife hit shooting master apk fixes
-knife hit shooting master apk support
-knife hit shooting master apk contact
-knife hit shooting master apk feedback
-knife hit shooting master apk rating
-knife hit shooting master apk alternatives
-knife hit shooting master apk similar games
knife hit - throwing knives game apk download
-Hit Master 3D - Knife Assassin game download
-Hit Master 3D - Knife Assassin mod apk
-Hit Master 3D - Knife Assassin online
-Hit Master 3D - Knife Assassin free
-Hit Master 3D - Knife Assassin android
-Hit Master 3D - Knife Assassin hack
-
Boss levels and challenges
-
The game has 10 boss levels where you have to defeat a giant fruit or a monster with your knives. These levels are more difficult than the regular ones and require more knives to complete. You have to hit the boss multiple times until its health bar is empty. But be careful, the boss will also attack you with its own weapons or abilities. For example, the pineapple boss will shoot spikes at you, while the dragon boss will breathe fire at you. You have to dodge these attacks and hit the boss as fast as possible.
-
The game also has daily challenges where you can earn coins and rewards by completing various tasks, such as hitting a certain number of targets, hitting the center of the target, or hitting multiple targets in a row. These challenges are updated every day and give you more reasons to play the game.
-
Rewards and achievements
-
The game has many rewards and achievements that you can earn by playing the game. You can get coins, gems, stars, and chests by hitting the target, completing levels, or watching ads. You can use these items to unlock new knives, targets, or skins for your game. You can also get trophies by completing achievements, such as hitting 1000 targets, defeating 10 bosses, or collecting 50 knives. These trophies will show your progress and skills in the game.
-
Leaderboards and rankings
-
The game has leaderboards and rankings where you can compare your score and performance with other players around the world. You can see your rank in different categories, such as total score, highest level, most coins, or most knives. You can also see the top players in each category and try to beat their scores. You can also share your score and achievements with your friends on social media platforms, such as Facebook, Twitter, or Instagram.
-
Graphics and sound effects
-
The game has colorful, cartoonish graphics that make it appealing and enjoyable to play, with a variety of themes and backgrounds for each target, such as forest, desert, ocean, or space. Smooth animations and transitions keep the action looking dynamic. The sound design is just as polished: catchy, upbeat sound effects match the gameplay, cheerful and energetic background music changes according to the level and situation, and voice-overs add extra fun and humor.
-
How to download and install Knife Hit - Shooting Master APK
-
If you want to play Knife Hit - Shooting Master on your Android device, you have two options to download and install it. You can either download it from Google Play Store or from APKCombo or ApkOnline websites. Here are the steps for each option:
-
Download from Google Play Store
-
This is the easiest and safest way to download and install Knife Hit - Shooting Master on your device. You just have to follow these steps:
-
-
Open Google Play Store on your device.
-
Search for Knife Hit - Shooting Master in the search bar.
-
Select the game from the list of results.
-
Tap on Install button to download and install the game.
-
Wait for the installation to finish.
-
Tap on Open button to launch the game.
-
-
Download from APKCombo or ApkOnline
-
This is another way to download and install Knife Hit - Shooting Master on your device. You can use this option if you don't have access to Google Play Store or if you want to get the latest version of the game. You just have to follow these steps:
-
-
Open your browser on your device.
-
Go to APKCombo or ApkOnline website.
-
Search for Knife Hit - Shooting Master in the search bar.
-
Select the game from the list of results.
-
Tap on Download APK button to download the APK file of the game.
-
Wait for the download to finish.
-
-
Install the APK file on your device
-
After downloading the APK file of Knife Hit - Shooting Master from APKCombo or ApkOnline website, you have to install it on your device. You just have to follow these steps:
-
-
Go to Settings on your device.
-
Go to Security or Privacy section.
-
Enable Unknown Sources option to allow installation of apps from sources other than Google Play Store.
-
Go to Downloads or File Manager on your device.
-
Find and tap on the APK file of Knife Hit - Shooting Master that you downloaded earlier.
-
Tap on Install button to install the game.
-
Wait for the installation to finish.
-
Tap on Open button to launch the game.
-
-
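If an installation fails, it helps to know that an APK is just a ZIP archive containing an AndroidManifest.xml, so a corrupted or truncated download can often be spotted without installing it. A minimal sketch in Python (the helper name is illustrative, and this is only a heuristic, not a full validation):

```python
import zipfile


def looks_like_apk(path: str) -> bool:
    """Heuristic check: a valid APK is a ZIP archive with an AndroidManifest.xml."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        return "AndroidManifest.xml" in archive.namelist()
```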
Conclusion
-
Knife Hit - Shooting Master is a fun and addictive game for Android devices that will test your skills and reflexes in throwing knives at various targets. The game has many features that make it more enjoyable and challenging, such as different knives and targets, boss levels and challenges, rewards and achievements, leaderboards and rankings, graphics and sound effects. The game is easy to play and hard to master. You can download and install it from Google Play Store or from APKCombo or ApkOnline websites. If you are looking for a game that will keep you entertained and challenged, you should give Knife Hit - Shooting Master a try.
-
FAQs
-
Here are some frequently asked questions about Knife Hit - Shooting Master:
-
-
Q: How many levels are there in Knife Hit - Shooting Master?
-
A: There are 100 levels in Knife Hit - Shooting Master, plus 10 boss levels. You can replay any level you have completed to improve your score and earn more coins.
-
Q: How can I get more coins and gems in Knife Hit - Shooting Master?
-
A: You can get more coins and gems by hitting the target, completing levels, watching ads, or opening chests. You can also buy coins and gems with real money if you want to support the developers.
-
Q: How can I change the skin of my game in Knife Hit - Shooting Master?
-
A: You can change the skin of your game by tapping on the settings icon on the top right corner of the screen. You can choose from different themes, such as dark, light, neon, or rainbow. You can also unlock new skins by earning stars or buying them with gems.
-
Q: What are the benefits of logging in with Facebook in Knife Hit - Shooting Master?
-
A: Logging in with Facebook will allow you to save your progress and sync it across different devices. You will also be able to see your friends' scores and challenge them to beat your score. You will also get 100 gems as a bonus for logging in with Facebook.
-
Q: Is Knife Hit - Shooting Master safe to play for children?
-
A: Knife Hit - Shooting Master is a game that is suitable for all ages. The game does not contain any violence, blood, or gore. The game is also free to play and does not require any personal information or permissions. However, the game does contain ads and in-app purchases that may require parental supervision.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md b/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md
deleted file mode 100644
index 8520a8ea318b26f626d00ddd101edd79de1f9e97..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Download Solitario Premium APK: How to Enjoy the Classic Card Game on Your Android Device
-
Do you like solitaire? Do you want to play the classic card game on your Android phone or tablet? Do you want access to exclusive features and options you won't find in other versions? Then you will want to download the solitario premium apk, an application that lets you play solitaire for free, without ads, with high-quality graphics, relaxing music, daily challenges, custom themes, and much more. In this article, we cover everything you need to know about the solitario premium apk: its origin and history, its benefits for mental health, its features, its requirements, its installation steps, and tips for winning. Read on and get ready to enjoy the best solitaire on your Android device!
-
What Is Solitaire and Why Is It So Popular?
-
Solitaire, also known as patience or cabale, is a family of card games that can be played by a single person. The most common variant is Klondike, whose goal is to build four piles of cards, one per suit, in ascending order from ace to king; these piles are called the foundations. Cards are moved among seven tableau columns according to the rules covered later in this article.
Solitaire is popular for several reasons. First, it is very easy to learn: all you need is a deck of cards and a few simple rules. Second, it is entertaining and challenging, since solving it takes skill, strategy, and patience. Third, it is relaxing and therapeutic: it calms the mind, induces a meditative state, and improves memory and concentration. On top of that, solitaire has a long and fascinating history that makes it even more interesting.
-
The Origin and History of Solitaire
-
Solitaire has no definitive date of invention, but records can be traced back to the late 18th century in northern Europe and Scandinavia. The term "patiencespiel" first appeared in a German book published in 1788, and there are also references to solitaire in French literature. The game is believed to have originated as entertainment for the nobility and royalty, and it became popular in the 19th century with the appearance of the first solitaire books. Famous solitaire enthusiasts include Napoleon Bonaparte, Winston Churchill, Franklin D. Roosevelt, and Marcel Proust.
-
The Mental Health Benefits of Playing Solitaire
-
Playing solitaire is not only fun but also good for your mental health. Some of its benefits are the following:
-
-
It reduces stress and anxiety. Solitaire demands concentration and attention, which helps take your mind off problems and worries. It also has a calming, relaxing effect, accompanied by soft music and pleasant graphics.
-
It improves memory and mental agility. Solitaire involves remembering cards and positions, which exercises both short- and long-term memory. It also requires thinking ahead and planning moves, which sharpens reasoning and problem-solving skills.
-
It boosts self-esteem and confidence. Solitaire offers a personal challenge and a sense of satisfaction on completion. Solving a game brings a feeling of achievement and pride, which builds self-esteem and self-confidence.
-
It encourages patience and perseverance. A solitaire game cannot always be solved on the first try; sometimes it takes several attempts to find the solution. This teaches patience and perseverance, two important virtues in life.
-
-
What Is the Solitario Premium APK and What Are Its Advantages?
-
The solitario premium apk is an application for Android devices that lets you play solitaire for free, without ads, with high-quality graphics, relaxing music, daily challenges, custom themes, and much more. It is an enhanced version of classic solitaire that offers a unique gaming experience. Some of its advantages are the following:
-
Features and Functions of the Solitario Premium APK
-
The solitario premium apk has a number of features and functions that set it apart from other versions of solitaire. Here are some of them:
-
-
High-quality graphics, with realistic, detailed visual effects. The cards have an elegant, classic design with several styles to choose from, and the game background can also be changed to the user's taste.
-
Relaxing music, with soft, calm melodies accompanying the game. The music can be adjusted or muted as preferred.
-
Daily challenges, with varied difficulty levels that test the player's skills. A new challenge is available each day, with special rewards for completing it.
-
Custom themes, with different colors and backgrounds for each season or holiday. Users can pick their favorite theme or switch it to match their mood.
-
Alternative game modes, such as timed mode, Vegas mode, and expert mode. These modes add variety and fun, with different rules and scoring.
-
Statistics and achievements, with data on the player's performance. Users can review their records, wins, losses, average time, success rate, and more, as well as the achievements they have earned and those still to unlock.
-
Customization options, with settings to adapt the game to the user's preferences: card size, movement type, sound, notifications, language, and more.
-
Support and updates, with a developer team on hand to resolve any problem or question the user may have. The solitario premium apk is also updated regularly with new features and improvements.
-
-
Requirements and Steps to Download and Install the Solitario Premium APK
-
The solitario premium apk can be downloaded and installed easily on any Android device. These are the requirements and steps to follow:
-
-
The main requirement is an Android device running version 4.4 (KitKat) or higher. You also need an internet connection and enough storage space.
-
The first step is to download the solitario premium apk file from a safe, trustworthy website. You can use the following link: [Descargar solitario premium apk].
-
The second step is to enable the "Unknown sources" option on the Android device. This option allows installing applications that do not come from the official Google Play store. To do so, go to Settings > Security > Unknown sources and turn it on.
-
The third step is to locate the downloaded apk file on the device, usually in the Downloads or Files folder. Once found, tap it to start the installation.
-
The fourth step is to follow the on-screen instructions to complete the installation, accepting the app's permissions and terms of use.
-
The fifth step is to open the application and enjoy the solitario premium apk on your Android device.
-
-
How to Play the Solitario Premium APK and What Tips Help You Win?
-
The solitario premium apk is very easy to play but very hard to win, so it is important to know the game's objective, rules, and strategies. Here are some tips that will improve your solitaire game:
-
-
The Objective and Rules of Solitaire
-
The objective of solitaire is to build four piles of cards, one per suit, in ascending order from ace to king. These piles are called the foundations. To achieve this, you move cards among seven columns, or tableau piles, formed by dealing the cards face down. Only the face-up card in each column can be moved, and it must be placed on a card of a different color and one rank higher. For example, a five of spades or a five of clubs can be placed on a six of hearts. When no more moves are possible, you may draw a card from the stock, the reserve pile set aside at the start. The game ends when the four foundations are complete or when no more moves are possible.
-
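The tableau rule above (opposite color, exactly one rank lower) is easy to express in code. The sketch below is a minimal illustration of that rule only; it is not taken from the app, and the function and card representation are invented for this example.

```python
# Klondike tableau rule sketch: a face-up card may be placed on a card of
# the opposite color whose rank is exactly one higher.

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
RED_SUITS = {"hearts", "diamonds"}

def can_stack_on_tableau(card, target):
    """card and target are (rank, suit) tuples, e.g. ("5", "spades")."""
    card_rank, card_suit = card
    target_rank, target_suit = target
    # Colors must differ (one red, one black).
    opposite_color = (card_suit in RED_SUITS) != (target_suit in RED_SUITS)
    # The moved card must be exactly one rank below the target.
    one_lower = RANKS.index(card_rank) + 1 == RANKS.index(target_rank)
    return opposite_color and one_lower

# The article's example: a five of spades may go on a six of hearts.
print(can_stack_on_tableau(("5", "spades"), ("6", "hearts")))    # True
print(can_stack_on_tableau(("5", "diamonds"), ("6", "hearts")))  # False: same color
```

The same shape of check, with "same suit, one rank higher" instead, would validate moves onto the foundations.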
Strategies and Tricks to Improve Your Solitaire Game
-
Although solitaire depends heavily on luck, there are strategies and tricks that can increase your chances of winning. Here are some of them:
-
-
Play cards from the stock, or reserve pile, first. This gives you more options and possibilities for moving the cards in the columns.
-
Don't fill empty spaces with kings. Doing so limits your possible moves and blocks columns. It is better to wait for a low card or an ace to fill the empty spaces.
-
Don't move cards to the foundations too early. This can prevent you from moving other cards that lie beneath them or that you need to build sequences. It is better to wait until you have a good number of ordered cards in the columns before moving them to the foundations.
-
Keep track of the suits and values of the cards. This helps you plan moves in advance and avoid mistakes or dead ends. It pays to know which cards have yet to appear and which cards can or cannot be moved.
-
Use the undo button when needed. It lets you take back moves made by mistake or that turned out badly. The solitario premium apk has an unlimited undo button, which makes the game easier.
-
-
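An "unlimited undo" button like the one mentioned above is typically implemented with a history stack: snapshot the game state before each move and pop one snapshot per undo. The app's actual implementation is not documented; the class below is a generic sketch with invented names.

```python
import copy

class UndoableGame:
    """Generic undo sketch: history grows without bound, so undo is unlimited."""

    def __init__(self, state):
        self.state = state
        self.history = []  # stack of previous states

    def apply_move(self, move):
        """move is a function mapping the current state to the next state."""
        self.history.append(copy.deepcopy(self.state))  # snapshot before moving
        self.state = move(self.state)

    def undo(self):
        if self.history:  # no-op when there is nothing to undo
            self.state = self.history.pop()

# Toy usage with a made-up state: draw a card, play to a foundation, undo once.
game = UndoableGame({"stock": 24, "foundations": 0})
game.apply_move(lambda s: {**s, "stock": s["stock"] - 1})
game.apply_move(lambda s: {**s, "foundations": s["foundations"] + 1})
game.undo()
print(game.state)  # back to {'stock': 23, 'foundations': 0}
```

Storing deep copies keeps the sketch simple; a real game with large states would more likely store reversible move records instead of full snapshots.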
Conclusion
-
Solitaire is a classic, popular, and fun card game that you can play on any Android device thanks to the solitario premium apk. This application offers an enhanced version of solitaire, with high-quality graphics, relaxing music, daily challenges, custom themes, and many more features and options. Solitaire also benefits mental health: it reduces stress, improves memory, boosts self-esteem, and encourages patience. To play the solitario premium apk, you only need to download and install the apk file from a safe, trustworthy website and follow the game's rules and strategies. If you like solitaire, don't hesitate to download the solitario premium apk and enjoy the best card game on your Android device.
-
Frequently Asked Questions
-
Below are answers to some of the most frequent questions about the solitario premium apk:
-
Is it safe to download and install the solitario premium apk?
-
Yes, it is safe as long as the apk file is downloaded and installed from a secure, trustworthy website. The solitario premium apk contains no viruses or malware that could damage the Android device or compromise the user's privacy.
-
Is it legal to download and install the solitario premium apk?
-
Yes, it is legal as long as the copyright and the app's terms of use are respected. The solitario premium apk is a free application that does not infringe any current law or regulation.
-
What is the difference between the solitario premium apk and classic solitaire?
-
The main difference is that the solitario premium apk offers an enhanced version of classic solitaire, with high-quality graphics, relaxing music, daily challenges, custom themes, and many more features and options. Classic solitaire is a simpler, more basic version of the card game.
-
What other card games can be played with the solitario premium apk?
-
The solitario premium apk includes other card games that can be played with the same 52-card deck, such as Spider Solitaire, FreeCell Solitaire, Pyramid Solitaire, TriPeaks Solitaire, and Golf Solitaire.
-
How can I contact the developer team of the solitario premium apk?
-
You can contact the developer team of the solitario premium apk by email at [email protected] or through Facebook, Twitter, and Instagram. The team is available to resolve any problem or question the user has about the application.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md b/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md
deleted file mode 100644
index 298dbbd77e93d1410b4658eaa2ef8ba6b5275621..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
Truck Driver Crazy Road APKPure: A Challenging and Fun Driving Game
-
Do you love driving trucks and trailers on rough and bumpy roads? Do you want to experience the thrill and excitement of transporting cargo across different locations? If yes, then you should try Truck Driver Crazy Road APKPure, a realistic and fun truck driving game that will push you to your limits. In this article, we will tell you everything you need to know about this game, including what it is, what its features are, how to play it, and why you should download it.
Truck Driver Crazy Road APKPure is a truck driving game that will test your balancing skills and your patience. You will have to drive uphill along roads littered with rocks and debris. You will also have to face different weather conditions, such as rain, snow, fog, and night. You will have to deliver your cargo safely and on time, without losing or damaging it, while dealing with traffic, narrow bridges, sharp turns, steep slopes, and other obstacles on your way.
-
A free and easy-to-download app from APKPure
-
Truck Driver Crazy Road APKPure is a free app that you can download from APKPure, a website that offers safe and fast downloads of Android apps and games. You don't need to register or sign up to download this app. You just need to click on the download button and install it on your device. The app has a size of about 100 MB and requires Android 4.1 or higher to run. The app is updated regularly with bug fixes and improvements.
-
What are the features of Truck Driver Crazy Road APKPure?
-
Four different game modes
-
Truck Driver Crazy Road APKPure has four different game modes that you can choose from, depending on your preference and mood. They are:
Delivery mode: In this mode, you have to deliver your cargo from one point to another within a given time limit. You have to be careful not to lose or damage your cargo on the way.
-
Parking mode: In this mode, you have to park your truck and trailer in a designated spot without hitting anything. You have to be precise and accurate in your movements.
-
Garage mode: In this mode, you can customize your truck and trailer with different colors, wheels, lights, horns, and stickers. You can also upgrade your engine, brakes, suspension, tires, and fuel tank.
-
Free mode: In this mode, you can drive freely on any map without any time limit or task. You can explore the environment and enjoy the scenery.
-
-
Various trucks and trailers to choose from
-
Truck Driver Crazy Road APKPure has a variety of trucks and trailers that you can choose from, each with its own characteristics and performance. You can unlock more trucks and trailers by completing tasks and earning coins. Some of the trucks and trailers that you can drive are:
-
-
| Truck | Trailer |
| --- | --- |
| Red truck | Wooden trailer |
| Blue truck | Metal trailer |
| Green truck | Oil tanker |
| Yellow truck | Cement mixer |
| Black truck | Container trailer |
-
Stunning graphics and sound effects
-
Truck Driver Crazy Road APKPure has stunning graphics and sound effects that will make you feel like you are driving a real truck. The game has realistic 3D models of trucks and trailers, as well as detailed environments and landscapes. You can see the mountains, forests, rivers, bridges, buildings, and roads on your way. You can also hear the engine sound, the horn, the brakes, the tires, and the cargo noise. The game also has dynamic lighting and shadows, as well as weather effects such as rain, snow, fog, and night.
-
Realistic physics and weather conditions
-
Truck Driver Crazy Road APKPure has realistic physics and weather conditions that will affect your driving experience. The game has a realistic simulation of gravity, inertia, friction, and collision. You will have to balance your truck and trailer on the uneven and slippery roads. You will also have to adjust your speed and direction according to the wind, rain, snow, fog, and night. You will have to be careful not to tip over or crash your truck and trailer.
-
How to play Truck Driver Crazy Road APKPure?
-
Use the on-screen controls to steer, accelerate, brake, and horn
-
To play Truck Driver Crazy Road APKPure, you have to use the on-screen controls to steer, accelerate, brake, and horn. You can choose between two types of controls: tilt or buttons. You can also adjust the sensitivity and position of the controls in the settings menu. The controls are easy to use and responsive.
-
Follow the arrow to reach your destination
-
To complete your task in Truck Driver Crazy Road APKPure, you have to follow the arrow that shows you the direction to your destination. You have to drive carefully and avoid getting lost or stuck on the way. You have to reach your destination within the time limit and without losing or damaging your cargo.
-
Avoid obstacles and collisions on the road
-
To drive safely in Truck Driver Crazy Road APKPure, you have to avoid obstacles and collisions on the road. You have to watch out for other vehicles, pedestrians, animals, rocks, trees, poles, signs, barriers, and other objects that can block or damage your truck and trailer. You have to keep a safe distance from them and use your horn to warn them. You also have to obey the traffic rules and signals.
-
Complete the tasks and earn coins
-
To progress in Truck Driver Crazy Road APKPure, you have to complete the tasks and earn coins. You have to deliver your cargo from one point to another or park your truck and trailer in a designated spot. You have to do it within the time limit and without losing or damaging your cargo. You will earn coins based on your performance and speed. You can use the coins to unlock more trucks and trailers or customize them in the garage mode.
-
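The article says coins scale with performance and speed but gives no formula, so the function below is a purely hypothetical example of such a reward scheme; the name, weights, and inputs are all invented for illustration.

```python
# Hypothetical coin-reward formula: rewards scale with time left and cargo
# condition. The real game's formula is not documented in the article.

def coins_earned(time_limit_s, time_used_s, cargo_intact_pct, base_reward=100):
    if time_used_s > time_limit_s or cargo_intact_pct <= 0:
        return 0  # late delivery or destroyed cargo earns nothing
    time_bonus = (time_limit_s - time_used_s) / time_limit_s  # 0.0 .. 1.0
    return int(base_reward * (cargo_intact_pct / 100) * (1 + time_bonus))

print(coins_earned(300, 240, 100))  # fast, undamaged delivery -> 120
print(coins_earned(300, 300, 50))   # just in time, half-damaged -> 50
```

Any scheme of this shape gives the behavior the article describes: faster, cleaner deliveries earn more coins.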
Why should you download Truck Driver Crazy Road APKPure?
-
Test your driving skills and patience
-
If you want to test your driving skills and patience, you should download Truck Driver Crazy Road APKPure. This game will challenge you with its realistic and difficult driving scenarios. You will have to master the art of balancing your truck and trailer on the rough and bumpy roads. You will also have to cope with the changing weather conditions and traffic situations. You will have to be careful not to lose or damage your cargo on the way.
-
Enjoy the scenic views and challenging terrains
-
If you want to enjoy the scenic views and challenging terrains, you should download Truck Driver Crazy Road APKPure. This game will take you to different locations with beautiful landscapes and environments. You will see the mountains, forests, rivers, bridges, buildings, and roads on your way. You will also face different terrains such as hills, valleys, plains, deserts, snowfields, swamps, and more.
-
Have fun and relax with this addictive game
-
If you want to have fun and relax with this addictive game, you should download Truck Driver Crazy Road APKPure. This game will keep you entertained for hours with its four different game modes and various trucks and trailers. You can play this game anytime and anywhere without any internet connection. You can also share your scores and achievements with your friends and family on social media. You can also rate and review this game on APKPure and give your feedback to the developers.
-
Conclusion
-
Truck Driver Crazy Road APKPure is a challenging and fun driving game that will make you feel like a real truck driver. You will have to drive through different locations and weather conditions, deliver your cargo safely and on time, avoid obstacles and collisions, and customize your truck and trailer. You will also enjoy the stunning graphics and sound effects, the realistic physics and simulation, and the four different game modes. You can download this game for free from APKPure and have fun and relax with this addictive game.
-
FAQs
-
What are the minimum requirements to play Truck Driver Crazy Road APKPure?
-
To play Truck Driver Crazy Road APKPure, you need an Android device with version 4.1 or higher, a storage space of about 100 MB, and an internet connection to download the app.
-
How can I change the language of Truck Driver Crazy Road APKPure?
-
To change the language of Truck Driver Crazy Road APKPure, you can go to the settings menu and select the language option. You can choose from English, Russian, Turkish, German, Spanish, French, Italian, Portuguese, Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, and Vietnamese.
-
How can I contact the developers of Truck Driver Crazy Road APKPure?
-
To contact the developers of Truck Driver Crazy Road APKPure, you can visit their website at http://games89.com/ or their Facebook page at https://www.facebook.com/Games89com-100900695181173/. You can also email them at games89com@gmail.com.
-
What are some tips and tricks to play Truck Driver Crazy Road APKPure?
-
Some tips and tricks to play Truck Driver Crazy Road APKPure are:
-
-
Use the brake wisely to avoid skidding or sliding on the slippery roads.
-
Use the horn to warn other vehicles or pedestrians on your way.
-
Use the camera button to change the view angle and see your surroundings better.
-
Use the map button to see your location and destination.
-
Use the pause button to pause or resume the game.
-
-
What are some similar games to Truck Driver Crazy Road APKPure?
-
Some similar games to Truck Driver Crazy Road APKPure are:
-
-
Truck Simulator 2018: Europe by Zuuks Games
-
Truck Simulator USA by Ovidiu Pop
-
Euro Truck Driver 2018 by Ovidiu Pop
-
Offroad Cargo Transport Simulator by Game Pickle
-
Cargo Transport Simulator by SkisoSoft
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md b/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md
deleted file mode 100644
index 776e7bd8db791466de56de0732673298b2d01b7f..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-It can help maintain and free up your RAM and reduce the load on your CPU. To use this cleaner, just enter its license key and you can begin using it immediately.
-
-Clean Master License Key is an excellent program that lets you examine your PC's health. Below, we look at its various features.
-
-Thanks for visiting our site. Here you will find the latest version of Clean Master License Key.
-
-Clean Master License Key 2020 Crack
-
-Clean Master License Key is an outstanding utility that lets you inspect your PC and its RAM. It is a complete tool that provides numerous features, including a clever algorithm used to identify and remove abnormal files. You can also clean your programs, RAM, and files: simply launch Clean Master and run the functions you want. The program has an uncomplicated interface, so users can operate it on their own. It lets you select the files and groups you want to delete, and remove anything likely to cause a problem for your PC's health.
-
-Clean Master License Key can also locate the junk files occupying your PC's internal storage, which makes cleaning your PC rather easy. You can now restore your computer to its finest state.
-
-Clean Master License Key Features:
-
-More, Clean Master License Key contains various tools to clean the various things that can spoil the health of your PC. Some of these tools are:
-
-Uninstaller
-
-A tool that allows you to easily uninstall programs that are useless.
-
-Optimizer
-
-A program that can improve the performance of your computer.
-
-System Cleaner
-
-This can identify and remove the junk files.
-
-Real-time Scanner
-
-Keeps watch over the PC's performance by scanning it in real time.
-
-System Guard
-
-Can protect your computer from all kinds of threats.
-
-System Resurrector
-
-Can fix many problems in your pc.
-
-System Memory Booster
-
-You can enhance the performance of your computer using this tool.
-
-What’s New In The Latest Version?
-
-The latest version of Clean Master License Key enables you to clean your computer quickly and efficiently.
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md b/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md
deleted file mode 100644
index 6adbbb1e110047c28c5ff570db7e4bd948807912..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Project Gutenberg eBooks may be freely used in the United States because most are not protected by U.S. copyright law. They may not be free of copyright in other countries. Readers outside of the United States must check the copyright terms of their countries before accessing, downloading or redistributing eBooks. We also have a number of copyrighted titles, for which the copyright holder has given permission for unlimited non-commercial worldwide use.
This is where you can pick up your free downloads of Crafting Unforgettable Characters, as well as the bonus books the Complete Outline Transcript of Storming and 5 Secrets of Story Structure and my free Scrivener template.
-
Visit the Overdrive website or our online catalog for Overdrive ebooks available for Kindle devices. Libby will offer you an option to read in the Libby app or if you would like the book downloaded to your Kindle device.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md b/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md
deleted file mode 100644
index 07b80e4aa45ac3db70f6911b98cdd41e54e121e6..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
The SIT formed to probe the Dinanagar attack believes that the strike was undertaken by the Laskhar-e-Toiba, while the attack on the Pathankot Air Force Station was the handiwork of Jaish-e-Mohammad. Both are Pakistan-based terror outfits and had used Mastgarh village route to enter India.
A counterpart of the principle of least action in Nature is that attackers in human conflict follow the path of least resistance. Thus, Sun Tzu notes: Now an army may be likened to water, for just as water avoids heights and hastens to the lowlands, so an army avoids strength and strikes weakness. For attacks by terrorists, cyber hackers or warring states, quantitative risk modelling is unified by the principles of adversarial conflict, such as those laid out by Sun Tzu. The well-defined principles underlying quantitative terrorism risk modelling minimize the need to resort to expert judgement (Woo 2011, 2015). Within the bounds defined by the Western counter-terrorism environment, terrorists maximize their operational utility by abiding by the classic principles of terrorist modus operandi: substituting hardened targets; following the path of least resistance in weapon selection; and leveraging their scarce resources to achieve the greatest impact. The metric for impact includes not just loss inflicted but also the media attention gained. An insightful ISIS slogan is that media is half Jihad. Media coverage is essential for terrorist recruitment and funding, as well as for propaganda. This is so important that in 2002, Osama bin Laden wrote that the media war may reach 90% of the preparation for battles (Awan 2016).
-
CM Terrorism Crisis Protocols are now not only necessary for airlines, government buildings and mass transit organizations, but for businesses as a whole, from college campuses to nightclubs to movie theaters. The unfortunate and tragic reality is this: Terrorism and mass calamity can strike anywhere at any time.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md b/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md
deleted file mode 100644
index e87cda613216499dd1b4c29cfcf52e213553c10b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Fsdreamteam Gsx Fsx 1.9.0.9 Crack: How to Download and Install
-
Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that lets you add realistic ground services to your flight simulator. GSX stands for Ground Services X, and it is a product of Fsdreamteam, a company that specializes in developing add-ons for flight simulators. GSX works with both FSX and P3D, and it simulates various ground operations, such as marshalling, catering, boarding, refueling, and pushback. GSX also features many native FSX animations and believable human characters.
-
If you are a fan of flight simulation, you might want to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack to enhance your experience and immersion. However, downloading and installing Fsdreamteam Gsx Fsx 1.9.0.9 crack is not as easy as it sounds. You need to find a reliable and safe source, follow the instructions carefully, and avoid any errors or issues that may occur. In this article, we will show you how to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack step by step.
Where to Download Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
There are many websites that offer Fsdreamteam Gsx Fsx 1.9.0.9 crack for download, but not all of them are trustworthy and secure. Some of them may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Therefore, you need to be careful and choose only reputable and verified websites that provide Fsdreamteam Gsx Fsx 1.9.0.9 crack for download.
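Even when a site looks reputable, downloads can be tampered with in transit or re-uploaded with modifications. One basic (though by itself insufficient) precaution is to compare the downloaded file's SHA-256 hash against a hash published by the source, if one is available. A minimal sketch using Python's standard hashlib; the filename is only an example:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against whatever hash the download page publishes:
# ok = sha256_of(Path("gsx_setup.zip")) == published_hash
```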
-
Here are some of the websites that we recommend:
-
-
FS Nusantara: This website provides Fsdreamteam Gsx Fsx 1.9.0.9 crack as a direct download.
-
YouTube: This website provides Fsdreamteam Gsx Fsx 1.9.0.9 crack for download in video format, with instructions and proof.
-
Woodys Wags Grooming /boarding: This website provides Fsdreamteam Gsx Fsx 1.9.0.9 crack for download in zip file format, with a link to a tutorial.
-
-
How to Download Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
To download Fsdreamteam Gsx Fsx 1.9.0.9 crack from the websites mentioned above, you need to follow these steps:
-
-
Visit the website of your choice and search for Fsdreamteam Gsx Fsx 1.9.0.9 crack.
-
Select the file that you want to download and click on the download link or button.
-
You may be redirected to another page or website that contains the download link or button.
-
You may need to complete a captcha or a verification process to prove that you are not a robot.
-
You may need to wait for a few seconds or minutes before the download starts.
-
Choose the location where you want to save the file on your device and click on save.
-
Wait for the download to finish and extract the file if it is in zip format.
-
-
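Step 7 above mentions extracting the file if it is a zip. If you script this, it is worth guarding against "zip-slip" archives whose entry names climb out of the destination folder. A minimal sketch using Python's standard zipfile module (the archive path is illustrative):

```python
import zipfile
from pathlib import Path

def safe_extract(archive: Path, dest: Path) -> list[str]:
    """Extract a zip archive, refusing entries that would escape dest."""
    extracted = []
    with zipfile.ZipFile(archive) as zf:
        for info in zf.infolist():
            # Resolve the would-be target and make sure it stays under dest.
            target = (dest / info.filename).resolve()
            if not target.is_relative_to(dest.resolve()):
                raise ValueError(f"unsafe path in archive: {info.filename}")
            zf.extract(info, dest)
            extracted.append(info.filename)
    return extracted
```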
How to Install Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
To install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator, you need to follow these steps:
-
-
Make sure that you have FSX or P3D installed on your device.
-
Run the Fsdreamteam Gsx Fsx 1.9.0.9 crack file that you have downloaded and extracted.
-
Follow the instructions on the screen and choose the destination folder where you want to install GSX.
-
Wait for the installation to finish and launch your flight simulator.
-
Enjoy using GSX with realistic ground services on your flights.
-
-
What are the Features and Benefits of Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that provides many features and benefits, such as:
-
-
It works with every FSX and P3D airport, both default and third-party, even those not released yet.
-
It supports all default FSX and P3D airplanes and many popular third-party airplanes, such as PMDG, Aerosoft, Captain Sim, Quality Wings, and more.
-
It offers vehicles in many different types and sizes, depending on the airplane and airport in use.
-
It has many sound effects and supports 3D surround sound with OpenAL.
-
It has realistic human animations using FSX bones and skin meshes.
-
It has an easy to use user interface, fully integrated in FSX and P3D using standard ATC-like menus.
-
It has an easy user-customization of vehicles, using the provided paint kit.
-
It has a live update feature that keeps GSX always updated automatically, with new supported airplanes and airports.
-
It has a direct airplane interface that allows interaction with complex third-party airplanes featuring custom door controls, ground equipment, and more.
-
It has a support for full airport customization, already enabled with all FSDT sceneries and some third-party sceneries, allowing better integration with any airport.
-
-
Conclusion
-
In this article, we have shown you how to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator. We have also provided you with some tips to download it safely and quickly. We have also discussed the features and benefits of Fsdreamteam Gsx Fsx 1.9.0.9 crack and how it can enhance your flight simulation experience and immersion. We hope that this article has been helpful for you and that you have enjoyed using Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flights. If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!
-
-
What are the Requirements and Precautions for Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
Before you download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator, you need to make sure that you meet the following requirements and precautions:
-
-
You need to have FSX or P3D installed on your device, with the latest updates and service packs.
-
You need to have enough disk space and memory to run GSX smoothly and without errors.
-
You need to have a good internet connection and a compatible device to download GSX from the source websites.
-
You need to have a backup of your original files and settings, in case something goes wrong or you want to uninstall GSX.
-
You need to be aware of the legal and ethical issues of downloading and using cracked software, and the possible consequences that may arise.
-
You need to be careful and cautious of the source websites that you choose to download GSX from, and scan your device for any viruses, malware, or pop-up ads that may harm your device or compromise your privacy.
-
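The backup precaution above can be scripted. A minimal sketch that copies the whole simulator folder to a timestamped directory before anything is changed; both paths are placeholders you would adjust for your own install:

```python
import shutil
import time
from pathlib import Path

def backup_sim_files(sim_root: Path, backup_root: Path) -> Path:
    """Copy the simulator folder into a new timestamped backup directory."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"sim-backup-{stamp}"
    shutil.copytree(sim_root, dest)  # fails if dest already exists
    return dest

# Example (placeholder paths):
# backup_sim_files(Path(r"C:\FSX"), Path(r"D:\Backups"))
```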
-
What are the Reviews and Ratings of Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
Fsdreamteam Gsx Fsx 1.9.0.9 crack has received many positive reviews and ratings from users and critics alike. Here are some of the reviews and ratings that we have found:
-
-
"GSX is a must-have for any flight simulator enthusiast. It adds so much realism and immersion to your flights, with realistic ground services and operations. It works with every airport and airplane, and it is easy to use and customize. I highly recommend it." - User review on YouTube
-
"Fsdreamteam Gsx Fsx 1.9.0.9 crack is a great software that enhances your flight simulation experience with ground services. It is compatible with FSX and P3D, and it supports many third-party airplanes and sceneries. It has many features and benefits, such as vehicles, sound effects, human animations, user interface, live update, direct airplane interface, and airport customization. It is easy to download and install, and it works flawlessly." - User review on Woodys Wags Grooming /boarding
-
"Fsdreamteam Gsx Fsx 1.9.0.9 crack is one of the best add-ons for flight simulators. It simulates various operations on the ground, such as marshalling, catering, boarding, refueling, pushback, and more. It has many vehicles in different types and sizes, depending on the airplane and airport in use. It has an amazing sound quality and realistic human characters. It has an intuitive user interface and a live update feature that keeps it updated automatically. It is a must-have for any flight simulator fan." - User review on FS Nusantara
-
-
What are the FAQs and Answers for Fsdreamteam Gsx Fsx 1.9.0.9 Crack?
-
Fsdreamteam Gsx Fsx 1.9.0.9 crack may raise some questions and doubts for current and potential users. Here are some frequently asked questions and answers:
-
-
Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack legal and ethical?
-A: No. Fsdreamteam Gsx Fsx 1.9.0.9 crack violates the copyright and license agreement of Fsdreamteam, the original developer of GSX. It is therefore illegal and unethical to download and use it, and doing so may result in legal consequences.
-
Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack safe and secure?
-A: No. Fsdreamteam Gsx Fsx 1.9.0.9 crack may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy, and it may cause technical problems or errors on your device.
-
Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack compatible with my device and simulator?
-A: Fsdreamteam Gsx Fsx 1.9.0.9 crack works with both FSX and P3D, and it supports all default and third-party airplanes and airports. However, it may not work properly, or at all, on some devices or simulators, depending on their specifications and settings.
-
Q: How can I uninstall Fsdreamteam Gsx Fsx 1.9.0.9 crack?
-A: To uninstall Fsdreamteam Gsx Fsx 1.9.0.9 crack from your device and simulator, you need to follow these steps:
-- Delete the GSX folder from your simulator's main folder.
-- Delete the Addon Manager folder from your simulator's main folder.
-- Delete the Couatl folder from your simulator's main folder.
-- Delete the Couatl_Updater.exe file from your simulator's main folder.
-- Delete the GSX entry from your simulator's scenery library.
-- Restore your original files and settings from your backup.
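The file-deletion steps above can be sketched as a small script. The simulator path is a placeholder, and the last two steps (removing the GSX scenery-library entry and restoring your backup) remain manual:

```python
import shutil
from pathlib import Path

# Folders and files the uninstall steps say to delete from the
# simulator's main folder (names taken from the steps above).
TARGETS = ["GSX", "Addon Manager", "Couatl", "Couatl_Updater.exe"]

def uninstall_gsx(sim_root: Path) -> list[str]:
    """Delete each GSX-related entry under sim_root; return what was removed."""
    removed = []
    for name in TARGETS:
        path = sim_root / name
        if path.is_dir():
            shutil.rmtree(path)
            removed.append(name)
        elif path.is_file():
            path.unlink()
            removed.append(name)
    return removed

# Example (placeholder path):
# uninstall_gsx(Path(r"C:\FSX"))
```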
-
-
Conclusion
-
-In this article, we have shown you how to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator, along with some tips for downloading it safely and quickly. We have discussed its features and benefits and how it can enhance your flight simulation experience and immersion, addressed some of the challenges and alternatives to downloading and using it, and answered some frequently asked questions and doubts about it.
-
We hope that this article has been helpful for you and that you have enjoyed using Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flights. However, we also advise you to be aware of the legal and ethical issues of downloading and using cracked software, and the possible consequences that may arise. We also recommend you to support the original developer of GSX, Fsdreamteam, by purchasing their product legally and ethically.
-
If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md b/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md
deleted file mode 100644
index 1b908e000d1b39f51aee1634ed6101a04a76c2b9..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md
+++ /dev/null
@@ -1,38 +0,0 @@
-Hiljadu Cudesnih Sunaca Haled Hosseini.pdf
-
-
-
-Download File > [https://maudaracte.blogspot.com/?file=2tvJde](https://maudaracte.blogspot.com/?file=2tvJde)
-
-
-
-
-
-
-
-
-
-Hiljadu Cudesnih Sunaca: A Review of Haled Hosseini's Novel
-Hiljadu Cudesnih Sunaca (A Thousand Splendid Suns) is a novel by Afghan-American author Haled Hosseini, published in 2007. It tells the story of two women, Mariam and Laila, who suffer from the oppression and violence of the Taliban regime in Afghanistan. The novel explores themes such as love, friendship, family, courage, sacrifice, and resilience in the face of hardship.
-In this article, we will review the novel and its main characters, plot, style, and message. We will also provide some information about the author and his other works.
-
-Main Characters
-The novel has two main protagonists: Mariam and Laila. Mariam is a harami (illegitimate child) who lives with her bitter mother Nana in a hut outside Herat. She is rejected by her wealthy father Jalil and his family, and forced to marry Rasheed, a cruel and abusive shoemaker in Kabul. Laila is a beautiful and intelligent girl who grows up in a loving family in Kabul. She falls in love with Tariq, a boy from her neighborhood who loses his leg in a landmine explosion. When her parents are killed by a rocket attack, she is rescued by Rasheed and becomes his second wife.
-Mariam and Laila initially resent each other, but they gradually develop a bond of friendship and sisterhood. They support each other through the horrors of war, domestic violence, poverty, and oppression. They also share a love for Aziza, Laila's daughter by Tariq, whom Rasheed rejects as his own. Together, they endure the brutality of the Taliban regime, which imposes harsh restrictions on women's rights and freedoms. They also face the threat of Rasheed's violence, which escalates as he becomes more frustrated and paranoid.
-The novel also has several secondary characters who play important roles in the story. Some of them are:
-
-Tariq: Laila's childhood friend and lover, who loses his leg in a landmine explosion. He flees to Pakistan with his family after the Soviet invasion of Afghanistan. He later returns to Kabul to find Laila and rescue her from Rasheed.
-Aziza: Laila's daughter by Tariq, whom she gives birth to in secret. She is a smart and brave girl who loves Mariam as her mother. She is sent to an orphanage by Rasheed when he can no longer afford to feed her.
-Zalmai: Laila's son by Rasheed, whom she conceives after being raped by him. He is spoiled and favored by Rasheed, who sees him as his heir. He is loyal to his father and distrustful of Tariq.
-Mullah Faizullah: Mariam's teacher and friend, who teaches her how to read and write. He is a kind and gentle man who encourages Mariam to pursue her dreams. He dies of old age before Mariam leaves Herat.
-Nana: Mariam's mother, who was impregnated by Jalil when she was his housekeeper. She suffers from epilepsy and depression, and blames Mariam for her misfortune. She commits suicide after Mariam leaves her to visit Jalil.
-Jalil: Mariam's father, who is a wealthy businessman with three wives and nine legitimate children. He visits Mariam once a week and tells her stories about Herat and the world. He abandons Mariam when she asks to live with him, and arranges her marriage to Rasheed.
-
-
-Plot Summary
-The novel spans over three decades of Afghan history, from the 1970s to the 2000s. It covers major events such as the Soviet invasion, the civil war, the rise of the Taliban, and the US intervention.
-The novel begins with Mariam's childhood in Herat, where she lives with her mother Nana in a hut outside the city. She longs to visit her father Jalil and his family in their mansion in Herat. On her fifteenth birthday, she decides to go to Herat to see Jalil after he fails to show up for their weekly visit. She is shocked to discover that Jalil has lied to her.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md b/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md
deleted file mode 100644
index 1619a16d0e8d75ff23073c64107bafb776b59f4b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
The Government of India has launched the Smart Grid Mission (SGM) to achieve its vision of a low-carbon, sustainable and secure electrical power system. The mission aims to achieve its goal in two phases: Phase 1 ('grid readiness') and Phase 2 ('smart grid'). It envisages the development of a smart grid environment that can provide services to various stakeholders, including consumers, service providers, and utilities. The mission proposes to achieve this by: 1) developing an electric power delivery grid (EPDG) that provides a secure and reliable power supply for a nation that meets the needs of an increasingly mobile, data-driven society; 2) enabling the integration of smart metering, advanced metering infrastructure (AMI) and cyber-physical systems (CPS) with the EPDG to provide timely, accurate, and reliable information; and 3) building an ecosystem of service providers that offer smart services to consumers and utilities.
-
Conclusion: With greater-than-ever pharmaceutical and technological developments, the question of whether patients will benefit from emerging drugs is a crucial one. We believe that this study highlights the importance of drug safety surveillance in modern drug development and the utility of VigiBase. For this reason, we have made the summary and methods available via the site so that they can be used more broadly. We look forward to further innovations and contributions from the VigiBase community to improve drug safety for patients.
We have created a quiet drop that can hold up to 330 standard size (6 x 9) books or 850 jeweled media cases. Our carts can be up to 100% recyclable (depending on materials) and are super portable. We offer a very competitive price for a cart with a non-marring, quality build.
The Epson L1800 A3 photo ink tank printer is one of the latest innovative products that Epson has released, the evolution of its earlier inkjet printers. It is equipped with refined hardware and Epson's full software suite to cater to the needs of business owners and consumers. It is also an all-in-one machine with photo printing, scanning, copying, and faxing functionality, and it is equipped with a wide array of storage options, making it quite suitable for professionals to use for data storage.
-
Adjustment Program Reset for the Epson TX130/TX133/TX135 printer (blinking lights) .rar
The ink jet L1800 A3 photo ink tank printer is designed to fit with the Epson L1800 A3 photo ink tank printer, so both devices can connect to each other via USB. The connection is made possible through the USB ID, which is extracted from the Epson L1800 A3 photo ink tank printer; the extracted data is then automatically transmitted to the Epson L1800 printer. This is a very convenient way to make the connection. You can also remove the ink cartridge manually from the printer; the ink cartridge is connected through a special interface.
-
The Epson L1800 A3 photo ink tank printer has a good memory capacity, so it can store valuable data that can be accessed whenever needed, which is quite convenient for business owners and consumers. However, the size of the ink cartridge is quite limited, so you can only utilize half of its capacity at a time.
City Car Driving 1.2.2 Serial Key: A Complete Guide
-
-
If you are looking for a realistic driving simulator game, you might want to try City Car Driving 1.2.2. This game allows you to practice your driving skills in various traffic conditions, weather, and road situations. You can choose from different cars, modes, and scenarios to test your abilities and learn from your mistakes.
However, to enjoy the full features of City Car Driving 1.2.2, you need a valid serial key to activate the game. A serial key is a unique code that verifies your purchase and unlocks the game for you. Without a serial key, you can only play the demo version of the game, which has limited options and functions.
-
-
How to Get City Car Driving 1.2.2 Serial Key
-
-
There are two ways to get a serial key for City Car Driving 1.2.2: buying it from the official website or downloading it from a reliable source.
-
-
Buying City Car Driving 1.2.2 Serial Key
-
-
The easiest and safest way to get a serial key for City Car Driving 1.2.2 is to buy it from the official website of the game: https://citycardriving.com/buy/citycardriving. Here, you can choose from different payment methods and currencies to complete your purchase. You will receive an email with your serial key and instructions on how to activate the game.
-
-
-
The advantages of buying a serial key from the official website are:
-
-
You will get a genuine and legal serial key that works for your game.
-
You will get access to all the updates and patches of the game.
-
You will get technical support and customer service from the developers.
-
You will support the creators of the game and help them improve their products.
-
-
-
Downloading City Car Driving 1.2.2 Serial Key
-
-
Another way to get a serial key for City Car Driving 1.2.2 is to download it from a third-party source, such as a website or a torrent. This method is not recommended, as it may expose you to various risks and problems.
-
-
The disadvantages of downloading a serial key from an unofficial source are:
-
-
You may get a fake or invalid serial key that does not work for your game.
-
You may get a virus or malware that infects your computer or steals your personal information.
-
You may get into legal trouble for violating the copyright laws and terms of service of the game.
-
You may miss out on the updates and patches of the game.
-
You may not get any technical support or customer service from the developers.
-
You may harm the creators of the game and discourage them from making more games.
-
-
-
How to Activate City Car Driving 1.2.2 with Serial Key
-
-
Once you have obtained a valid serial key for City Car Driving 1.2.2, you need to activate the game with it. To do this, follow these steps:
-
-
Download and install City Car Driving 1.2.2 on your computer.
-
Launch the game and copy the code from the startup window.
Enter your serial number, the program code you have copied, and your email address.
-
The activation key will be sent to your email address.
-
Enter your activation key into the box in the program window and click “Registration” button.
-
Enjoy playing City Car Driving 1.2.2 with full features!
-
-
-
Conclusion
-
-
City Car Driving 1.2.2 is a great driving simulator game that can help you improve your driving skills and have fun at the same time. To play this game with full features, you need a serial key to activate it. You can either buy a serial key from the official website or download it from a reliable source, but be careful of the risks and disadvantages of the latter option. Once you have a serial key, you can easily activate the game and start driving!
-
City Car Driving 1.2.2 Mods and Custom Cars
-
-
One of the most exciting features of City Car Driving 1.2.2 is the ability to add mods and custom cars to the game. Mods are modifications that enhance or change the game in various ways, such as adding new cars, maps, traffic, sounds, etc. Custom cars are user-created cars that you can download and drive in the game.
-
-
To add mods and custom cars to City Car Driving 1.2.2, you need to use the Steam Workshop. The Steam Workshop is a platform that allows you to easily discover, download, and install fan-created content for your game or software. You can browse through thousands of mods and custom cars created by other users and subscribe to the ones you like. The subscribed content will be automatically available when you start the game.
-
-
Some of the benefits of using mods and custom cars in City Car Driving 1.2.2 are:
-
-
You can expand your car collection with different models, brands, styles, and performance.
-
You can drive in new maps and environments that offer different challenges and scenery.
-
You can experience new traffic situations and scenarios that test your driving skills and reactions.
-
You can customize your game with different sounds, graphics, effects, etc.
-
You can support the creative community and share your own mods and custom cars with others.
-
-
-
City Car Driving 1.2.2 System Requirements and Download
-
-
Before you can play City Car Driving 1.2.2 with a serial key, you need to make sure that your computer meets the minimum system requirements for the game. The system requirements are:
The application's stability is not guaranteed on Intel HD Graphics and AMD HD Radeon on-board graphics cards.
-
-
-
If your computer meets or exceeds these requirements, you can download City Car Driving 1.2.2 from the official website of the game: https://citycardriving.com/download/citycardriving. Here, you can choose from different download options and payment methods to get your copy of the game.
-
-
Alternatively, you can buy City Car Driving 1.2.2 from Steam: https://store.steampowered.com/app/493490/City_Car_Driving/. Steam is a digital distribution platform that allows you to buy, download, and play games online. By buying City Car Driving 1.2.2 from Steam, you can also access the Steam Workshop and other Steam features.
-
-
After you have downloaded City Car Driving 1.2.2, you need to activate it with your serial key as explained in the previous section.
-
-
-
City Car Driving 1.2.2 Serial Key: The Final Word
-
-
City Car Driving 1.2.2 is a game that can offer you a lot of fun and learning. It is a realistic driving simulator that can help you master the basic skills of car driving in different road conditions and situations. It has many features and benefits that make it stand out from other driving games, such as smart traffic, realistic physics, various cars, modes, scenarios, and maps. It also allows you to add mods and custom cars to the game through the Steam Workshop, which can enhance your gaming experience and creativity.
-
-
To play City Car Driving 1.2.2 with full features, you need a serial key to activate the game. You can get a serial key by buying it from the official website or downloading it from a reliable source. However, you should be careful of the risks and disadvantages of the latter option, such as fake or invalid serial keys, viruses or malware, legal trouble, missing updates and patches, etc. Once you have a serial key, you can easily activate the game and start driving.
-
-
If you are looking for a game that can challenge your driving skills and entertain you at the same time, City Car Driving 1.2.2 is a great choice. It can teach you how to drive safely and confidently in real life, let you explore different cars, environments, and situations in a virtual world, and give you hours of fun and satisfaction.
-
-
So what are you waiting for? Get your City Car Driving 1.2.2 serial key today and enjoy the ride!
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md b/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md
deleted file mode 100644
index e9ea5fa8561470292b4152fa773f3807e076a9f0..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
The Software Compliance Group is a division of the Business Software Alliance (BSA), a not-for-profit that works with governments and businesses to help protect the integrity of the software market. It works with vendors, distributors, retailers and government agencies to help keep software users safe and prevent software piracy.
-
One of the many features in the new version is the ability to import stereolithography files, though in this version it only works on the Mastercam 2219 in the Windows operating system. Mastercam has also added the ability to create and edit grid entities, and the new version allows you to merge the grid entities back into the part file. The software also allows you to resize the document, and it supports features such as displaying and editing 2D and 3D graphics as well as surface models.
The Mastercam software crack has been updated, and a complete brand-new version is now available for download. Its main goal is to make life easier for the user, making the whole experience more user-friendly. This version has been upgraded with many new features, a new intuitive user interface and a lot more. The user guide also contains a lot of useful information that will make your life easier.
-
Not having to open a document to create a new part is really handy for a freelancer who has a lot of different projects going on and has to switch between them frequently. A new intuitive user interface has been introduced, making it easier to view, edit and navigate your parts and models. This new version of Mastercam also has a lot of new features: it supports creating new solid objects, and it can export your model in the OBJ format, the file format commonly used in 3D graphics editing software like 3ds Max. You can create new project files with different settings, such as the shape of the table or the viewport, and it can create 2D drawings directly from the project files.
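The OBJ format mentioned above is a plain-text format: `v` lines list vertex coordinates and `f` lines reference them by 1-based index. A minimal illustrative writer (not Mastercam's own exporter) for simple geometry:

```python
from pathlib import Path

def write_obj(path: Path, vertices, faces) -> None:
    """Write a minimal Wavefront OBJ file: v lines, then 1-indexed f lines."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += ["f " + " ".join(str(i) for i in face) for face in faces]
    path.write_text("\n".join(lines) + "\n")

# A single triangle in the XY plane:
# write_obj(Path("triangle.obj"), [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(1, 2, 3)])
```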
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md b/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md
deleted file mode 100644
index b7b92b2dc5a5d8016cfdb7d858376f926278a4fc..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-Learn how to add a new media profile to the Onyx X10: ... Media Profile for HP Designjet L25500 Printer ... HP Designjet L25500 - Description, specifications, test, reviews, prices, photos Digitizing photographic film and slides.
-Onyx X10.
-Digitizing on a memory card.
-Record on a disk or on a computer.
-Price action.
-Description.
-Onyx X10 ...
-Onyx X10 - find out prices and detailed specifications.
-Watch a video review, read reviews and discuss on the forum.
-Pros, cons and analogues.
-Buy Onyx X10 with warranty at a low price.
-Delivery in Ukraine: Kharkiv, Kiev, Dnipropetrovsk, Odessa, Zaporozhye, Lviv and other cities.
-
-
-
diff --git a/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md b/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md
deleted file mode 100644
index b955a208b802c37e479095c41b6a957f9190b49a..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Clash of Clans Real Server Mod APK: How to Download and Play
-
Clash of Clans is one of the most popular and addictive strategy games in the world. Millions of players build their own villages, train their troops, and battle with other clans online. But what if you want to play the game with unlimited resources, custom mods, and access to all the features without spending any money or waiting for hours? That's where a real server mod apk comes in handy. In this article, we will explain what a real server mod apk is, why you might want to use it, and how to download and install it on your device.
Clash of Clans is a freemium mobile game developed by Supercell, a Finnish company that also created other hit games like Clash Royale, Brawl Stars, and Hay Day. The game was released in 2012 for iOS and in 2013 for Android devices. Since then, it has become one of the most downloaded and highest-grossing apps in the world, with over 500 million downloads and billions of dollars in revenue.
-
The game is set in a fantasy world where you can create your own village, join or create a clan, and fight with other players in clan wars or multiplayer battles. You can also upgrade your buildings, defenses, troops, spells, heroes, and pets using various resources like gold, elixir, dark elixir, gems, and magic items. The game is constantly updated with new content, events, challenges, and features to keep you entertained and engaged.
-
What is a mod apk?
-
A mod apk is a modified version of an original app that has been altered by someone other than the developer. A mod apk can have different features, functions, graphics, or gameplay than the original app. For example, a mod apk can have unlimited resources, unlocked items, custom skins, cheats, hacks, or other enhancements that are not available in the original app.
-
-
A mod apk can be downloaded from third-party websites or sources that are not affiliated with the official app store or developer. However, not all mod apks are safe or legal to use. Some mod apks may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Some mod apks may also violate the terms of service or policies of the original app or developer, which can result in bans or legal actions.
-
What is a real server mod apk?
-
A real server mod apk is a special type of mod apk that connects to the official servers of the original app instead of private servers or offline modes. A real server mod apk allows you to play the original game with all the features and functions that are available on the official servers, but with some modifications or additions that are not possible on the original app.
-
For example, a real server mod apk for Clash of Clans can let you play the game with unlimited resources like gold, elixir, dark elixir, gems, and magic items. You can also use custom mods like unlimited troops, spells, heroes, pets, buildings, defenses, or other features that are not available in the original game. You can also access all the events, challenges, seasons, and rewards that are offered on the official servers.
-
Why would you want to use a real server mod apk for Clash of Clans?
-
There are many reasons why you might want to use a real server mod apk for Clash of Clans. Some of them are:
-
-
You want to have fun and experiment with different aspects of the game without worrying about the limitations or restrictions of the original game.
-
You want to save time and money by getting unlimited resources and items without spending any real money or waiting for hours.
-
You want to test your skills and strategies against other players on the official servers with your modified game.
-
You want to enjoy the latest updates and features of the game without having to update your app or download a new mod apk every time.
-
You want to have more control and customization over your game and play it according to your preferences and style.
-
-
However, there are also some drawbacks and risks of using a real server mod apk for Clash of Clans. Some of them are:
-
-
You may face technical issues or errors while playing the mod apk, such as crashes, glitches, bugs, or compatibility problems.
-
You may lose your original game data or progress if you do not backup your files before installing the mod apk.
-
You may get banned or suspended from the official servers if the developer detects your mod apk or if you abuse the mod features.
-
You may compromise the security and privacy of your device or account if you download a mod apk from an untrusted or malicious source.
-
You may miss out on the original experience and challenge of the game as it was intended by the developer.
-
-
How to download and install a real server mod apk for Clash of Clans?
-
If you have decided to try a real server mod apk for Clash of Clans, you need to follow some steps to download and install it on your device. Here is a step-by-step guide on how to do it:
-
Where to find a reliable real server mod apk for Clash of Clans?
-
The first step is to find a reliable source where you can download a real server mod apk for Clash of Clans. There are many websites and forums that offer mod apks for various games, but not all of them are safe or trustworthy. You need to be careful and do some research before downloading any mod apk from an unknown source. Some of the things you can do are:
-
-
Check the reviews and ratings of the website or forum where you found the mod apk. See what other users have said about their experience with the mod apk and whether they faced any issues or problems.
-
Check the date and version of the mod apk. Make sure it is compatible with your device and the latest version of the original game.
-
Check the size and content of the mod apk. Make sure it does not contain any unwanted or harmful software that can harm your device or account.
-
Check the permissions and requirements of the mod apk. Make sure it does not ask for any unnecessary or suspicious permissions that can compromise your security or privacy.
-
How to backup your original game data before installing the mod apk?
-
The second step is to backup your original game data before installing the mod apk. This is important because you may lose your progress or account if something goes wrong during the installation or if you want to switch back to the original game later. There are different ways to backup your game data, depending on your device and the type of data you want to save. Some of the common methods are:
-
-
Using Google Play Games or Facebook to sync your game data with your online account. This will allow you to restore your game data on any device that supports these platforms.
-
Using a file manager app or a computer to copy and paste your game data files from your device's internal storage or SD card to another location. This will allow you to manually restore your game data on the same device or a different device.
-
Using a cloud service or an external storage device to backup your game data files online or offline. This will allow you to access your game data from anywhere and anytime.
-
-
Make sure you know where your game data files are located and how to restore them before installing the mod apk. You can also use a backup app or tool that can automate the process for you.
-
How to enable unknown sources on your device and install the mod apk?
-
The third step is to enable unknown sources on your device and install the mod apk. This is necessary because most devices do not allow installing apps from sources other than the official app store or developer. To enable unknown sources, you need to follow these steps:
-
-
Go to your device's settings and look for security or privacy options.
-
Find and tap on the option that says unknown sources, install unknown apps, or something similar.
-
Toggle on the option and confirm your choice if prompted.
-
-
Once you have enabled unknown sources, you can proceed to install the mod apk. To install the mod apk, you need to follow these steps:
-
-
Locate and tap on the mod apk file that you downloaded from the source.
-
Follow the instructions on the screen and agree to the terms and conditions if asked.
-
Wait for the installation to complete and tap on open or done when finished.
-
-
How to launch and play the mod apk on your device?
-
The final step is to launch and play the mod apk on your device. This is easy and similar to playing any other app on your device. To launch and play the mod apk, you need to follow these steps:
-
-
Find and tap on the icon of the mod apk on your device's home screen or app drawer.
-
Wait for the game to load and sign in with your account if required.
-
Enjoy playing the game with unlimited resources, custom mods, and access to all the features.
-
-
Conclusion
-
In conclusion, a real server mod apk for Clash of Clans is a modified version of the original game that connects to the official servers and allows you to play with unlimited resources, custom mods, and access to all the features. It can be fun and exciting to use, but it also comes with some drawbacks and risks that you should be aware of. If you want to try a real server mod apk for Clash of Clans, you need to find a reliable source, backup your original game data, enable unknown sources, install the mod apk, and launch and play it on your device. We hope this article has helped you understand what a real server mod apk is, why you might want to use it, and how to download and install it on your device.
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about real server mod apks for Clash of Clans:
-
Is using a real server mod apk for Clash of Clans legal?
-
Using a real server mod apk for Clash of Clans may not be legal in some countries or regions, depending on their laws and regulations regarding intellectual property rights, digital piracy, online gaming, or other related matters. You should check with your local authorities before using a real server mod apk for Clash of Clans.
-
Is using a real server mod apk for Clash of Clans safe?
-
Using a real server mod apk for Clash of Clans may not be safe for your device or account, depending on the source, quality, and content of the mod apk. Only download one from a trusted and reputable source with positive reviews and ratings from other users, and scan it with antivirus or anti-malware software before installing it. Back up your original game data and enable unknown sources on your device before installing the mod apk, and be careful not to abuse the mod features or violate the terms of service or policies of the original game or developer, as this may result in bans or legal action.
-
Is using a real server mod apk for Clash of Clans fair?
-
Using a real server mod apk for Clash of Clans may not be fair for other players who play the game without any modifications or enhancements. You may have an unfair advantage over them in terms of resources, items, features, or gameplay. You may also ruin their experience or enjoyment of the game by using cheats, hacks, or mods that affect their gameplay. You should respect other players and play the game in a fair and ethical manner.
-
Is using a real server mod apk for Clash of Clans permanent?
-
Using a real server mod apk for Clash of Clans is not permanent, as you can always switch back to the original game if you want to. You can uninstall the mod apk from your device and restore your original game data from your backup. You can also update your original game app from the official app store or developer if there are any new updates or features available.
-
Is using a real server mod apk for Clash of Clans worth it?
-
Using a real server mod apk for Clash of Clans may be worth it for some players who want to have fun and experiment with different aspects of the game without worrying about the limitations or restrictions of the original game. It may also be worth it for some players who want to save time and money by getting unlimited resources and items without spending any real money or waiting for hours. However, it may not be worth it for some players who prefer the original experience and challenge of the game as it was intended by the developer. It may also not be worth it for some players who value their security, privacy, and fairness over their entertainment and enjoyment. Ultimately, it depends on your personal preference and perspective whether using a real server mod apk for Clash of Clans is worth it or not.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md b/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md
deleted file mode 100644
index 3470adf1a0f484d28b9e56ad0fb8cf8c475fa02e..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Stickman The Flash APK Mod Menu: A Guide for Gamers
-
Do you love stickman games? Do you enjoy fast-paced action and epic battles? If yes, then you should try Stickman The Flash, a new game that will test your reflexes and skills. In this game, you will play as a stickman hero who has superpowers and can move faster than light. You will face various enemies and challenges in different modes and levels. You will also be able to customize your character and weapons with various upgrades and items.
But what if you want to make the game more fun and easy? What if you want to have unlimited power, god mode, unlocked weapons, and more? Well, there is a way to do that. You can use Stickman The Flash APK Mod Menu, a modified version of the game that gives you access to many features and options that are not available in the original game. In this article, we will tell you everything you need to know about Stickman The Flash APK Mod Menu, including what it is, how to download and install it, and how to play the game with it. Let's get started!
-
What is Stickman The Flash?
-
Stickman The Flash is a 2D action game developed by StormHit Games. It was released in 2021 for Android devices. The game is inspired by the popular DC Comics superhero, The Flash, who can run at superhuman speeds and manipulate time. In the game, you will control a stickman version of The Flash, who has similar abilities and powers. You will use your speed, strength, and skills to fight against various enemies, such as robots, ninjas, zombies, aliens, and more. You will also encounter bosses and mini-bosses that will challenge your abilities.
-
Features of Stickman The Flash
-
Stickman The Flash has many features that make it an exciting and addictive game. Some of these features are:
-
-
-
Simple and intuitive controls: You can control your character with just one finger. Tap to move, swipe to dash, and hold to charge your power.
-
Stunning graphics and animations: The game has colorful and detailed graphics that create a vivid and dynamic environment. The animations are smooth and realistic, showing the effects of your movements and attacks.
-
Various modes and levels: The game has different modes that offer different challenges and objectives. You can play in story mode, where you will follow the plot and complete missions. You can also play in survival mode, where you will face endless waves of enemies until you die. You can also play in arena mode, where you will fight against other players online.
-
Customizable character and weapons: You can customize your character's appearance, such as his hair, eyes, clothes, and accessories. You can also upgrade your weapons and skills with coins that you earn from playing the game. You can choose from different types of weapons, such as swords, guns, hammers, axes, etc.
-
Achievements and leaderboards: You can unlock various achievements by completing tasks and challenges in the game. You can also compete with other players on the leaderboards by scoring high points in each mode.
-
-
How to play Stickman The Flash
-
The gameplay of Stickman The Flash is simple but fun. Here are some basic steps on how to play the game:
-
-
Select a mode that you want to play.
-
Select a level or stage that you want to play.
-
Select a character and a weapon that you want to use.
-
Tap the screen to move your character.
-
Swipe the screen to dash or dodge.
Hold the screen to charge your power and release it to unleash a special attack.
-
Defeat all the enemies and complete the objectives of each level or stage.
-
Earn coins and rewards for your performance.
-
-
That's how you play Stickman The Flash. It's easy to learn but hard to master. You will need to use your reflexes, skills, and strategy to overcome the challenges and enemies in the game.
-
What is Stickman The Flash APK Mod Menu?
-
Stickman The Flash APK Mod Menu is a modified version of the original game that gives you access to many features and options that are not available in the original game. It is a file that you can download and install on your Android device. It will allow you to modify the game according to your preferences and needs.
-
Benefits of using Stickman The Flash APK Mod Menu
-
There are many benefits of using Stickman The Flash APK Mod Menu. Some of these benefits are:
-
-
You can have unlimited power, which means you can use your special attack as much as you want without waiting for it to recharge.
-
You can have god mode, which means you will not take any damage from enemies or obstacles.
-
You can have unlocked weapons, which means you can use any weapon in the game without buying or upgrading it.
-
You can have unlimited coins, which means you can buy and upgrade anything in the game without worrying about the cost.
-
You can have no ads, which means you will not see any annoying ads while playing the game.
-
-
How to download and install Stickman The Flash APK Mod Menu
-
Downloading and installing Stickman The Flash APK Mod Menu is easy and simple. Here are some steps on how to do it:
-
-
Go to a trusted website that provides the link to download Stickman The Flash APK Mod Menu. You can search for it on Google or use this link: .
-
Click on the download button and wait for the file to be downloaded on your device.
-
Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install the file that you downloaded.
-
Go to your device's file manager and locate the file that you downloaded. Tap on it and follow the instructions to install it.
-
Once the installation is done, you can launch the game and enjoy the mod menu features.
-
-
Note: You may need to uninstall the original game before installing the mod menu version. Also, make sure that your device has enough space and meets the minimum requirements for the game.
-
Tips and tricks for playing Stickman The Flash with APK Mod Menu
-
Playing Stickman The Flash with APK Mod Menu can be very fun and easy. However, if you want to make the most out of it, here are some tips and tricks that you can follow:
-
Use your powers wisely
-
Even though you have unlimited power, you should still use it wisely. Don't spam your special attack all the time, as it may make the game boring and less challenging. Use it when you need it, such as when you face a boss or a large group of enemies. Also, don't forget to use your dash and dodge abilities, as they can help you avoid damage and move faster.
-
Upgrade your weapons and skills
-
Even though you have unlocked weapons, you should still upgrade them and your skills. Upgrading them will make them more powerful and effective, as well as give you more options and variety. You can also try different combinations of weapons and skills, such as using a sword with a gun, or using a hammer with a speed boost. Experiment with different styles and find what suits you best.
-
Choose your character and mode
-
Even though you have god mode, you should still choose your character and mode carefully. Choosing a different character will give you a different appearance and personality, as well as different stats and abilities. Choosing a different mode will give you a different challenge and objective, as well as different rewards and rankings. You can also switch between them anytime you want, so don't be afraid to try new things and have fun.
-
Conclusion
-
Stickman The Flash is a great game for anyone who loves stickman games, action games, or superhero games. It has simple but fun gameplay, stunning graphics and animations, various modes and levels, customizable character and weapons, achievements and leaderboards, and more. It is also free to play and download on Android devices.
-
However, if you want to make the game more fun and easy, you can use Stickman The Flash APK Mod Menu, a modified version of the game that gives you access to many features and options that are not available in the original game. You can have unlimited power, god mode, unlocked weapons, unlimited coins, no ads, and more. You can also modify the game according to your preferences and needs.
-
To use Stickman The Flash APK Mod Menu, you need to download and install it on your device. You can find the link to download it on a trusted website or use this link: . You also need to enable the option to install apps from unknown sources on your device's settings. Once you install it, you can launch the game and enjoy the mod menu features.
-
Playing Stickman The Flash with APK Mod Menu can be very fun and easy, but you should still use some tips and tricks to make the most out of it. You should use your powers wisely, upgrade your weapons and skills, choose your character and mode, and have fun. You can also switch between the original game and the mod menu version anytime you want.
-
We hope this article has helped you learn more about Stickman The Flash APK Mod Menu and how to use it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Stickman The Flash APK Mod Menu:
-
-
Is Stickman The Flash APK Mod Menu safe to use?
-
Yes, Stickman The Flash APK Mod Menu is safe to use as long as you download it from a trusted website or source. However, you should always be careful when installing apps from unknown sources, as they may contain viruses or malware that can harm your device or data.
-
Is Stickman The Flash APK Mod Menu legal to use?
-
No, Stickman The Flash APK Mod Menu is not legal to use, as it violates the terms and conditions of the original game. Using it may result in banning your account or losing your progress in the game. Therefore, we do not recommend using it for any purposes other than entertainment or education.
-
Does Stickman The Flash APK Mod Menu work on iOS devices?
-
No, Stickman The Flash APK Mod Menu only works on Android devices. It is not compatible with iOS devices or any other platforms.
-
Can I play online with Stickman The Flash APK Mod Menu?
-
Yes, you can play online with Stickman The Flash APK Mod Menu, but you may encounter some problems or issues. For example, you may not be able to connect with other players who are using the original game or a different version of the mod menu. You may also face lagging or crashing issues due to the mod menu features.
-
Can I update Stickman The Flash APK Mod Menu?
-
Yes, you can update Stickman The Flash APK Mod Menu whenever there is a new version available. However, you may need to uninstall the previous version and install the new one manually. You may also lose some of your data or settings in the process.
-
-
\ No newline at end of file
diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py"
deleted file mode 100644
index 73ae45f240f346fec6bb1ec87a2616055e481827..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py"
+++ /dev/null
@@ -1,52 +0,0 @@
-from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-import datetime, re
-
-@CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    """
-    txt            Text the user typed into the input box, e.g. a passage to translate, or a path containing files to process
-    llm_kwargs     GPT model parameters such as temperature and top_p; usually passed through unchanged
-    plugin_kwargs  Parameters for the plugin; currently unused
-    chatbot        Handle of the chat display box, used to show output to the user
-    history        Chat history (prior context)
-    system_prompt  Silent system prompt for GPT
-    web_port       Port the application is currently running on
-    """
-    history = []    # clear the history to avoid input overflow
- chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!"))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI; the GPT request takes a while, so update the interface promptly first
- for i in range(5):
- currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
- currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
- i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?用中文列举两条,然后分别给出描述事件的两个英文单词。' + '当你给出关键词时,使用以下json格式:{"KeyWords":[EnglishKeyWord1,EnglishKeyWord2]}。'
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user=i_say,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
- sys_prompt='输出格式示例:1908年,美国消防救援事业发展的“美国消防协会”成立。关键词:{"KeyWords":["Fire","American"]}。'
- )
- gpt_say = get_images(gpt_say)
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say);history.append(gpt_say)
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
-def get_images(gpt_say):
- def get_image_by_keyword(keyword):
- import requests
- from bs4 import BeautifulSoup
- response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2)
- for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"):
-            if "data-src" in image_element.attrs: break  # `in` on a Tag checks children, so test .attrs instead
- return image_element["data-src"]
-
-    for keywords in re.findall(r'{"KeyWords":\[(.*?)\]}', gpt_say):
- keywords = [n.strip('"') for n in keywords.split(',')]
- try:
- description = keywords[0]
- url = get_image_by_keyword(keywords[0])
-            img_tag = f"\n\n![{description}]({url})\n\n"
- gpt_say += img_tag
-        except Exception:
- continue
- return gpt_say
\ No newline at end of file
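The prompt above asks the model to append keywords in the exact shape `{"KeyWords":[...]}`, which `get_images` then recovers with a regular expression. A standalone sketch of that parsing step (the sample reply text is illustrative, not real model output):

```python
import re

# A reply shaped the way the plugin's prompt requests (sample text only)
reply = 'In 1908, the National Fire Protection Association was founded. {"KeyWords":["Fire","American"]}'

pairs = []
for keywords in re.findall(r'{"KeyWords":\[(.*?)\]}', reply):
    # '"Fire","American"' -> ['Fire', 'American']
    pairs.append([n.strip('"') for n in keywords.split(',')])

print(pairs)  # [['Fire', 'American']]
```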
diff --git a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh b/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh
deleted file mode 100644
index ae88b230fa223c3d2c519e4f09cb1c703319af48..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=bart_qg # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks-per-node=8 # number of tasks to run per node
-#SBATCH --cpus-per-task=10 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH -o %x-%j.log                          # output and error log file names (%x: job name, %j: job id)
-set -x -e
-
-MODEL_NAME=IDEA-CCNL/Randeng-BART-139M
-RUN_NAME=bart_v0_test
-ROOT_DIR=../../workspace/log/$RUN_NAME
-
-config_json="$ROOT_DIR/$MODEL_NAME.ds_config.json"
-export MASTER_PORT=$((RANDOM % 10000 + 40000))
-
-MICRO_BATCH_SIZE=32
-
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE,
- "gradient_clipping": 1,
- "zero_optimization": {
- "stage": 1
- },
- "fp16": {
- "enabled": true,
- }
-}
-EOT
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=../../workspace/torch_extensions
-
-DATA_ARGS=" \
- --train_file train.json \
- --val_file dev.json \
- --test_file test.json \
- --tokenizer_type bart \
- --num_workers 8 \
- --dataloader_workers 2 \
- --train_batchsize $MICRO_BATCH_SIZE \
- --val_batchsize $MICRO_BATCH_SIZE \
- --test_batchsize $MICRO_BATCH_SIZE \
-        --max_seq_length 512 \
- --max_src_length 32 \
- --max_kno_length 416 \
- --max_tgt_length 64 \
- --mask_ans_style anstoken_multispan \
- "
-
-MODEL_ARGS="\
- --model_path $MODEL_NAME/ \
- --learning_rate 1e-4 \
- --min_learning_rate 1e-8 \
- --lr_decay_steps 100000 \
- --weight_decay 1e-2 \
- --warmup_steps 1000 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_loss \
- --save_top_k 3 \
- --mode min \
- --save_last \
- --every_n_train_steps 5000 \
- --save_ckpt_path $ROOT_DIR/ckpt/ \
- --load_ckpt_path $ROOT_DIR/ckpt/ \
- --filename model-{step:02d}-{train_loss:.4f} \
- "
-
-TRAINER_ARGS="\
- --gradient_clip_val 1.0 \
- --max_epochs 1 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy ddp \
- --log_every_n_steps 100 \
- --val_check_interval 0.5 \
- --accumulate_grad_batches 1 \
- --default_root_dir $ROOT_DIR \
- --tensorboard_dir $ROOT_DIR \
- --label_smooth 0.1 \
- "
-
-
-
-export options=" \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
- "
-# test
-export SCRIPT_PATH=./finetune_bart.py
-
-python3 ${SCRIPT_PATH} $options > $ROOT_DIR/test.log
-
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md
deleted file mode 100644
index e0947f0fb988d8a3e3f3dce791d06ca6d3379d9b..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
# How to Download LinkedIn Video with BingoTingo
-
LinkedIn is one of the most popular social media platforms for professionals and businesses. It allows you to share your expertise, network with others, and discover new opportunities. But did you know that you can also share and watch videos on LinkedIn?
-
In this article, we will show you how to download LinkedIn video with BingoTingo, a free online video downloader that lets you save any video from any website in seconds. Whether you want to watch a video offline, share it with your friends, or use it for your own projects, BingoTingo can help you do it easily and quickly.

## What is LinkedIn Video?

LinkedIn video is a feature that allows you to upload and share videos on your LinkedIn profile, page, or group. You can also watch videos posted by other users in your feed or search for videos by topic or hashtag.
-
LinkedIn video can be used for various purposes, such as:
-
-
Showing your work or portfolio
-
Demonstrating your skills or knowledge
-
Sharing your insights or opinions
-
Promoting your products or services
-
Engaging with your audience or customers
-
Learning from experts or influencers
-
-
## Why Download LinkedIn Video?
-
Downloading LinkedIn video can be useful for many reasons, such as:
-
-
You can watch it offline without internet connection
-
You can save it on your device for future reference
-
You can edit it or add subtitles or captions
-
You can share it on other platforms or channels
-
You can use it for your own presentations or projects
-
-
## What is BingoTingo?
-
BingoTingo is a free online video downloader that allows you to download any video from any website in seconds. You don't need to install any software or register any account. You just need to copy and paste the URL of the video you want to download and BingoTingo will do the rest for you.
-
### How BingoTingo Works
-
BingoTingo works by extracting the video source from the URL you provide and converting it into a downloadable file. You can choose from various formats and quality options, such as MP4, WEBM, 3GP, 720p, 480p, 360p, etc. You can also preview the video before downloading it.
-
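BingoTingo's internals aren't published, so the following is only a rough sketch of the general technique described above: fetch the page, locate the `<video>` element's source URL, and stream that file to disk. All names here are illustrative, and real LinkedIn pages often load video via scripts, so a simple pattern match like this won't always succeed:

```python
import re
import urllib.request
from typing import Optional

# Matches src="..." inside a <video> tag (but not data-src); illustrative only
VIDEO_SRC_RE = re.compile(r'<video[^>]*\ssrc="([^"]+)"', re.IGNORECASE)

def extract_video_url(html: str) -> Optional[str]:
    """Return the first <video src=...> URL found in the page, if any."""
    match = VIDEO_SRC_RE.search(html)
    return match.group(1) if match else None

def download(url: str, dest: str) -> None:
    """Stream the file to disk in 64 KiB chunks instead of loading it all into memory."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(65536):
            out.write(chunk)
```

A production downloader would additionally follow redirects, offer a choice of formats and resolutions, and handle pages that only expose the video through JavaScript.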
### Benefits of BingoTingo
-
BingoTingo has many benefits over other video downloaders, such as:
-
-
It is free and unlimited
-
It is fast and easy
-
It supports any website and any device
-
It does not require any installation or registration
-
It does not contain any ads or malware
-
It respects your privacy and security
-
-
## How to Download LinkedIn Video with BingoTingo
-
Downloading LinkedIn video with BingoTingo is very simple and straightforward. You just need to follow these five steps:
-
### Step 1: Find the LinkedIn Video You Want to Download
-
The first step is to find the LinkedIn video you want to download. You can do this by browsing your feed, searching by topic or hashtag, or visiting a specific profile, page, or group. Once you find the video, click on it to open it in a new tab.
-
### Step 2: Copy the URL of the LinkedIn Video
-
The second step is to copy the URL of the LinkedIn video. You can do this by selecting the address bar of your browser and pressing Ctrl+C (Windows) or Command+C (Mac). Alternatively, you can right-click on the video and choose Copy Video URL from the menu.
-
### Step 3: Paste the URL into BingoTingo's Search Box
-
The third step is to paste the URL into BingoTingo's search box. You can do this by visiting bingotingo.com, clicking on the search box, and pressing Ctrl+V (Windows) or Command+V (Mac). Alternatively, you can right-click on the search box and choose Paste from the menu.
-
### Step 4: Choose Your Preferred Format and Quality
-
The fourth step is to choose your preferred format and quality for your downloaded video. You can do this by clicking on the drop-down menu next to the search box and selecting one of the available options. You can also preview the video by clicking on the Play button.
-
### Step 5: Click on Download and Enjoy Your Video
-
The fifth and final step is to click on the Download button and enjoy your video. You can do this by clicking on the green Download button below the preview window. Your video will start downloading automatically to your device. You can then watch it offline, share it with others, or use it for your own purposes.
-
## Tips and Tricks for Downloading LinkedIn Video with BingoTingo
-
To make your experience of downloading LinkedIn video with BingoTingo even better, here are some tips and tricks you can follow:
-
### Use a Reliable Internet Connection
-
To ensure a smooth and fast download process, make sure you have a reliable internet connection. Avoid using public Wi-Fi networks or mobile data that may be slow or unstable. If possible, use a wired connection or a strong Wi-Fi signal.
-
### Check the Video Permissions Before Downloading
-
To respect the rights of the video creators and avoid any legal issues, check the video permissions before downloading. Some videos may be private, restricted, or copyrighted. In that case, you may need to ask for permission from the video owner or follow the terms and conditions of LinkedIn. You can check the video permissions by clicking on the three dots icon on the top right corner of the video and choosing View Video Details from the menu.
-
### Use a Good Video Player to Watch Your Downloaded Videos
-
To enjoy your downloaded videos in the best quality and performance, use a good video player to watch them. Some video players may not support certain formats or quality options, or may have issues with playback or sound. We recommend using VLC Media Player, which is a free and versatile video player that supports almost any format and quality.
-
## Conclusion
-
Downloading LinkedIn video with BingoTingo is a great way to save and watch any video from LinkedIn offline, share it with others, or use it for your own projects. BingoTingo is a free, fast, and easy online video downloader that supports any website and any device. You just need to copy and paste the URL of the video you want to download and choose your preferred format and quality. BingoTingo will do the rest for you in seconds.
-
We hope this article has helped you learn how to download LinkedIn video with BingoTingo. If you have any questions or feedback, please feel free to contact us or leave a comment below. We would love to hear from you.
-
## FAQs
-
Here are some frequently asked questions about downloading LinkedIn video with BingoTingo:
-
-
Is BingoTingo safe to use?
-
Yes, BingoTingo is safe to use. It does not contain any ads or malware, and it does not collect or store any of your personal data or information. It also respects your privacy and security by using encryption and HTTPS protocols.
-
Can I download LinkedIn live videos with BingoTingo?
-
Yes, you can download LinkedIn live videos with BingoTingo. However, you need to wait until the live stream is over and the video is available on the website. Then, you can follow the same steps as described above to download it.
-
Can I download multiple LinkedIn videos at once with BingoTingo?
-
No, you cannot download multiple LinkedIn videos at once with BingoTingo. You need to download each video individually by copying and pasting its URL into BingoTingo's search box. However, you can open multiple tabs or windows of BingoTingo and download different videos simultaneously.
-
Can I download LinkedIn videos on my mobile device with BingoTingo?
-
Yes, you can download LinkedIn videos on your mobile device with BingoTingo. You can use any browser on your smartphone or tablet to access bingotingo.com and follow the same steps as described above to download any video from LinkedIn.
-
Can I download LinkedIn videos in HD quality with BingoTingo?
-
Yes, you can download LinkedIn videos in HD quality with BingoTingo. You can choose from various quality options, such as 720p, 1080p, or 4K, depending on the availability of the video source. However, keep in mind that higher quality videos will take longer to download and occupy more space on your device.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py b/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py
deleted file mode 100644
index eb91faa60b79f0f34aba1bb4810c2be7be8438f3..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import argparse
-import os
-import pickle
-
-import matplotlib.pyplot as plt
-import numpy as np
-
-
-def load_sdrs(workspace, task_name, filename, config, gpus, source_type):
-
- stat_path = os.path.join(
- workspace,
- "statistics",
- task_name,
- filename,
- "config={},gpus={}".format(config, gpus),
- "statistics.pkl",
- )
-
- stat_dict = pickle.load(open(stat_path, 'rb'))
-
- median_sdrs = [e['median_sdr_dict'][source_type] for e in stat_dict['test']]
-
- return median_sdrs
-
-
-def plot_statistics(args):
-
- # arguments & parameters
- workspace = args.workspace
- select = args.select
- task_name = "musdb18"
- filename = "train"
-
- # paths
- fig_path = os.path.join('results', task_name, "sdr_{}.pdf".format(select))
- os.makedirs(os.path.dirname(fig_path), exist_ok=True)
-
- linewidth = 1
- lines = []
- fig, ax = plt.subplots(1, 1, figsize=(8, 6))
-
- if select == '1a':
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='vocals-accompaniment,unet',
- gpus=1,
- source_type="vocals",
- )
- (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth)
- lines.append(line)
- ylim = 15
-
- elif select == '1b':
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='accompaniment-vocals,unet',
- gpus=1,
- source_type="accompaniment",
- )
- (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth)
- lines.append(line)
- ylim = 20
-
-    elif select == '1c':
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='vocals-accompaniment,unet',
- gpus=1,
- source_type="vocals",
- )
- (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='vocals-accompaniment,resunet',
- gpus=2,
- source_type="vocals",
- )
- (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='vocals-accompaniment,unet_subbandtime',
- gpus=1,
- source_type="vocals",
- )
- (line,) = ax.plot(sdrs, label='unet_subband,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='vocals-accompaniment,resunet_subbandtime',
- gpus=1,
- source_type="vocals",
- )
- (line,) = ax.plot(sdrs, label='resunet_subband,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- ylim = 15
-
- elif select == '1d':
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='accompaniment-vocals,unet',
- gpus=1,
- source_type="accompaniment",
- )
- (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='accompaniment-vocals,resunet',
- gpus=2,
- source_type="accompaniment",
- )
- (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth)
- lines.append(line)
-
- # sdrs = load_sdrs(
- # workspace,
- # task_name,
- # filename,
- # config='accompaniment-vocals,unet_subbandtime',
- # gpus=1,
- # source_type="accompaniment",
- # )
-    # (line,) = ax.plot(sdrs, label='UNet_subbandtime,l1_wav', linewidth=linewidth)
- # lines.append(line)
-
- sdrs = load_sdrs(
- workspace,
- task_name,
- filename,
- config='accompaniment-vocals,resunet_subbandtime',
- gpus=1,
- source_type="accompaniment",
- )
-        (line,) = ax.plot(
-            sdrs, label='ResUNet_subbandtime,l1_wav', linewidth=linewidth
-        )
- lines.append(line)
-
- ylim = 20
-
- else:
-        raise ValueError('Unknown --select option: {}'.format(select))
-
- eval_every_iterations = 10000
- total_ticks = 50
- ticks_freq = 10
-
- ax.set_ylim(0, ylim)
- ax.set_xlim(0, total_ticks)
- ax.xaxis.set_ticks(np.arange(0, total_ticks + 1, ticks_freq))
- ax.xaxis.set_ticklabels(
- np.arange(
- 0,
- total_ticks * eval_every_iterations + 1,
- ticks_freq * eval_every_iterations,
- )
- )
- ax.yaxis.set_ticks(np.arange(ylim + 1))
- ax.yaxis.set_ticklabels(np.arange(ylim + 1))
- ax.grid(color='b', linestyle='solid', linewidth=0.3)
- plt.legend(handles=lines, loc=4)
-
- plt.savefig(fig_path)
- print('Save figure to {}'.format(fig_path))
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--workspace', type=str, required=True)
- parser.add_argument('--select', type=str, required=True)
-
- args = parser.parse_args()
-
- plot_statistics(args)
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py b/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py
deleted file mode 100644
index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import json
-import logging
-import math
-import os
-import time
-from contextlib import suppress
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-from open_clip import LPLoss, LPMetrics, lp_gather_features
-from open_clip.utils import do_mixup, get_mix_lambda
-from .distributed import is_master
-from .zero_shot import zero_shot_eval
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
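As a quick sanity check, the meter above produces an average weighted by each update's sample count `n`, so a batch of four samples contributes four times the weight of a single sample. The class is repeated here so the snippet runs standalone:

```python
class AverageMeter:
    """Running average weighted by the number of samples per update."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

meter = AverageMeter()
meter.update(2.0, n=4)  # batch of 4 samples with mean loss 2.0
meter.update(4.0, n=4)  # batch of 4 samples with mean loss 4.0
# weighted average: (2.0 * 4 + 4.0 * 4) / 8 == 3.0
```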
-def unwrap_model(model):
- if hasattr(model, "module"):
- return model.module
- else:
- return model
-
-
-def train_one_epoch(
- model,
- data,
- epoch,
- optimizer,
- scaler,
- scheduler,
- args,
- tb_writer=None,
- extra_suffix="",
-):
- device = torch.device(args.device)
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- model.train()
- loss = LPLoss(args.lp_loss)
-
- dataloader, sampler = data["train"].dataloader, data["train"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- num_batches_per_epoch = dataloader.num_batches
- sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
- # for toy dataset
- if args.dataset_type == "toy":
- dataloader.dataset.generate_queue()
-
- loss_m = AverageMeter()
- batch_time_m = AverageMeter()
- data_time_m = AverageMeter()
- end = time.time()
-
- for i, batch in enumerate(dataloader):
- step = num_batches_per_epoch * epoch + i
-
- if isinstance(scheduler, dict):
- for s in scheduler.values():
- s(step)
- else:
- scheduler(step)
-
-        audio = batch  # contains mel_spec, waveform, and longer list
- class_label = batch["class_label"]
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- if args.mixup:
- # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146
- mix_lambda = torch.from_numpy(
- get_mix_lambda(0.5, len(audio["waveform"]))
- ).to(device)
- class_label = do_mixup(class_label, mix_lambda)
- else:
- mix_lambda = None
-
- data_time_m.update(time.time() - end)
- if isinstance(optimizer, dict):
- for o_ in optimizer.values():
- o_.zero_grad()
- else:
- optimizer.zero_grad()
-
- with autocast():
- pred = model(audio, mix_lambda=mix_lambda, device=device)
- total_loss = loss(pred, class_label)
-
- if isinstance(optimizer, dict):
- if scaler is not None:
- scaler.scale(total_loss).backward()
- for o_ in optimizer.values():
- if args.horovod:
- o_.synchronize()
- scaler.unscale_(o_)
- with o_.skip_synchronize():
- scaler.step(o_)
- else:
- scaler.step(o_)
- scaler.update()
- else:
- total_loss.backward()
- for o_ in optimizer.values():
- o_.step()
- else:
- if scaler is not None:
- scaler.scale(total_loss).backward()
- if args.horovod:
- optimizer.synchronize()
- scaler.unscale_(optimizer)
- with optimizer.skip_synchronize():
- scaler.step(optimizer)
- else:
- scaler.step(optimizer)
- scaler.update()
- else:
- total_loss.backward()
- optimizer.step()
-
- # Note: we clamp to 4.6052 = ln(100), as in the original paper.
- with torch.no_grad():
- unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100))
- unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100))
-
- batch_time_m.update(time.time() - end)
- end = time.time()
- batch_count = i + 1
-
- if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
- if isinstance(audio, dict):
- batch_size = len(audio["waveform"])
- else:
- batch_size = len(audio)
- num_samples = batch_count * batch_size * args.world_size
- samples_per_epoch = dataloader.num_samples
- percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
- # NOTE loss is coarsely sampled, just master node and per log update
- loss_m.update(total_loss.item(), batch_size)
- if isinstance(optimizer, dict):
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": optimizer.param_groups[0]["lr"],
- }
- for name, val in log_data.items():
- name = f"train{extra_suffix}/{name}"
- if tb_writer is not None:
- tb_writer.add_scalar(name, val, step)
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- wandb.log({name: val, "step": step})
-
- # resetting batch / data time meters per log window
- batch_time_m.reset()
- data_time_m.reset()
- # end for
-
-
-def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""):
- metrics = {}
- if not args.parallel_eval:
- if not is_master(args):
- return metrics
- device = torch.device(args.device)
- model.eval()
-
- # CHANGE
- # zero_shot_metrics = zero_shot_eval(model, data, epoch, args)
- # metrics.update(zero_shot_metrics)
- if is_master(args):
- print("Evaluating...")
- metric_names = args.lp_metrics.split(",")
- eval_tool = LPMetrics(metric_names=metric_names)
-
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- if "val" in data and (
- args.val_frequency
- and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)
- ):
- if args.parallel_eval:
- dataloader, sampler = data["val"].dataloader, data["val"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- samples_per_val = dataloader.num_samples
- else:
- dataloader = data["val"].dataloader
- num_samples = 0
- samples_per_val = dataloader.num_samples
-
- eval_info = {"pred": [], "target": []}
- with torch.no_grad():
- for i, batch in enumerate(dataloader):
-                audio = batch  # contains mel_spec, waveform, and longer list
- class_label = batch["class_label"]
-
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- with autocast():
- pred = model(audio, device=device)
- if args.parallel_eval:
- pred, class_label = lp_gather_features(
- pred, class_label, args.world_size, args.horovod
- )
- eval_info["pred"].append(pred)
- eval_info["target"].append(class_label)
-
- num_samples += class_label.shape[0]
-
- if (i % 100) == 0: # and i != 0:
- logging.info(
- f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]"
- )
-
- if is_master(args):
- eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu()
- eval_info["target"] = torch.cat(eval_info["target"], 0).cpu()
- metric_dict = eval_tool.evaluate_mertics(
- eval_info["pred"], eval_info["target"]
- )
- metrics.update(metric_dict)
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
-
- if is_master(args):
- if not metrics:
- return metrics
-
-        logging.info(
-            f"Eval Epoch: {epoch} "
-            + "\n".join([f"{m}: {round(metrics[m], 4):.4f}" for m in metrics])
-        )
- if args.save_logs:
- for name, val in metrics.items():
- if tb_writer is not None:
- tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch)
-
- with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f:
- f.write(json.dumps(metrics))
- f.write("\n")
-
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- for name, val in metrics.items():
- wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch})
-
- return metrics
- else:
- return metrics
diff --git a/spaces/fffiloni/instant-TTS-Bark-cloning/README.md b/spaces/fffiloni/instant-TTS-Bark-cloning/README.md
deleted file mode 100644
index 900b006e17eeab3c485722571d87875989c12aa3..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/instant-TTS-Bark-cloning/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Coqui Bark Voice Cloning
-emoji: 🐸🐶
-colorFrom: yellow
-colorTo: gray
-python_version: 3.10.12
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/fffiloni/sd-img-variations/app.py b/spaces/fffiloni/sd-img-variations/app.py
deleted file mode 100644
index a1f219ab043065d045d5e5f3451e55305c787aba..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/sd-img-variations/app.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import gradio as gr
-import torch
-from PIL import Image
-
-from lambda_diffusers import StableDiffusionImageEmbedPipeline
-
-def ask(input_im, scale, steps, seed, images):
-    generator = torch.Generator(device=device).manual_seed(int(seed))
-
- images_list = pipe(
- 2*[input_im],
- guidance_scale=scale,
- num_inference_steps=steps,
- generator=generator,
- )
-
- for i, image in enumerate(images_list["sample"]):
- if(images_list["nsfw_content_detected"][i]):
- safe_image = Image.open(r"unsafe.png")
- images.append(safe_image)
- else:
- images.append(image)
- return images
-
-def main(input_im, n_pairs, scale, steps, seed):
- print('Start the magic !')
- images = []
- for i in range(n_pairs):
-        print('Asking for a new pair of images [' + str(i + 1) + '/' + str(n_pairs) + ']')
-        images = ask(input_im, scale, steps, seed + i, images)
- print('Thanks to Sylvain, it worked like a charm!')
- return images
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
- "lambdalabs/sd-image-variations-diffusers",
- revision="273115e88df42350019ef4d628265b8c29ef4af5",
- )
-pipe = pipe.to(device)
-
-inputs = [
- gr.Image(),
- gr.Slider(1, 3, value=2, step=1, label="Pairs of images to ask"),
- gr.Slider(0, 25, value=3, step=1, label="Guidance scale"),
- gr.Slider(5, 50, value=25, step=5, label="Steps"),
- gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True)
-]
-output = gr.Gallery(label="Generated variations")
-output.style(grid=2, height="")
-
-description = \
-"""
-
This demo is running on CPU. Working version fixed by Sylvain @fffiloni. You'll get n pairs of image variations.
-Requesting pairs of images, rather than more than two images at once, helps us avoid heavy CPU load and connection timeouts ;)
-Waiting time (for 2 pairs): ~5-10 minutes • NSFW filters enabled •
-Generate variations on an input image using a fine-tuned version of Stable Diffusion.
-Trained by Justin Pinkney (@Buntworthy) at Lambda
-This version has been ported to 🤗 Diffusers library, see more details on how to use this version in the Lambda Diffusers repo.
-For the original training code see this repo.
-
-
-"""
-
-article = \
-"""
-—
-## How does this work?
-The normal Stable Diffusion model is trained to be conditioned on text input. This version has had the original text encoder (from CLIP) removed, and replaced with
-the CLIP _image_ encoder instead. So instead of generating images based on a text input, images are generated to match CLIP's embedding of the image.
-This creates images which have the same rough style and content, but different details, in particular the composition is generally quite different.
-This is a totally different approach to the img2img script of the original Stable Diffusion and gives very different results.
-The model was fine-tuned on the [LAION aesthetics v2 6+ dataset](https://laion.ai/blog/laion-aesthetics/) to accept the new conditioning.
-Training was done on 4xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud).
-More details on the method and training will come in a future blog post.
-"""
-
-demo = gr.Interface(
- fn=main,
- title="Stable Diffusion Image Variations",
- inputs=inputs,
- outputs=output,
- description=description,
- article=article
- )
-demo.launch()
diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py"
deleted file mode 100644
index 49f41b18b986d229d4dd91aa6a0be74dee6d1296..0000000000000000000000000000000000000000
--- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py"
+++ /dev/null
@@ -1,310 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import input_clipping
-
-def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import os, copy
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
- msg = '正常'
- inputs_array = []
- inputs_show_user_array = []
- history_array = []
- sys_prompt_array = []
- report_part_1 = []
-
- assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。"
- ############################## <Step 1: analyze each file, multi-threaded> ##################################
- for index, fp in enumerate(file_manifest):
- # read the file
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- prefix = "接下来请你逐文件分析下面的工程" if index==0 else ""
- i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}'
- # load the request payload
- inputs_array.append(i_say)
- inputs_show_user_array.append(i_say_show_user)
- history_array.append([])
- sys_prompt_array.append("你是一个程序架构分析师,正在分析一个源代码项目。你的回答必须简单明了。")
-
- # all files read; spawn one request thread per source file and send to ChatGPT for analysis
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array = inputs_array,
- inputs_show_user_array = inputs_show_user_array,
- history_array = history_array,
- sys_prompt_array = sys_prompt_array,
- llm_kwargs = llm_kwargs,
- chatbot = chatbot,
- show_user_at_complete = True
- )
-
- # all files analyzed; write the results to a file and prepare the project-wide summary
- report_part_1 = copy.deepcopy(gpt_response_collection)
- history_to_return = report_part_1
- res = write_results_to_file(report_part_1)
- chatbot.append(("完成?", "逐个文件分析已完成。" + res + "\n\n正在开始汇总。"))
- yield from update_ui(chatbot=chatbot, history=history_to_return) # refresh the UI
-
- ############################## <Step 2: synthesize, single-threaded, batched + iterative> ##################################
- batchsize = 16 # process 16 files per batch
- report_part_2 = []
- previous_iteration_files = []
- last_iteration_result = ""
- while True:
- if len(file_manifest) == 0: break
- this_iteration_file_manifest = file_manifest[:batchsize]
- this_iteration_gpt_response_collection = gpt_response_collection[:batchsize*2]
- file_rel_path = [os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]
- # replace the verbose "please summarize this program file" prompt with the compact "文件名:{all_file[index]}"
- for index, content in enumerate(this_iteration_gpt_response_collection):
- if index%2==0: this_iteration_gpt_response_collection[index] = f"{file_rel_path[index//2]}" # keep only the filename to save tokens
- previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
- previous_iteration_files_string = ', '.join(previous_iteration_files)
- current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
- i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
- inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
- this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
- this_iteration_history.append(last_iteration_result)
- # clip the input to the token limit
- inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
- result = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
- history=this_iteration_history_feed, # analyses from previous iterations
- sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
- report_part_2.extend([i_say, result])
- last_iteration_result = result
-
- file_manifest = file_manifest[batchsize:]
- gpt_response_collection = gpt_response_collection[batchsize*2:]
-
- ############################## ##################################
- history_to_return.extend(report_part_2)
- res = write_results_to_file(history_to_return)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history_to_return) # refresh the UI
-
-
-@CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]+ \
- [f for f in glob.glob('./request_llm/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- project_folder = './'
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-@CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] #+ \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-@CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.jar', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Rect项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.js', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何Rect文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.mod', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.sum', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.work', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.lua', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # clear history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.cs', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.csproj', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- txt_pattern = plugin_kwargs.get("advanced_arg", "") # default to "" so a missing argument doesn't crash .replace below
- txt_pattern = txt_pattern.replace(",", ",") # normalize fullwidth commas
- # patterns to include (e.g.: *.c, *.cpp, *.py, config.toml)
- pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
- if not pattern_include: pattern_include = ["*"] # empty input matches everything
- # file suffixes to exclude (e.g.: ^*.c, ^*.cpp, ^*.py)
- pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
- pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # never parse archive files
- # file names to exclude (e.g.: ^README.md)
- pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
- # build the exclusion regex
- pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + r')$'
- pattern_except += r'|/(' + "|".join(pattern_except_name) + r')$' if pattern_except_name != [] else ''
-
- history.clear()
- import glob, os, re
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- # if an archive was uploaded, locate the extracted folder first so the archive itself is not parsed
- maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
- if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
- extract_folder_path = maybe_dir[0]
- else:
- extract_folder_path = project_folder
- # search uploaded (non-archive) and extracted files with the given include patterns
- file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
- os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
\ No newline at end of file
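The include/exclude pattern handling in `解析任意code项目` above can be isolated into a standalone function. The sketch below is a minimal re-implementation of that logic (the name `build_filters` and the sample pattern string are ours, for illustration only):

```python
import re

def build_filters(txt_pattern: str):
    """Split a pattern string such as '*.py, ^*.zip, ^README.md' into
    include globs plus a single exclusion regex, mirroring the logic above."""
    txt_pattern = txt_pattern.replace("，", ",")  # normalize fullwidth commas
    # comma-separated entries that do not start with '^' are include globs
    include = [p.strip(" ,") for p in txt_pattern.split(",")
               if p.strip() and not p.strip().startswith("^")]
    if not include:
        include = ["*"]  # empty input matches everything
    tokens = [t.strip(" ,") for t in txt_pattern.split(" ") if t.strip(" ,")]
    # '^*.suffix' entries exclude a file suffix; archives are always excluded
    except_suffix = [t.lstrip("^*.") for t in tokens if t.startswith("^*.")]
    except_suffix += ["zip", "rar", "7z", "tar", "gz"]
    # '^name' entries exclude an exact file name
    except_name = [re.escape(t.lstrip("^")) for t in tokens
                   if t.startswith("^") and not t.startswith("^*.")]
    pattern_except = r"/[^/]+\.(" + "|".join(except_suffix) + r")$"
    if except_name:
        pattern_except += r"|/(" + "|".join(except_name) + r")$"
    return include, pattern_except

include, excl = build_filters("*.py, ^*.zip, ^README.md")
```

Note that the original additionally re-admits an excluded suffix when the include pattern names it explicitly (the `pattern.endswith(...)` clause); this sketch omits that refinement.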
diff --git a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py b/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py
deleted file mode 100644
index 676546fa95c86f36584846cda85955e2d40c12a1..0000000000000000000000000000000000000000
--- a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import jax.numpy as jnp
-import flax.linen as nn
-
-from t5_vae_flax_alt.src.encoders import VAE_ENCODER_MODELS
-from t5_vae_flax_alt.src.decoders import VAE_DECODER_MODELS
-from t5_vae_flax_alt.src.config import T5VaeConfig
-
-
-class VAE(nn.Module):
- # see https://github.com/google/flax#what-does-flax-look-like
- """
- An MMD-VAE used with encoder-decoder models.
- Encodes all token encodings into a single latent & spits them back out.
- """
- config: T5VaeConfig
- dtype: jnp.dtype = jnp.float32 # the dtype of the computation
-
- def setup(self):
- self.encoder = VAE_ENCODER_MODELS[self.config.vae_encoder_model](self.config.latent_token_size, self.config.n_latent_tokens)
- self.decoder = VAE_DECODER_MODELS[self.config.vae_decoder_model](self.config.t5.d_model, self.config.n_latent_tokens)
-
- def __call__(self, encoding=None, latent_codes=None):
- latent_codes = self.encode(encoding)
- return self.decode(latent_codes), latent_codes
-
- def encode(self, encoding):
- return self.encoder(encoding)
-
- def decode(self, latent):
- return self.decoder(latent)
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py
deleted file mode 100644
index b0745707fc12872c52a96f82eaf1ab1f204f9a40..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env python3
-raise DeprecationWarning("Use the one in ./scripts")
-
-import time
-import argparse
-import numpy as np
-import gym
-import gym_minigrid
-from gym_minigrid.wrappers import *
-from gym_minigrid.window import Window
-
-def redraw(img):
- if not args.agent_view:
- img = env.render('rgb_array', tile_size=args.tile_size)
-
- window.show_img(img)
-
-def reset():
- if args.seed != -1:
- env.seed(args.seed)
-
- obs = env.reset()
-
- if hasattr(env, 'mission'):
- print('Mission: %s' % env.mission)
- window.set_caption(env.mission)
-
- redraw(obs)
-
-def step(action):
- obs, reward, done, info = env.step(action)
- print('step=%s, reward=%.2f' % (env.step_count, reward))
-
- if done:
- print('done!')
- reset()
- else:
- redraw(obs)
-
-def key_handler(event):
- print('pressed', event.key)
-
- if event.key == 'escape':
- window.close()
- return
-
- if event.key == 'backspace':
- reset()
- return
-
- if event.key == 'left':
- step(env.actions.left)
- return
- if event.key == 'right':
- step(env.actions.right)
- return
- if event.key == 'up':
- step(env.actions.forward)
- return
-
- # Spacebar
- if event.key == ' ':
- step(env.actions.toggle)
- return
- if event.key == 'pageup':
- step(env.actions.pickup)
- return
- if event.key == 'pagedown':
- step(env.actions.drop)
- return
-
- if event.key == 'enter':
- step(env.actions.done)
- return
-
-parser = argparse.ArgumentParser()
-parser.add_argument(
- "--env",
- help="gym environment to load",
- default='MiniGrid-MultiRoom-N6-v0'
-)
-parser.add_argument(
- "--seed",
- type=int,
- help="random seed to generate the environment with",
- default=-1
-)
-parser.add_argument(
- "--tile_size",
- type=int,
- help="size at which to render tiles",
- default=32
-)
-parser.add_argument(
- '--agent_view',
- default=False,
- help="draw what the agent sees (partially observable view)",
- action='store_true'
-)
-
-args = parser.parse_args()
-
-env = gym.make(args.env)
-
-if args.agent_view:
- env = RGBImgPartialObsWrapper(env)
- env = ImgObsWrapper(env)
-
-window = Window('gym_minigrid - ' + args.env)
-window.reg_key_handler(key_handler)
-
-reset()
-
-# Blocking event loop
-window.show(block=True)
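The `key_handler` above is a chain of if-statements; the same bindings can be expressed as a dispatch table, which keeps all key mappings in one place. A minimal sketch (the action names mirror the MiniGrid actions used above; `handle_key` is our illustrative helper, not part of the script):

```python
# Key bindings expressed as data instead of an if/elif chain.
KEY_TO_ACTION = {
    "left": "left",
    "right": "right",
    "up": "forward",
    " ": "toggle",      # spacebar
    "pageup": "pickup",
    "pagedown": "drop",
    "enter": "done",
}

def handle_key(key):
    """Return the env action name bound to a key, or None if unbound."""
    return KEY_TO_ACTION.get(key)
```

A caller would then do `action = handle_key(event.key)` and fall back to reset/close handling for the remaining special keys.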
diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py
deleted file mode 100644
index d78375ce7e62b634c82e163c693a5557b8e2f860..0000000000000000000000000000000000000000
--- a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://hteyun.com'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
- 'temperature': 0.7,
- 'presence_penalty': 0,
- 'messages': messages,
- }
- response = requests.post(url + '/api/chat-stream',
- json=data, stream=True)
-
- if stream:
- for chunk in response.iter_content(chunk_size=None):
- chunk = chunk.decode('utf-8')
- if chunk.strip():
- message = json.loads(chunk)['choices'][0]['message']['content']
- yield message
- else:
- message = response.json()['choices'][0]['message']['content']
- yield message
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
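The streaming branch of `_create_completion` above decodes each HTTP chunk as a complete JSON object. That parsing step can be sketched in isolation with the stdlib only (an illustration of the same assumption; real chunk boundaries from `requests` are not guaranteed to align with JSON objects):

```python
import json

def iter_messages(chunks):
    """Yield message content from streamed chunks, mirroring the
    per-chunk JSON handling in _create_completion above."""
    for chunk in chunks:
        text = chunk.decode("utf-8") if isinstance(chunk, bytes) else chunk
        if not text.strip():
            continue  # skip empty keep-alive chunks
        payload = json.loads(text)
        yield payload["choices"][0]["message"]["content"]

stream = [
    b'{"choices": [{"message": {"content": "Hel"}}]}',
    b"   ",
    b'{"choices": [{"message": {"content": "lo"}}]}',
]
print("".join(iter_messages(stream)))  # prints "Hello"
```

In production it would be safer to buffer bytes until a complete JSON object parses, since a provider may split one object across chunks.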
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py
deleted file mode 100644
index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py
+++ /dev/null
@@ -1,595 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import warnings
-
-import torch
-import torch.nn as nn
-
-from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning
-from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer
-from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential
-from annotator.uniformer.mmcv.utils import build_from_cfg
-from .drop import build_dropout
-from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING,
- TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE)
-
-# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file
-try:
- from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401
- warnings.warn(
- ImportWarning(
- '``MultiScaleDeformableAttention`` has been moved to '
- '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501
- '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501
- 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501
- ))
-
-except ImportError:
- warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from '
- '``mmcv.ops.multi_scale_deform_attn``, '
- 'You should install ``mmcv-full`` if you need this module. ')
-
-
-def build_positional_encoding(cfg, default_args=None):
- """Builder for Position Encoding."""
- return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args)
-
-
-def build_attention(cfg, default_args=None):
- """Builder for attention."""
- return build_from_cfg(cfg, ATTENTION, default_args)
-
-
-def build_feedforward_network(cfg, default_args=None):
- """Builder for feed-forward network (FFN)."""
- return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args)
-
-
-def build_transformer_layer(cfg, default_args=None):
- """Builder for transformer layer."""
- return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args)
-
-
-def build_transformer_layer_sequence(cfg, default_args=None):
- """Builder for transformer encoder and transformer decoder."""
- return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args)
-
-
-@ATTENTION.register_module()
-class MultiheadAttention(BaseModule):
- """A wrapper for ``torch.nn.MultiheadAttention``.
-
- This module implements MultiheadAttention with identity connection,
- and positional encoding is also passed as input.
-
- Args:
- embed_dims (int): The embedding dimension.
- num_heads (int): Parallel attention heads.
- attn_drop (float): A Dropout layer on attn_output_weights.
- Default: 0.0.
- proj_drop (float): A Dropout layer after `nn.MultiheadAttention`.
- Default: 0.0.
- dropout_layer (obj:`ConfigDict`): The dropout_layer used
- when adding the shortcut.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- batch_first (bool): When it is True, Key, Query and Value are shape of
- (batch, n, embed_dim), otherwise (n, batch, embed_dim).
- Default to False.
- """
-
- def __init__(self,
- embed_dims,
- num_heads,
- attn_drop=0.,
- proj_drop=0.,
- dropout_layer=dict(type='Dropout', drop_prob=0.),
- init_cfg=None,
- batch_first=False,
- **kwargs):
- super(MultiheadAttention, self).__init__(init_cfg)
- if 'dropout' in kwargs:
- warnings.warn('The argument `dropout` in MultiheadAttention '
- 'has been deprecated; now you can separately '
- 'set `attn_drop`(float), `proj_drop`(float), '
- 'and `dropout_layer`(dict). ')
- attn_drop = kwargs['dropout']
- dropout_layer['drop_prob'] = kwargs.pop('dropout')
-
- self.embed_dims = embed_dims
- self.num_heads = num_heads
- self.batch_first = batch_first
-
- self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop,
- **kwargs)
-
- self.proj_drop = nn.Dropout(proj_drop)
- self.dropout_layer = build_dropout(
- dropout_layer) if dropout_layer else nn.Identity()
-
- @deprecated_api_warning({'residual': 'identity'},
- cls_name='MultiheadAttention')
- def forward(self,
- query,
- key=None,
- value=None,
- identity=None,
- query_pos=None,
- key_pos=None,
- attn_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `MultiheadAttention`.
-
- **kwargs allow passing a more general data flow when combining
- with other operations in `transformerlayer`.
-
- Args:
- query (Tensor): The input query with shape [num_queries, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_queries, embed_dims].
- key (Tensor): The key tensor with shape [num_keys, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_keys, embed_dims] .
- If None, the ``query`` will be used. Defaults to None.
- value (Tensor): The value tensor with same shape as `key`.
- Same in `nn.MultiheadAttention.forward`. Defaults to None.
- If None, the `key` will be used.
- identity (Tensor): This tensor, with the same shape as x,
- will be used for the identity link.
- If None, `x` will be used. Defaults to None.
- query_pos (Tensor): The positional encoding for query, with
- the same shape as `x`. If not None, it will
- be added to `x` before forward function. Defaults to None.
- key_pos (Tensor): The positional encoding for `key`, with the
- same shape as `key`. Defaults to None. If not None, it will
- be added to `key` before forward function. If None, and
- `query_pos` has the same shape as `key`, then `query_pos`
- will be used for `key_pos`. Defaults to None.
- attn_mask (Tensor): ByteTensor mask with shape [num_queries,
- num_keys]. Same in `nn.MultiheadAttention.forward`.
- Defaults to None.
- key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys].
- Defaults to None.
-
- Returns:
- Tensor: forwarded results with shape
- [num_queries, bs, embed_dims]
- if self.batch_first is False, else
- [bs, num_queries, embed_dims].
- """
-
- if key is None:
- key = query
- if value is None:
- value = key
- if identity is None:
- identity = query
- if key_pos is None:
- if query_pos is not None:
- # use query_pos if key_pos is not available
- if query_pos.shape == key.shape:
- key_pos = query_pos
- else:
- warnings.warn(f'position encoding of key is '
- f'missing in {self.__class__.__name__}.')
- if query_pos is not None:
- query = query + query_pos
- if key_pos is not None:
- key = key + key_pos
-
- # Because the dataflow('key', 'query', 'value') of
- # ``torch.nn.MultiheadAttention`` is (num_query, batch,
- # embed_dims), We should adjust the shape of dataflow from
- # batch_first (batch, num_query, embed_dims) to num_query_first
- # (num_query, batch, embed_dims), and recover ``attn_output``
- # from num_query_first to batch_first.
- if self.batch_first:
- query = query.transpose(0, 1)
- key = key.transpose(0, 1)
- value = value.transpose(0, 1)
-
- out = self.attn(
- query=query,
- key=key,
- value=value,
- attn_mask=attn_mask,
- key_padding_mask=key_padding_mask)[0]
-
- if self.batch_first:
- out = out.transpose(0, 1)
-
- return identity + self.dropout_layer(self.proj_drop(out))
-
-
-@FEEDFORWARD_NETWORK.register_module()
-class FFN(BaseModule):
- """Implements feed-forward networks (FFNs) with identity connection.
-
- Args:
- embed_dims (int): The feature dimension. Same as
- `MultiheadAttention`. Defaults: 256.
- feedforward_channels (int): The hidden dimension of FFNs.
- Defaults: 1024.
- num_fcs (int, optional): The number of fully-connected layers in
- FFNs. Default: 2.
- act_cfg (dict, optional): The activation config for FFNs.
- Default: dict(type='ReLU')
- ffn_drop (float, optional): Probability of an element to be
- zeroed in FFN. Default 0.0.
- add_identity (bool, optional): Whether to add the
- identity connection. Default: `True`.
- dropout_layer (obj:`ConfigDict`): The dropout_layer used
- when adding the shortcut.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- """
-
- @deprecated_api_warning(
- {
- 'dropout': 'ffn_drop',
- 'add_residual': 'add_identity'
- },
- cls_name='FFN')
- def __init__(self,
- embed_dims=256,
- feedforward_channels=1024,
- num_fcs=2,
- act_cfg=dict(type='ReLU', inplace=True),
- ffn_drop=0.,
- dropout_layer=None,
- add_identity=True,
- init_cfg=None,
- **kwargs):
- super(FFN, self).__init__(init_cfg)
- assert num_fcs >= 2, 'num_fcs should be no less ' \
- f'than 2. got {num_fcs}.'
- self.embed_dims = embed_dims
- self.feedforward_channels = feedforward_channels
- self.num_fcs = num_fcs
- self.act_cfg = act_cfg
- self.activate = build_activation_layer(act_cfg)
-
- layers = []
- in_channels = embed_dims
- for _ in range(num_fcs - 1):
- layers.append(
- Sequential(
- Linear(in_channels, feedforward_channels), self.activate,
- nn.Dropout(ffn_drop)))
- in_channels = feedforward_channels
- layers.append(Linear(feedforward_channels, embed_dims))
- layers.append(nn.Dropout(ffn_drop))
- self.layers = Sequential(*layers)
- self.dropout_layer = build_dropout(
- dropout_layer) if dropout_layer else torch.nn.Identity()
- self.add_identity = add_identity
-
- @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN')
- def forward(self, x, identity=None):
- """Forward function for `FFN`.
-
- The function adds ``x`` to the output tensor if ``identity`` is None.
- """
- out = self.layers(x)
- if not self.add_identity:
- return self.dropout_layer(out)
- if identity is None:
- identity = x
- return identity + self.dropout_layer(out)
-
-
-@TRANSFORMER_LAYER.register_module()
-class BaseTransformerLayer(BaseModule):
- """Base `TransformerLayer` for vision transformer.
-
- It can be built from `mmcv.ConfigDict` and support more flexible
- customization, for example, using any number of `FFN or LN ` and
- use different kinds of `attention` by specifying a list of `ConfigDict`
- named `attn_cfgs`. It is worth mentioning that it supports `prenorm`
- when you specify `norm` as the first element of `operation_order`.
- More details about the `prenorm`: `On Layer Normalization in the
- Transformer Architecture `_ .
-
- Args:
- attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None):
- Configs for `self_attn` or `cross_attn` modules. The order of
- the configs in the list should be consistent with the
- corresponding attentions in operation_order.
- If it is a dict, all of the attention modules in operation_order
- will be built with this config. Default: None.
- ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None):
- Configs for FFN. The order of the configs in the list should be
- consistent with the corresponding ffn in operation_order.
- If it is a dict, all of the ffn modules in operation_order
- will be built with this config.
- operation_order (tuple[str]): The execution order of operations
- in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm').
- Support `prenorm` when you specify the first element as `norm`.
- Default: None.
- norm_cfg (dict): Config dict for normalization layer.
- Default: dict(type='LN').
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- batch_first (bool): Whether Key, Query and Value are of shape
- (batch, n, embed_dim) or (n, batch, embed_dim).
- Default: False.
- """
-
- def __init__(self,
- attn_cfgs=None,
- ffn_cfgs=dict(
- type='FFN',
- embed_dims=256,
- feedforward_channels=1024,
- num_fcs=2,
- ffn_drop=0.,
- act_cfg=dict(type='ReLU', inplace=True),
- ),
- operation_order=None,
- norm_cfg=dict(type='LN'),
- init_cfg=None,
- batch_first=False,
- **kwargs):
-
- deprecated_args = dict(
- feedforward_channels='feedforward_channels',
- ffn_dropout='ffn_drop',
- ffn_num_fcs='num_fcs')
- for ori_name, new_name in deprecated_args.items():
- if ori_name in kwargs:
- warnings.warn(
- f'The arguments `{ori_name}` in BaseTransformerLayer '
- f'has been deprecated, now you should set `{new_name}` '
- f'and other FFN related arguments '
- f'to a dict named `ffn_cfgs`. ')
- ffn_cfgs[new_name] = kwargs[ori_name]
-
- super(BaseTransformerLayer, self).__init__(init_cfg)
-
- self.batch_first = batch_first
-
- assert set(operation_order) & set(
- ['self_attn', 'norm', 'ffn', 'cross_attn']) == \
- set(operation_order), f'The operation_order of' \
- f' {self.__class__.__name__} should ' \
- f'only contain operations from ' \
- f"{['self_attn', 'norm', 'ffn', 'cross_attn']}"
-
- num_attn = operation_order.count('self_attn') + operation_order.count(
- 'cross_attn')
- if isinstance(attn_cfgs, dict):
- attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)]
- else:
- assert num_attn == len(attn_cfgs), f'The length ' \
- f'of attn_cfgs {len(attn_cfgs)} is ' \
- f'not consistent with the number of attentions ' \
- f'in operation_order {operation_order}.'
-
- self.num_attn = num_attn
- self.operation_order = operation_order
- self.norm_cfg = norm_cfg
- self.pre_norm = operation_order[0] == 'norm'
- self.attentions = ModuleList()
-
- index = 0
- for operation_name in operation_order:
- if operation_name in ['self_attn', 'cross_attn']:
- if 'batch_first' in attn_cfgs[index]:
- assert self.batch_first == attn_cfgs[index]['batch_first']
- else:
- attn_cfgs[index]['batch_first'] = self.batch_first
- attention = build_attention(attn_cfgs[index])
- # Some custom attentions used as `self_attn`
- # or `cross_attn` can have different behavior.
- attention.operation_name = operation_name
- self.attentions.append(attention)
- index += 1
-
- self.embed_dims = self.attentions[0].embed_dims
-
- self.ffns = ModuleList()
- num_ffns = operation_order.count('ffn')
- if isinstance(ffn_cfgs, dict):
- ffn_cfgs = ConfigDict(ffn_cfgs)
- if isinstance(ffn_cfgs, dict):
- ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)]
- assert len(ffn_cfgs) == num_ffns
- for ffn_index in range(num_ffns):
- if 'embed_dims' not in ffn_cfgs[ffn_index]:
- ffn_cfgs[ffn_index]['embed_dims'] = self.embed_dims
- else:
- assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims
- self.ffns.append(
- build_feedforward_network(ffn_cfgs[ffn_index],
- dict(type='FFN')))
-
- self.norms = ModuleList()
- num_norms = operation_order.count('norm')
- for _ in range(num_norms):
- self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1])
-
- def forward(self,
- query,
- key=None,
- value=None,
- query_pos=None,
- key_pos=None,
- attn_masks=None,
- query_key_padding_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `BaseTransformerLayer`.
-
- **kwargs contains some specific arguments of attentions.
-
- Args:
- query (Tensor): The input query with shape
- [num_queries, bs, embed_dims] if
- self.batch_first is False, else
- [bs, num_queries, embed_dims].
- key (Tensor): The key tensor with shape [num_keys, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_keys, embed_dims].
- value (Tensor): The value tensor with same shape as `key`.
- query_pos (Tensor): The positional encoding for `query`.
- Default: None.
- key_pos (Tensor): The positional encoding for `key`.
- Default: None.
- attn_masks (List[Tensor] | None): 2D Tensor used in
- calculation of corresponding attention. The length of
- it should equal to the number of `attention` in
- `operation_order`. Default: None.
- query_key_padding_mask (Tensor): ByteTensor for `query`, with
- shape [bs, num_queries]. Only used in `self_attn` layer.
- Defaults to None.
- key_padding_mask (Tensor): ByteTensor for `key`, with
- shape [bs, num_keys]. Default: None.
-
- Returns:
- Tensor: forwarded results with shape [num_queries, bs, embed_dims].
- """
-
- norm_index = 0
- attn_index = 0
- ffn_index = 0
- identity = query
- if attn_masks is None:
- attn_masks = [None for _ in range(self.num_attn)]
- elif isinstance(attn_masks, torch.Tensor):
- attn_masks = [
- copy.deepcopy(attn_masks) for _ in range(self.num_attn)
- ]
- warnings.warn(f'Use same attn_mask in all attentions in '
- f'{self.__class__.__name__} ')
- else:
- assert len(attn_masks) == self.num_attn, f'The length of ' \
- f'attn_masks {len(attn_masks)} must be equal ' \
- f'to the number of attention in ' \
- f'operation_order {self.num_attn}'
-
- for layer in self.operation_order:
- if layer == 'self_attn':
- temp_key = temp_value = query
- query = self.attentions[attn_index](
- query,
- temp_key,
- temp_value,
- identity if self.pre_norm else None,
- query_pos=query_pos,
- key_pos=query_pos,
- attn_mask=attn_masks[attn_index],
- key_padding_mask=query_key_padding_mask,
- **kwargs)
- attn_index += 1
- identity = query
-
- elif layer == 'norm':
- query = self.norms[norm_index](query)
- norm_index += 1
-
- elif layer == 'cross_attn':
- query = self.attentions[attn_index](
- query,
- key,
- value,
- identity if self.pre_norm else None,
- query_pos=query_pos,
- key_pos=key_pos,
- attn_mask=attn_masks[attn_index],
- key_padding_mask=key_padding_mask,
- **kwargs)
- attn_index += 1
- identity = query
-
- elif layer == 'ffn':
- query = self.ffns[ffn_index](
- query, identity if self.pre_norm else None)
- ffn_index += 1
-
- return query
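The `operation_order` dispatch loop above can be illustrated with a self-contained miniature. In this sketch, `TinyLayer` and its defaults are assumptions for illustration (plain `nn.MultiheadAttention`, a two-layer FFN), not part of mmcv; it reproduces the prenorm detection and the residual bookkeeping of the loop.

```python
import torch
import torch.nn as nn

class TinyLayer(nn.Module):
    def __init__(self, dim=8, order=('norm', 'self_attn', 'norm', 'ffn')):
        super().__init__()
        self.order = order
        self.pre_norm = order[0] == 'norm'  # same prenorm detection as above
        self.norms = nn.ModuleList(nn.LayerNorm(dim)
                                   for _ in range(order.count('norm')))
        self.attn = nn.MultiheadAttention(dim, num_heads=2)
        self.ffn = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, query):
        norm_i, identity = 0, query
        for op in self.order:
            if op == 'norm':
                query = self.norms[norm_i](query)
                norm_i += 1
            elif op == 'self_attn':
                # in prenorm, the residual is the un-normalized input
                residual = identity if self.pre_norm else query
                out, _ = self.attn(query, query, query)
                query = residual + out
                identity = query
            elif op == 'ffn':
                residual = identity if self.pre_norm else query
                query = residual + self.ffn(query)
        return query

layer = TinyLayer()
x = torch.randn(5, 2, 8)  # (num_queries, batch, embed_dims), batch_first=False
y = layer(x)
```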
-
-
-@TRANSFORMER_LAYER_SEQUENCE.register_module()
-class TransformerLayerSequence(BaseModule):
- """Base class for TransformerEncoder and TransformerDecoder in vision
- transformer.
-
- As base-class of Encoder and Decoder in vision transformer.
- Support customization such as specifying different kind
- of `transformer_layer` in `transformer_coder`.
-
- Args:
- transformerlayers (list[obj:`mmcv.ConfigDict`] |
- obj:`mmcv.ConfigDict`): Config of transformerlayers
- in TransformerCoder. If it is obj:`mmcv.ConfigDict`,
- it would be repeated `num_layers` times to a
- list[`mmcv.ConfigDict`]. Default: None.
- num_layers (int): The number of `TransformerLayer`. Default: None.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- """
-
- def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None):
- super(TransformerLayerSequence, self).__init__(init_cfg)
- if isinstance(transformerlayers, dict):
- transformerlayers = [
- copy.deepcopy(transformerlayers) for _ in range(num_layers)
- ]
- else:
- assert isinstance(transformerlayers, list) and \
- len(transformerlayers) == num_layers
- self.num_layers = num_layers
- self.layers = ModuleList()
- for i in range(num_layers):
- self.layers.append(build_transformer_layer(transformerlayers[i]))
- self.embed_dims = self.layers[0].embed_dims
- self.pre_norm = self.layers[0].pre_norm
-
- def forward(self,
- query,
- key,
- value,
- query_pos=None,
- key_pos=None,
- attn_masks=None,
- query_key_padding_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `TransformerCoder`.
-
- Args:
- query (Tensor): Input query with shape
- `(num_queries, bs, embed_dims)`.
- key (Tensor): The key tensor with shape
- `(num_keys, bs, embed_dims)`.
- value (Tensor): The value tensor with shape
- `(num_keys, bs, embed_dims)`.
- query_pos (Tensor): The positional encoding for `query`.
- Default: None.
- key_pos (Tensor): The positional encoding for `key`.
- Default: None.
- attn_masks (List[Tensor], optional): Each element is 2D Tensor
- which is used in calculation of corresponding attention in
- operation_order. Default: None.
- query_key_padding_mask (Tensor): ByteTensor for `query`, with
- shape [bs, num_queries]. Only used in self-attention
- Default: None.
- key_padding_mask (Tensor): ByteTensor for `key`, with
- shape [bs, num_keys]. Default: None.
-
- Returns:
- Tensor: results with shape [num_queries, bs, embed_dims].
- """
- for layer in self.layers:
- query = layer(
- query,
- key,
- value,
- query_pos=query_pos,
- key_pos=key_pos,
- attn_masks=attn_masks,
- query_key_padding_mask=query_key_padding_mask,
- key_padding_mask=key_padding_mask,
- **kwargs)
- return query
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py
deleted file mode 100644
index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os.path as osp
-
-from .builder import DATASETS
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class STAREDataset(CustomDataset):
- """STARE dataset.
-
- In segmentation map annotation for STARE, 0 stands for background, which is
- included in 2 categories. ``reduce_zero_label`` is fixed to False. The
- ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to
- '.ah.png'.
- """
-
- CLASSES = ('background', 'vessel')
-
- PALETTE = [[120, 120, 120], [6, 230, 230]]
-
- def __init__(self, **kwargs):
- super(STAREDataset, self).__init__(
- img_suffix='.png',
- seg_map_suffix='.ah.png',
- reduce_zero_label=False,
- **kwargs)
- assert osp.exists(self.img_dir)
diff --git a/spaces/godelbach/onlyjitz/app.py b/spaces/godelbach/onlyjitz/app.py
deleted file mode 100644
index 0808d24bdd57d04255c469cdba802ddf388f28a6..0000000000000000000000000000000000000000
--- a/spaces/godelbach/onlyjitz/app.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-
-categories = ("Armbar", "Triangle")
-
-
-learn = load_learner("model.pkl")
-
-image = gr.Image(shape=(224, 224))
-
-examples = ["images/armbar1.jpeg", "images/armbar2.jpeg", "images/armbar3.webp", "images/armbar4.png", "images/flying_armbar.jpeg",
- "images/triangle.jpeg", "images/triangle2.webp", "images/triangle3.jpeg", "images/triangle4.jpeg", "images/triangle_armbar1.jpeg"]
-
-
-def image_classifier(img):
- _, _, probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-
-app = gr.Interface(fn=image_classifier, inputs=image,
- outputs="label", examples=examples)
-app.launch()
diff --git a/spaces/gradio/HuBERT/examples/criss/README.md b/spaces/gradio/HuBERT/examples/criss/README.md
deleted file mode 100644
index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/criss/README.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Cross-lingual Retrieval for Iterative Self-Supervised Training
-
-https://arxiv.org/pdf/2006.09526.pdf
-
-## Introduction
-
-CRISS is a multilingual sequence-to-sequence pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time.
-
-## Requirements:
-
-* faiss: https://github.com/facebookresearch/faiss
-* mosesdecoder: https://github.com/moses-smt/mosesdecoder
-* flores: https://github.com/facebookresearch/flores
-* LASER: https://github.com/facebookresearch/LASER
-
-## Unsupervised Machine Translation
-##### 1. Download and decompress CRISS checkpoints
-```
-cd examples/criss
-wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz
-tar -xf criss_3rd_checkpoints.tar.gz
-```
-##### 2. Download and preprocess Flores test dataset
-Make sure to run all scripts from examples/criss directory
-```
-bash download_and_preprocess_flores_test.sh
-```
-
-##### 3. Run Evaluation on Sinhala-English
-```
-bash unsupervised_mt/eval.sh
-```
-
-## Sentence Retrieval
-##### 1. Download and preprocess Tatoeba dataset
-```
-bash download_and_preprocess_tatoeba.sh
-```
-
-##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English
-```
-bash sentence_retrieval/sentence_retrieval_tatoeba.sh
-```
-
-## Mining
-##### 1. Install faiss
-Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md
-##### 2. Mine pseudo-parallel data between Kazakh and English
-```
-bash mining/mine_example.sh
-```
-
-## Citation
-```bibtex
-@article{tran2020cross,
- title={Cross-lingual retrieval for iterative self-supervised training},
- author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao},
- journal={arXiv preprint arXiv:2006.09526},
- year={2020}
-}
-```
diff --git a/spaces/gradio/HuBERT/scripts/sacrebleu.sh b/spaces/gradio/HuBERT/scripts/sacrebleu.sh
deleted file mode 100644
index c10bf2b76ea032deabab6f5c9d8a3e1e884f1642..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/scripts/sacrebleu.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 4 ]; then
- echo "usage: $0 TESTSET SRCLANG TGTLANG GEN"
- exit 1
-fi
-
-TESTSET=$1
-SRCLANG=$2
-TGTLANG=$3
-
-GEN=$4
-
-if ! command -v sacremoses &> /dev/null
-then
- echo "sacremoses could not be found, please install with: pip install sacremoses"
- exit 1
-fi
-
-grep ^H $GEN \
-| sed 's/^H\-//' \
-| sort -n -k 1 \
-| cut -f 3 \
-| sacremoses detokenize \
-> $GEN.sorted.detok
-
-sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok
diff --git a/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py b/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py
deleted file mode 100644
index 1408ddd7c909c7257fbcea79f8576231a40f9211..0000000000000000000000000000000000000000
--- a/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import tempfile
-import torch
-import yaml
-from basicsr.archs.stylegan2_arch import StyleGAN2Discriminator
-from basicsr.data.paired_image_dataset import PairedImageDataset
-from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss
-
-from gfpgan.archs.arcface_arch import ResNetArcFace
-from gfpgan.archs.gfpganv1_arch import FacialComponentDiscriminator, GFPGANv1
-from gfpgan.models.gfpgan_model import GFPGANModel
-
-
-def test_gfpgan_model():
- with open('tests/data/test_gfpgan_model.yml', mode='r') as f:
- opt = yaml.load(f, Loader=yaml.FullLoader)
-
- # build model
- model = GFPGANModel(opt)
- # test attributes
- assert model.__class__.__name__ == 'GFPGANModel'
- assert isinstance(model.net_g, GFPGANv1) # generator
- assert isinstance(model.net_d, StyleGAN2Discriminator) # discriminator
- # facial component discriminators
- assert isinstance(model.net_d_left_eye, FacialComponentDiscriminator)
- assert isinstance(model.net_d_right_eye, FacialComponentDiscriminator)
- assert isinstance(model.net_d_mouth, FacialComponentDiscriminator)
- # identity network
- assert isinstance(model.network_identity, ResNetArcFace)
- # losses
- assert isinstance(model.cri_pix, L1Loss)
- assert isinstance(model.cri_perceptual, PerceptualLoss)
- assert isinstance(model.cri_gan, GANLoss)
- assert isinstance(model.cri_l1, L1Loss)
- # optimizer
- assert isinstance(model.optimizers[0], torch.optim.Adam)
- assert isinstance(model.optimizers[1], torch.optim.Adam)
-
- # prepare data
- gt = torch.rand((1, 3, 512, 512), dtype=torch.float32)
- lq = torch.rand((1, 3, 512, 512), dtype=torch.float32)
- loc_left_eye = torch.rand((1, 4), dtype=torch.float32)
- loc_right_eye = torch.rand((1, 4), dtype=torch.float32)
- loc_mouth = torch.rand((1, 4), dtype=torch.float32)
- data = dict(gt=gt, lq=lq, loc_left_eye=loc_left_eye, loc_right_eye=loc_right_eye, loc_mouth=loc_mouth)
- model.feed_data(data)
- # check data shape
- assert model.lq.shape == (1, 3, 512, 512)
- assert model.gt.shape == (1, 3, 512, 512)
- assert model.loc_left_eyes.shape == (1, 4)
- assert model.loc_right_eyes.shape == (1, 4)
- assert model.loc_mouths.shape == (1, 4)
-
- # ----------------- test optimize_parameters -------------------- #
- model.feed_data(data)
- model.optimize_parameters(1)
- assert model.output.shape == (1, 3, 512, 512)
- assert isinstance(model.log_dict, dict)
- # check returned keys
- expected_keys = [
- 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth',
- 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye',
- 'l_d_right_eye', 'l_d_mouth'
- ]
- assert set(expected_keys).issubset(set(model.log_dict.keys()))
-
- # ----------------- remove pyramid_loss_weight-------------------- #
- model.feed_data(data)
- model.optimize_parameters(100000) # large than remove_pyramid_loss = 50000
- assert model.output.shape == (1, 3, 512, 512)
- assert isinstance(model.log_dict, dict)
- # check returned keys
- expected_keys = [
- 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth',
- 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye',
- 'l_d_right_eye', 'l_d_mouth'
- ]
- assert set(expected_keys).issubset(set(model.log_dict.keys()))
-
- # ----------------- test save -------------------- #
- with tempfile.TemporaryDirectory() as tmpdir:
- model.opt['path']['models'] = tmpdir
- model.opt['path']['training_states'] = tmpdir
- model.save(0, 1)
-
- # ----------------- test the test function -------------------- #
- model.test()
- assert model.output.shape == (1, 3, 512, 512)
- # delete net_g_ema
- model.__delattr__('net_g_ema')
- model.test()
- assert model.output.shape == (1, 3, 512, 512)
- assert model.net_g.training is True # should back to training mode after testing
-
- # ----------------- test nondist_validation -------------------- #
- # construct dataloader
- dataset_opt = dict(
- name='Demo',
- dataroot_gt='tests/data/gt',
- dataroot_lq='tests/data/gt',
- io_backend=dict(type='disk'),
- scale=4,
- phase='val')
- dataset = PairedImageDataset(dataset_opt)
- dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0)
- assert model.is_train is True
- with tempfile.TemporaryDirectory() as tmpdir:
- model.opt['path']['visualization'] = tmpdir
- model.nondist_validation(dataloader, 1, None, save_img=True)
- assert model.is_train is True
- # check metric_results
- assert 'psnr' in model.metric_results
- assert isinstance(model.metric_results['psnr'], float)
-
- # validation
- with tempfile.TemporaryDirectory() as tmpdir:
- model.opt['is_train'] = False
- model.opt['val']['suffix'] = 'test'
- model.opt['path']['visualization'] = tmpdir
- model.opt['val']['pbar'] = True
- model.nondist_validation(dataloader, 1, None, save_img=True)
- # check metric_results
- assert 'psnr' in model.metric_results
- assert isinstance(model.metric_results['psnr'], float)
-
- # if opt['val']['suffix'] is None
- model.opt['val']['suffix'] = None
- model.opt['name'] = 'demo'
- model.opt['path']['visualization'] = tmpdir
- model.nondist_validation(dataloader, 1, None, save_img=True)
- # check metric_results
- assert 'psnr' in model.metric_results
- assert isinstance(model.metric_results['psnr'], float)
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py b/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py
deleted file mode 100644
index b4d7c4fb13d1541f9d11af92a76cc859d71f5547..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-"""Streaming images and labels from datasets created with dataset_tool.py."""
-
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import json
-import torch
-import dnnlib
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-#----------------------------------------------------------------------------
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- use_labels = False, # Enable conditioning labels? False = label dimension is zero.
- xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size.
- random_seed = 0, # Random seed to use when applying max_size.
- ):
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- self._raw_labels_std = self._raw_labels.std(0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
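The int64-to-one-hot branch in `get_label` above can be shown standalone. This is a sketch under the assumption that `label_shape` is a single dimension `(num_classes,)`; `to_onehot` is an illustrative helper, not part of this codebase.

```python
import numpy as np

def to_onehot(label, num_classes):
    # scalar class index -> float32 one-hot vector, as in get_label
    onehot = np.zeros((num_classes,), dtype=np.float32)
    onehot[label] = 1
    return onehot

vec = to_onehot(np.int64(2), 5)
```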
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- def get_label_std(self):
- return self._raw_labels_std
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- assert self.image_shape[1] == self.image_shape[2]
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-#----------------------------------------------------------------------------
-
-class ImageFolderDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- resolution = None, # Ensure specific resolution, None = highest available.
- **super_kwargs, # Additional arguments for the Dataset base class.
- ):
- self._path = path
- self._zipfile = None
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape)
- if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- raise IOError('Image files do not match the specified resolution')
- super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
- image = image.transpose(2, 0, 1) # HWC => CHW
- return image
-
- def _load_raw_labels(self):
- fname = 'dataset.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
-#----------------------------------------------------------------------------
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py
deleted file mode 100644
index c5e907be6703ccc43f263b4c40f7d1b84bc47755..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py
+++ /dev/null
@@ -1,145 +0,0 @@
-from collections import namedtuple
-import torch
-import torch.nn.functional as F
-from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module
-
-"""
-ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Flatten(Module):
- def forward(self, input):
- return input.view(input.size(0), -1)
-
-
-def l2_norm(input, axis=1):
- norm = torch.norm(input, 2, axis, True)
- output = torch.div(input, norm)
- return output
-
-
-class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])):
- """ A named tuple describing a ResNet block. """
-
-
-def get_block(in_channel, depth, num_units, stride=2):
- return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)]
-
-
-def get_blocks(num_layers):
- if num_layers == 50:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=4),
- get_block(in_channel=128, depth=256, num_units=14),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 100:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=13),
- get_block(in_channel=128, depth=256, num_units=30),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 152:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=8),
- get_block(in_channel=128, depth=256, num_units=36),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- else:
- raise ValueError(
- "Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers))
- return blocks
-
-
-class SEModule(Module):
- def __init__(self, channels, reduction):
- super(SEModule, self).__init__()
- self.avg_pool = AdaptiveAvgPool2d(1)
- self.fc1 = Conv2d(channels, channels // reduction,
- kernel_size=1, padding=0, bias=False)
- self.relu = ReLU(inplace=True)
- self.fc2 = Conv2d(channels // reduction, channels,
- kernel_size=1, padding=0, bias=False)
- self.sigmoid = Sigmoid()
-
- def forward(self, x):
- module_input = x
- x = self.avg_pool(x)
- x = self.fc1(x)
- x = self.relu(x)
- x = self.fc2(x)
- x = self.sigmoid(x)
- return module_input * x
-
-
-class bottleneck_IR(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1),
- 1, bias=False), PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1,
- bias=False), BatchNorm2d(depth)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-class bottleneck_IR_SE(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR_SE, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False),
- PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False),
- BatchNorm2d(depth),
- SEModule(depth, 16)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-def _upsample_add(x, y):
- """Upsample and add two feature maps.
- Args:
- x: (Variable) top feature map to be upsampled.
- y: (Variable) lateral feature map.
- Returns:
- (Variable) added feature map.
- Note that in PyTorch, when the input size is odd, the feature map
- upsampled with `F.upsample(..., scale_factor=2, mode='nearest')`
- may not match the lateral feature map size.
- e.g.
- original input size: [N,_,15,15] ->
- conv2d feature map size: [N,_,8,8] ->
- upsampled feature map size: [N,_,16,16]
- So we choose bilinear upsample which supports arbitrary output sizes.
- """
- _, _, H, W = y.size()
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
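The deleted `SEModule` above implements squeeze-and-excitation channel attention: a global average pool squeezes each channel to one number, two 1x1 convolutions with a `reduction` bottleneck produce per-channel gates, and a sigmoid-scaled multiply reweights the input. A dependency-free sketch of that data flow on nested lists (the weight matrices `w1`/`w2` are hypothetical stand-ins for the two 1x1 convs, not values from the source):

```python
import math

def se_gate(feature_map, w1, w2):
    """Squeeze-and-excitation on a [C][H][W] nested list.

    w1: [C//r][C] weights for the reduction layer, w2: [C][C//r] for the
    expansion layer (both illustrative stand-ins for the 1x1 convs).
    """
    # squeeze: global average pool per channel
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_map]
    # excite: fc1 -> ReLU -> fc2 -> sigmoid
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # reweight: scale every spatial position of channel c by its gate
    return [[[v * gates[c] for v in row] for row in ch]
            for c, ch in enumerate(feature_map)]
```

Because the gates multiply whole channels, relative magnitudes within a channel are preserved; only the per-channel scale changes.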
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp
deleted file mode 100644
index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
- CHECK_CUDA(input);
- CHECK_CUDA(kernel);
-
- return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
-}
\ No newline at end of file
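The binding above dispatches to a CUDA `upfirdn2d_op`; the operation the name spells out — upsample by zero-insertion, apply a FIR filter, then downsample — is easiest to see in a 1-D pure-Python reference (a sketch of the pattern only, not the actual kernel's padding semantics):

```python
def upfirdn1d(x, kernel, up=1, down=1, pad=(0, 0)):
    """1-D reference of upfirdn: zero-stuff by `up`, pad, convolve
    with `kernel`, then keep every `down`-th sample."""
    # upsample: insert up-1 zeros after each sample
    ups = []
    for v in x:
        ups.append(v)
        ups.extend([0.0] * (up - 1))
    ups = ups[:len(ups) - (up - 1)] if up > 1 else ups  # drop trailing zeros
    # pad both ends
    ups = [0.0] * pad[0] + ups + [0.0] * pad[1]
    # FIR filter: correlation with the flipped kernel == convolution
    k = kernel[::-1]
    n = len(ups) - len(k) + 1
    filtered = [sum(ups[i + j] * k[j] for j in range(len(k))) for i in range(n)]
    # downsample
    return filtered[::down]
```

With `up == down == 1` and an averaging kernel this is just a moving average, which is a quick sanity check on the indexing.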
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py
deleted file mode 100644
index 51e80c5e296b24cae130ab0459baf268e0db7673..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from itertools import repeat
-import collections.abc
-
-from torch import nn as nn
-from torchvision.ops.misc import FrozenBatchNorm2d
-
-
-def freeze_batch_norm_2d(module, module_match={}, name=''):
- """
- Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is
- itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and
- returned. Otherwise, the module is walked recursively and submodules are converted in place.
-
- Args:
- module (torch.nn.Module): Any PyTorch module.
- module_match (dict): Dictionary of full module names to freeze (all if empty)
- name (str): Full module name (prefix)
-
- Returns:
- torch.nn.Module: Resulting module
-
- Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762
- """
- res = module
- is_match = True
- if module_match:
- is_match = name in module_match
- if is_match and isinstance(module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)):
- res = FrozenBatchNorm2d(module.num_features)
- res.num_features = module.num_features
- res.affine = module.affine
- if module.affine:
- res.weight.data = module.weight.data.clone().detach()
- res.bias.data = module.bias.data.clone().detach()
- res.running_mean.data = module.running_mean.data
- res.running_var.data = module.running_var.data
- res.eps = module.eps
- else:
- for child_name, child in module.named_children():
- full_child_name = '.'.join([name, child_name]) if name else child_name
- new_child = freeze_batch_norm_2d(child, module_match, full_child_name)
- if new_child is not child:
- res.add_module(child_name, new_child)
- return res
-
-
-# From PyTorch internals
-def _ntuple(n):
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = lambda n, x: _ntuple(n)(x)
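The `_ntuple` helper above turns a scalar into an n-tuple while passing iterables through unchanged, which is how kernel-size arguments like `3` and `(3, 5)` end up in one canonical form. A usage sketch:

```python
import collections.abc
from itertools import repeat

def _ntuple(n):
    # scalar -> tuple of length n; any iterable -> returned as-is
    def parse(x):
        if isinstance(x, collections.abc.Iterable):
            return x
        return tuple(repeat(x, n))
    return parse

to_2tuple = _ntuple(2)
print(to_2tuple(3))       # scalar is repeated
print(to_2tuple((3, 5)))  # iterable passes through unchanged
```

One caveat worth remembering: strings are iterable, so `to_2tuple("ab")` returns `"ab"` untouched rather than repeating it.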
diff --git a/spaces/hanstyle/tts/wav2lip_train.py b/spaces/hanstyle/tts/wav2lip_train.py
deleted file mode 100644
index 6e0811808af55464a803be1e268be33f1b8a31a9..0000000000000000000000000000000000000000
--- a/spaces/hanstyle/tts/wav2lip_train.py
+++ /dev/null
@@ -1,374 +0,0 @@
-from os.path import dirname, join, basename, isfile
-from tqdm import tqdm
-
-from models import SyncNet_color as SyncNet
-from models import Wav2Lip as Wav2Lip
-import audio
-
-import torch
-from torch import nn
-from torch import optim
-import torch.backends.cudnn as cudnn
-from torch.utils import data as data_utils
-import numpy as np
-
-from glob import glob
-
-import os, random, cv2, argparse
-from hparams import hparams, get_image_list
-
-parser = argparse.ArgumentParser(description='Code to train the Wav2Lip model without the visual quality discriminator')
-
-parser.add_argument("--data_root", help="Root folder of the preprocessed LRS2 dataset", required=True, type=str)
-
-parser.add_argument('--checkpoint_dir', help='Save checkpoints to this directory', required=True, type=str)
-parser.add_argument('--syncnet_checkpoint_path', help='Load the pre-trained Expert discriminator', required=True, type=str)
-
-parser.add_argument('--checkpoint_path', help='Resume from this checkpoint', default=None, type=str)
-
-args = parser.parse_args()
-
-
-global_step = 0
-global_epoch = 0
-use_cuda = torch.cuda.is_available()
-print('use_cuda: {}'.format(use_cuda))
-
-syncnet_T = 5
-syncnet_mel_step_size = 16
-
-class Dataset(object):
- def __init__(self, split):
- self.all_videos = get_image_list(args.data_root, split)
-
- def get_frame_id(self, frame):
- return int(basename(frame).split('.')[0])
-
- def get_window(self, start_frame):
- start_id = self.get_frame_id(start_frame)
- vidname = dirname(start_frame)
-
- window_fnames = []
- for frame_id in range(start_id, start_id + syncnet_T):
- frame = join(vidname, '{}.jpg'.format(frame_id))
- if not isfile(frame):
- return None
- window_fnames.append(frame)
- return window_fnames
-
- def read_window(self, window_fnames):
- if window_fnames is None: return None
- window = []
- for fname in window_fnames:
- img = cv2.imread(fname)
- if img is None:
- return None
- try:
- img = cv2.resize(img, (hparams.img_size, hparams.img_size))
- except Exception as e:
- return None
-
- window.append(img)
-
- return window
-
- def crop_audio_window(self, spec, start_frame):
- if type(start_frame) == int:
- start_frame_num = start_frame
- else:
- start_frame_num = self.get_frame_id(start_frame) # 0-indexing ---> 1-indexing
- start_idx = int(80. * (start_frame_num / float(hparams.fps)))
-
- end_idx = start_idx + syncnet_mel_step_size
-
- return spec[start_idx : end_idx, :]
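`crop_audio_window` above maps a video frame index to a mel-spectrogram slice: at 80 mel steps per second (the constant `80.`, which presumably matches the hop settings in `hparams`), frame `f` at `fps` frames per second starts at mel step `int(80 * f / fps)`, and `syncnet_mel_step_size = 16` steps are taken. The index arithmetic in isolation:

```python
def mel_window(start_frame_num, fps, mel_steps_per_sec=80, window=16):
    # video frame index -> [start, end) indices into the mel spectrogram
    start_idx = int(mel_steps_per_sec * (start_frame_num / float(fps)))
    return start_idx, start_idx + window
```

At 25 fps, frame 25 (one second in) lands at mel step 80, so the 16-step window covers roughly 0.2 s of audio.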
-
- def get_segmented_mels(self, spec, start_frame):
- mels = []
- assert syncnet_T == 5
- start_frame_num = self.get_frame_id(start_frame) + 1 # 0-indexing ---> 1-indexing
- if start_frame_num - 2 < 0: return None
- for i in range(start_frame_num, start_frame_num + syncnet_T):
- m = self.crop_audio_window(spec, i - 2)
- if m.shape[0] != syncnet_mel_step_size:
- return None
- mels.append(m.T)
-
- mels = np.asarray(mels)
-
- return mels
-
- def prepare_window(self, window):
- # 3 x T x H x W
- x = np.asarray(window) / 255.
- x = np.transpose(x, (3, 0, 1, 2))
-
- return x
-
- def __len__(self):
- return len(self.all_videos)
-
- def __getitem__(self, idx):
- while 1:
- idx = random.randint(0, len(self.all_videos) - 1)
- vidname = self.all_videos[idx]
- img_names = list(glob(join(vidname, '*.jpg')))
- if len(img_names) <= 3 * syncnet_T:
- continue
-
- img_name = random.choice(img_names)
- wrong_img_name = random.choice(img_names)
- while wrong_img_name == img_name:
- wrong_img_name = random.choice(img_names)
-
- window_fnames = self.get_window(img_name)
- wrong_window_fnames = self.get_window(wrong_img_name)
- if window_fnames is None or wrong_window_fnames is None:
- continue
-
- window = self.read_window(window_fnames)
- if window is None:
- continue
-
- wrong_window = self.read_window(wrong_window_fnames)
- if wrong_window is None:
- continue
-
- try:
- wavpath = join(vidname, "audio.wav")
- wav = audio.load_wav(wavpath, hparams.sample_rate)
-
- orig_mel = audio.melspectrogram(wav).T
- except Exception as e:
- continue
-
- mel = self.crop_audio_window(orig_mel.copy(), img_name)
-
- if (mel.shape[0] != syncnet_mel_step_size):
- continue
-
- indiv_mels = self.get_segmented_mels(orig_mel.copy(), img_name)
- if indiv_mels is None: continue
-
- window = self.prepare_window(window)
- y = window.copy()
- window[:, :, window.shape[2]//2:] = 0.
-
- wrong_window = self.prepare_window(wrong_window)
- x = np.concatenate([window, wrong_window], axis=0)
-
- x = torch.FloatTensor(x)
- mel = torch.FloatTensor(mel.T).unsqueeze(0)
- indiv_mels = torch.FloatTensor(indiv_mels).unsqueeze(1)
- y = torch.FloatTensor(y)
- return x, indiv_mels, mel, y
-
-def save_sample_images(x, g, gt, global_step, checkpoint_dir):
- x = (x.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8)
- g = (g.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8)
- gt = (gt.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8)
-
- refs, inps = x[..., 3:], x[..., :3]
- folder = join(checkpoint_dir, "samples_step{:09d}".format(global_step))
- if not os.path.exists(folder): os.mkdir(folder)
- collage = np.concatenate((refs, inps, g, gt), axis=-2)
- for batch_idx, c in enumerate(collage):
- for t in range(len(c)):
- cv2.imwrite('{}/{}_{}.jpg'.format(folder, batch_idx, t), c[t])
-
-logloss = nn.BCELoss()
-def cosine_loss(a, v, y):
- d = nn.functional.cosine_similarity(a, v)
- loss = logloss(d.unsqueeze(1), y)
-
- return loss
-
-device = torch.device("cuda" if use_cuda else "cpu")
-syncnet = SyncNet().to(device)
-for p in syncnet.parameters():
- p.requires_grad = False
-
-recon_loss = nn.L1Loss()
-def get_sync_loss(mel, g):
- g = g[:, :, :, g.size(3)//2:]
- g = torch.cat([g[:, :, i] for i in range(syncnet_T)], dim=1)
- # B, 3 * T, H//2, W
- a, v = syncnet(mel, g)
- y = torch.ones(g.size(0), 1).float().to(device)
- return cosine_loss(a, v, y)
-
-def train(device, model, train_data_loader, test_data_loader, optimizer,
- checkpoint_dir=None, checkpoint_interval=None, nepochs=None):
-
- global global_step, global_epoch
- resumed_step = global_step
-
- while global_epoch < nepochs:
- print('Starting Epoch: {}'.format(global_epoch))
- running_sync_loss, running_l1_loss = 0., 0.
- prog_bar = tqdm(enumerate(train_data_loader))
- for step, (x, indiv_mels, mel, gt) in prog_bar:
- model.train()
- optimizer.zero_grad()
-
- # Move data to CUDA device
- x = x.to(device)
- mel = mel.to(device)
- indiv_mels = indiv_mels.to(device)
- gt = gt.to(device)
-
- g = model(indiv_mels, x)
-
- if hparams.syncnet_wt > 0.:
- sync_loss = get_sync_loss(mel, g)
- else:
- sync_loss = 0.
-
- l1loss = recon_loss(g, gt)
-
- loss = hparams.syncnet_wt * sync_loss + (1 - hparams.syncnet_wt) * l1loss
- loss.backward()
- optimizer.step()
-
- if global_step % checkpoint_interval == 0:
- save_sample_images(x, g, gt, global_step, checkpoint_dir)
-
- global_step += 1
- cur_session_steps = global_step - resumed_step
-
- running_l1_loss += l1loss.item()
- if hparams.syncnet_wt > 0.:
- running_sync_loss += sync_loss.item()
- else:
- running_sync_loss += 0.
-
- if global_step == 1 or global_step % checkpoint_interval == 0:
- save_checkpoint(
- model, optimizer, global_step, checkpoint_dir, global_epoch)
-
- if global_step == 1 or global_step % hparams.eval_interval == 0:
- with torch.no_grad():
- average_sync_loss = eval_model(test_data_loader, global_step, device, model, checkpoint_dir)
-
- if average_sync_loss < .75:
- hparams.set_hparam('syncnet_wt', 0.01) # without image GAN a lesser weight is sufficient
-
- prog_bar.set_description('L1: {}, Sync Loss: {}'.format(running_l1_loss / (step + 1),
- running_sync_loss / (step + 1)))
-
- global_epoch += 1
-
-
-def eval_model(test_data_loader, global_step, device, model, checkpoint_dir):
- eval_steps = 700
- print('Evaluating for {} steps'.format(eval_steps))
- sync_losses, recon_losses = [], []
- step = 0
- while 1:
- for x, indiv_mels, mel, gt in test_data_loader:
- step += 1
- model.eval()
-
- # Move data to CUDA device
- x = x.to(device)
- gt = gt.to(device)
- indiv_mels = indiv_mels.to(device)
- mel = mel.to(device)
-
- g = model(indiv_mels, x)
-
- sync_loss = get_sync_loss(mel, g)
- l1loss = recon_loss(g, gt)
-
- sync_losses.append(sync_loss.item())
- recon_losses.append(l1loss.item())
-
- if step > eval_steps:
- averaged_sync_loss = sum(sync_losses) / len(sync_losses)
- averaged_recon_loss = sum(recon_losses) / len(recon_losses)
-
- print('L1: {}, Sync loss: {}'.format(averaged_recon_loss, averaged_sync_loss))
-
- return averaged_sync_loss
-
-def save_checkpoint(model, optimizer, step, checkpoint_dir, epoch):
-
- checkpoint_path = join(
- checkpoint_dir, "checkpoint_step{:09d}.pth".format(global_step))
- optimizer_state = optimizer.state_dict() if hparams.save_optimizer_state else None
- torch.save({
- "state_dict": model.state_dict(),
- "optimizer": optimizer_state,
- "global_step": step,
- "global_epoch": epoch,
- }, checkpoint_path)
- print("Saved checkpoint:", checkpoint_path)
-
-
-def _load(checkpoint_path):
- if use_cuda:
- checkpoint = torch.load(checkpoint_path)
- else:
- checkpoint = torch.load(checkpoint_path,
- map_location=lambda storage, loc: storage)
- return checkpoint
-
-def load_checkpoint(path, model, optimizer, reset_optimizer=False, overwrite_global_states=True):
- global global_step
- global global_epoch
-
- print("Load checkpoint from: {}".format(path))
- checkpoint = _load(path)
- s = checkpoint["state_dict"]
- new_s = {}
- for k, v in s.items():
- new_s[k.replace('module.', '')] = v
- model.load_state_dict(new_s)
- if not reset_optimizer:
- optimizer_state = checkpoint["optimizer"]
- if optimizer_state is not None:
- print("Load optimizer state from {}".format(path))
- optimizer.load_state_dict(checkpoint["optimizer"])
- if overwrite_global_states:
- global_step = checkpoint["global_step"]
- global_epoch = checkpoint["global_epoch"]
-
- return model
-
-if __name__ == "__main__":
- checkpoint_dir = args.checkpoint_dir
-
- # Dataset and Dataloader setup
- train_dataset = Dataset('train')
- test_dataset = Dataset('val')
-
- train_data_loader = data_utils.DataLoader(
- train_dataset, batch_size=hparams.batch_size, shuffle=True,
- num_workers=hparams.num_workers)
-
- test_data_loader = data_utils.DataLoader(
- test_dataset, batch_size=hparams.batch_size,
- num_workers=4)
-
- device = torch.device("cuda" if use_cuda else "cpu")
-
- # Model
- model = Wav2Lip().to(device)
- print('total trainable params {}'.format(sum(p.numel() for p in model.parameters() if p.requires_grad)))
-
- optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad],
- lr=hparams.initial_learning_rate)
-
- if args.checkpoint_path is not None:
- load_checkpoint(args.checkpoint_path, model, optimizer, reset_optimizer=False)
-
- load_checkpoint(args.syncnet_checkpoint_path, syncnet, None, reset_optimizer=True, overwrite_global_states=False)
-
- if not os.path.exists(checkpoint_dir):
- os.mkdir(checkpoint_dir)
-
- # Train!
- train(device, model, train_data_loader, test_data_loader, optimizer,
- checkpoint_dir=checkpoint_dir,
- checkpoint_interval=hparams.checkpoint_interval,
- nepochs=hparams.nepochs)
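The training loop above blends the two objectives as a convex combination, and after each eval it bumps `syncnet_wt` to 0.01 once the averaged sync loss falls below 0.75. The schedule, stripped of the tensors:

```python
def combined_loss(l1, sync, syncnet_wt):
    # convex combination used in train(): wt * sync + (1 - wt) * L1
    return syncnet_wt * sync + (1 - syncnet_wt) * l1

def maybe_enable_sync(avg_sync_loss, syncnet_wt, threshold=0.75, new_wt=0.01):
    # once lip-sync eval loss drops below the threshold, start weighting
    # the sync term (0.01 suffices without the image GAN, per the comment)
    return new_wt if avg_sync_loss < threshold else syncnet_wt
```

So training is pure L1 reconstruction until the generator's lip region is good enough for the frozen expert discriminator to give a meaningful signal.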
diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py
deleted file mode 100644
index 4c11ab3decec6f30f46fcd6121a3cfd5bc7957c2..0000000000000000000000000000000000000000
--- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py
+++ /dev/null
@@ -1,169 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
-"""
-AutoAnchor utils
-"""
-
-import random
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from utils import TryExcept
-from utils.general import LOGGER, TQDM_BAR_FORMAT, colorstr
-
-PREFIX = colorstr('AutoAnchor: ')
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
- a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
- if da and (da.sign() != ds.sign()): # same order
- LOGGER.info(f'{PREFIX}Reversing anchor order')
- m.anchors[:] = m.anchors.flip(0)
-
-
-@TryExcept(f'{PREFIX}ERROR')
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
- wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- best = x.max(1)[0] # best_x
- aat = (x > 1 / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1 / thr).float().mean() # best possible recall
- return bpr, aat
-
- stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides
- anchors = m.anchors.clone() * stride # current anchors
- bpr, aat = metric(anchors.cpu().view(-1, 2))
- s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). '
- if bpr > 0.98: # threshold to recompute
- LOGGER.info(f'{s}Current anchors are a good fit to dataset ✅')
- else:
- LOGGER.info(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...')
- na = m.anchors.numel() // 2 # number of anchors
- anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
- m.anchors[:] = anchors.clone().view_as(m.anchors)
- check_anchor_order(m) # must be in pixel-space (not grid-space)
- m.anchors /= stride
- s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)'
- else:
- s = f'{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)'
- LOGGER.info(s)
-
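`check_anchors` above scores anchors with a symmetric width/height ratio: a label counts as recoverable when, for its best anchor, both the width ratio and the height ratio (or their inverses) exceed `1/thr`. The mean of that indicator is the Best Possible Recall (BPR) logged in `s`. A loop-level restatement of the tensorized `metric()`:

```python
def anchor_metric(wh, anchors, thr=4.0):
    """Best possible recall: a label (w, h) is recoverable if some anchor's
    width and height are each within a factor `thr` of the label's."""
    hits = 0
    for w, h in wh:
        best = 0.0
        for aw, ah in anchors:
            rw, rh = w / aw, h / ah
            # symmetric ratio in (0, 1]; 1.0 means a perfect match
            m = min(min(rw, 1 / rw), min(rh, 1 / rh))
            best = max(best, m)
        if best > 1 / thr:
            hits += 1
    return hits / len(wh)
```

With the default `thr=4.0`, a 100x100 label is out of reach of a 10x10 anchor (ratio 10) but well within reach of an 80x80 one (ratio 1.25), which is exactly the situation anchor evolution is meant to fix.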
-
-def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
- """ Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- dataset: path to data.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- from scipy.cluster.vq import kmeans
-
- npr = np.random
- thr = 1 / thr
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k, verbose=True):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
- s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \
- f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \
- f'past_thr={x[x > thr].mean():.3f}-mean: '
- for x in k:
- s += '%i,%i, ' % (round(x[0]), round(x[1]))
- if verbose:
- LOGGER.info(s[:-2])
- return k
-
- if isinstance(dataset, str): # *.yaml file
- with open(dataset, errors='ignore') as f:
- data_dict = yaml.safe_load(f) # model dict
- from utils.dataloaders import LoadImagesAndLabels
- dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- LOGGER.info(f'{PREFIX}WARNING ⚠️ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size')
- wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels
- # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans init
- try:
- LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...')
- assert n <= len(wh) # apply overdetermined constraint
- s = wh.std(0) # sigmas for whitening
- k = kmeans(wh / s, n, iter=30)[0] * s # points
- assert n == len(k) # kmeans may return fewer points than requested if wh is insufficient or too similar
- except Exception:
- LOGGER.warning(f'{PREFIX}WARNING ⚠️ switching strategies from kmeans to random init')
- k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init
- wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0))
- k = print_results(k, verbose=False)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
- f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, anchor shape, mutation prob, sigma
- pbar = tqdm(range(gen), bar_format=TQDM_BAR_FORMAT) # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
- v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f'{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
- if verbose:
- print_results(k, verbose)
-
- return print_results(k).astype(np.float32)
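The `# Evolve` stage above is a keep-the-best genetic search: multiply the anchors by clipped random noise, accept the mutation only when `anchor_fitness` improves. Stripped of the progress bar and with a pluggable fitness function, it reduces to this (names and the toy fitness below are illustrative, not from the source):

```python
import numpy as np

def evolve(k, fitness, gen=200, mp=0.9, s=0.1, seed=0):
    """Keep-the-best genetic search, as in the deleted kmean_anchors()."""
    rng = np.random.default_rng(seed)
    f = fitness(k)
    for _ in range(gen):
        v = np.ones(k.shape)
        while (v == 1).all():  # mutate until a change occurs (no duplicates)
            v = ((rng.random(k.shape) < mp) * rng.random()
                 * rng.standard_normal(k.shape) * s + 1).clip(0.3, 3.0)
        kg = (k * v).clip(min=2.0)  # anchors never shrink below 2 px
        fg = fitness(kg)
        if fg > f:  # greedy accept: keep only improving mutations
            f, k = fg, kg.copy()
    return k, f
```

Because mutations are only ever accepted on improvement, the returned fitness is monotonically non-decreasing relative to the starting anchors.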
diff --git a/spaces/henryezell/freewilly/app.py b/spaces/henryezell/freewilly/app.py
deleted file mode 100644
index 8be47e7462d04255ee691ae31eeae8b73920f87b..0000000000000000000000000000000000000000
--- a/spaces/henryezell/freewilly/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/FreeWilly2").launch()
\ No newline at end of file
diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py
deleted file mode 100644
index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000
--- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import ONNXVITS_models
-import utils
-from text import text_to_sequence
-import torch
-import commons
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json")
-symbols = hps.symbols
-net_g = ONNXVITS_models.SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
-_ = net_g.eval()
-_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g)
-
-text1 = get_text("ありがとうございます。", hps)
-stn_tst = text1
-with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.tensor([0])
- o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)
\ No newline at end of file
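`get_text` above inserts a blank token (id 0) between symbol ids when `hps.data.add_blank` is set. The `commons.intersperse` it calls is, in the standard VITS code layout, equivalent to this one-liner (reproduced here as a sketch of that helper):

```python
def intersperse(lst, item):
    # place `item` before, between, and after every element:
    # [a, b] -> [item, a, item, b, item]
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result
```

The blank tokens give the alignment module explicit "rest" positions between phonemes, which is why the sequence roughly doubles in length.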
diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py
deleted file mode 100644
index 093d8826809aa2681f6088174427337a59e0c882..0000000000000000000000000000000000000000
--- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-import logging
-
-logging.getLogger('jieba').setLevel(logging.WARNING)
-jieba.initialize()
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'x'),
- ('ㄐ', 'tʃ⁼'),
- ('ㄑ', 'tʃʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ts`⁼'),
- ('ㄔ', 'ts`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ts⁼'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'ɥæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'ɥn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'əŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'pwo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'tɕ'),
- ('ㄑ', 'tɕʰ'),
- ('ㄒ', 'ɕ'),
- ('ㄓ', 'tʂ'),
- ('ㄔ', 'tʂʰ'),
- ('ㄕ', 'ʂ'),
- ('ㄖ', 'ɻ'),
- ('ㄗ', 'ts'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ɤ'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'yæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'yn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'ɤŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'y'),
- ('ˉ', '˥'),
- ('ˊ', '˧˥'),
- ('ˇ', '˨˩˦'),
- ('ˋ', '˥˩'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def number_to_chinese(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
- if text != '':
- text += ' '
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa(text):
- for regex, replacement in _bopomofo_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa2(text):
- for regex, replacement in _bopomofo_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i([aoe])', r'y\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_ipa(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa(text)
- text = re.sub('i([aoe])', r'j\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_ipa2(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa2(text)
- text = re.sub(r'i([aoe])', r'j\1', text)
- text = re.sub(r'u([aoəe])', r'w\1', text)
- text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
- text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
- return text
\ No newline at end of file
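Every converter above (`latin_to_bopomofo`, `bopomofo_to_ipa`, ...) folds an ordered list of `(compiled pattern, replacement)` pairs over the text. The ordering is load-bearing: multi-character syllables such as 'ㄅㄛ' must be rewritten before their single-character prefixes ('ㄅ'), or the longer rule never gets a chance to fire. The pattern in miniature, with ASCII stand-ins:

```python
import re

def apply_rules(text, rules):
    # rules: ordered (compiled pattern, replacement) pairs;
    # longer patterns must come first so 'ab' wins over 'a'
    for regex, replacement in rules:
        text = re.sub(regex, replacement, text)
    return text

good = [(re.compile('ab'), 'X'), (re.compile('a'), 'Y')]  # longest first
bad = [(re.compile('a'), 'Y'), (re.compile('ab'), 'X')]   # prefix fires early
```

Running both orderings on `'aba'` shows the difference: the correct order yields `'XY'`, while the reversed order consumes the `'a'` first and the `'ab'` rule never matches.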
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py
deleted file mode 100644
index 6689ca39d92ece151aa27e93692b17e665a80075..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py
+++ /dev/null
@@ -1,228 +0,0 @@
-import cv2
-
-import numpy as np
-import os
-import plotly.express as px
-import plotly.figure_factory as ff
-import datetime
-import plotly.io as pio
-import plotly.graph_objs as go
-
-pio.kaleido.scope.mathjax = None
-import math
-# import pylab
-from matplotlib.colors import LinearSegmentedColormap
-from PIL import ImageColor
-
-
-def distribute_glacier(list_of_samples):
- list_of_glaciers = {}
- for glacier in ['JAC']:
- #for glacier in [ 'COL', 'Mapple', 'Crane', 'Jorum','DBE','SI', 'JAC']:
- list_of_glaciers[glacier] = [sample for sample in list_of_samples if glacier in sample]
- return list_of_glaciers
-
-
-def create_dict(list_of_samples):
- list_dict = []
- for sample in list_of_samples:
- sample_split = sample.split('_')
- finish_date = datetime.datetime.fromisoformat(sample_split[1]) + datetime.timedelta(days=50)
- sample_dict = {
- 'Glacier': sample_split[0],
- 'Start': sample_split[1],
- 'Finish': str(finish_date),
- 'Satellite:': sample_split[2]
- }
- list_dict.append(sample_dict)
- return list_dict
-
-
-if __name__ == '__main__':
- train_dir = '/home/ho11laqe/PycharmProjects/data_raw/fronts/train/'
- test_dir = '/home/ho11laqe/PycharmProjects/data_raw/fronts/test/'
-
- list_of_train_samples = os.listdir(train_dir)
- list_of_test_samples = os.listdir(test_dir)
- list_of_samples = list_of_train_samples + list_of_test_samples
- list_of_glaciers = distribute_glacier(list_of_samples)
- list_dict = create_dict(list_of_samples)
-
- # define color map
- colormap = px.colors.sequential.Reds[-1::-1]
- for glacier in list_of_glaciers:
- print(glacier)
- list_of_glaciers[glacier].sort()
-
-
- if glacier in ['COL', 'Mapple']:
- data_directory = test_dir
- image_directory = '/home/ho11laqe/PycharmProjects/data_raw/sar_images/test/'
- else:
- data_directory = train_dir
- image_directory = '/home/ho11laqe/PycharmProjects/data_raw/sar_images/train/'
-
-
- # define SAR blackground image
- if glacier == 'COL':
- canvas = cv2.imread(image_directory + 'COL_2011-11-13_TDX_7_1_092.png')
- shape = canvas.shape
-
- elif glacier == 'JAC':
- canvas = cv2.imread(image_directory + 'JAC_2009-06-21_TSX_6_1_005.png')
- shape = canvas.shape
-
- elif glacier == 'Jorum':
- canvas = cv2.imread(image_directory + 'Jorum_2011-09-04_TSX_7_4_034.png')
- shape = canvas.shape
-
- elif glacier == 'Mapple':
- canvas = cv2.imread(image_directory + 'Mapple_2008-10-13_TSX_7_2_034.png')
- shape = canvas.shape
-
- elif glacier == 'SI':
- canvas = cv2.imread(image_directory + 'SI_2013-08-14_TSX_7_1_125.png')
-
- elif glacier == 'Crane':
- canvas = cv2.imread(image_directory + 'Crane_2008-10-13_TSX_7_3_034.png')
-
- elif glacier == 'DBE':
- canvas = cv2.imread(image_directory + 'DBE_2008-03-30_TSX_7_3_049.png')
-
- else:
- print('No image for background')
- exit()
-
- number_images = len(list_of_glaciers[glacier])
- kernel = np.ones((3, 3), np.uint8)
-
- # iterate over all fronts of one glacier
- for i, image_name in enumerate(list_of_glaciers[glacier]):
- front = cv2.imread(data_directory + image_name)
-
- # if front label has to be resized to fit background image
- # the front is not dilated.
- if front.shape != canvas.shape:
- front = cv2.resize(front, (shape[1], shape[0]))
-
- else:
- front = cv2.dilate(front, kernel)
-
- # color interpolation based on position in dataset
- # TODO based on actual date
- index = (1 - i / number_images) * (len(colormap) - 1)
- up = math.ceil(index)
- down = up - 1
- color_up = np.array(ImageColor.getcolor(colormap[up], 'RGB'))
- color_down = np.array(ImageColor.getcolor(colormap[down], 'RGB'))
- dif = up - down
- color = color_up * (1 - dif) + color_down * dif
-
- # draw front on canvas
- non_zeros = np.nonzero(front)
- canvas[non_zeros[:2]] = np.uint([color for _ in non_zeros[0]])
-
- #scale reference for fontsize
- ref_x = 15000 / 7
-
- if glacier == 'COL':
- image = canvas[750:, 200:2800]
- new_shape = image.shape
- res = 7
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0]- int(80 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.4, 4.4],
- ticktext=['2011 (+0.8°C)', '2020 (+1.2°C)'],
- outlinewidth=0)
-
- elif glacier == 'Mapple':
- image = canvas
- new_shape = image.shape
- res = 7
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2006', '2020 '],
- outlinewidth=0)
-
- elif glacier == 'Crane':
- image = canvas[:2500,:]
- new_shape = image.shape
- res = 7
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2002', '2014'],
- outlinewidth=0)
-
- elif glacier == 'Jorum':
- image = canvas#[200:1600, 1500:]
- new_shape = image.shape
- res = 7
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(240 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2003', '2020'],
- outlinewidth=0)
-
- elif glacier == 'DBE':
- image = canvas[700:, 750:]
- new_shape = image.shape
- res = 7
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.7, 4.7], ticktext=['1995', '2014'],
- outlinewidth=0)
-
- elif glacier == 'SI':
- image = canvas
- new_shape = image.shape
- res = 7
- scale = new_shape[0] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(240 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['1995', '2014'],
- outlinewidth=0)
-
- elif glacier == 'JAC':
- image = canvas[:, :]
- new_shape = image.shape
- res = 6
- scale = new_shape[1] / ref_x
- fig = px.imshow(image, height=new_shape[0] - int(340 * scale), width=new_shape[1])
- legend = dict(thickness=int(50 * scale), tickvals=[-4.6, 4.7],
- ticktext=['2009 (+0.7°C)', '2015 (+0.9°C)'],
- outlinewidth=0)
- else:
- fig = px.imshow(canvas)
- res = 7
- scale = 1
-
- colorbar_trace = go.Scatter(x=[None],
- y=[None],
- mode='markers',
- marker=dict(
- colorscale=colormap[::-1],
- showscale=True,
- cmin=-5,
- cmax=5,
- colorbar=legend
- ),
- hoverinfo='none'
- )
- fig.update_layout(yaxis=dict(tickmode='array',
- tickvals=[0, 5000 / res, 10000 / res, 15000 / res, 20000 / res, 25000 / res],
- ticktext=[0, 5, 10, 15, 20, 25],
- title='km'))
- fig.update_layout(xaxis=dict(tickmode='array',
- tickvals=[0, 5000 / res, 10000 / res, 15000 / res, 20000 / res, 25000 / res],
- ticktext=[0, 5, 10, 15, 20, 25],
- title='km'))
-
- fig.update_xaxes(tickfont=dict(size=int(40 * scale)))
- fig.update_yaxes(tickfont=dict(size=int(40 * scale)))
- fig.update_layout(font=dict(size=int(60 * scale), family="Computer Modern"))
- fig.update_coloraxes(colorbar_x=0)
- fig['layout']['xaxis']['title']['font']['size'] = int(60 * scale)
- fig['layout']['yaxis']['title']['font']['size'] = int(60 * scale)
-
- fig['layout']['showlegend'] = False
- fig.add_trace(colorbar_trace)
- fig.write_image('output/' + glacier + "_front_change.pdf", format='pdf')
- # fig.show()
\ No newline at end of file
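The color-interpolation step in the loop above has a quirk worth noting: `dif = up - down` is always 1 by construction, so the blend collapses onto `color_down`. A hedged sketch of what was presumably intended, with the fraction taken as `up - index` (this is a reconstruction, not the script's code):

```python
import math

def interpolate_color(index, colors):
    """Linearly interpolate between adjacent entries of a discrete colormap.

    The deleted script computed dif = up - down (always 1), which always
    returns colors[down]; here the fractional distance up - index is used
    to actually blend the two neighbours.
    """
    up = math.ceil(index)
    down = max(up - 1, 0)
    if up == down:
        return list(colors[up])
    frac = up - index  # weight toward the lower neighbour
    return [cu * (1 - frac) + cd * frac
            for cu, cd in zip(colors[up], colors[down])]

print(interpolate_color(1.25, [(0, 0, 0), (100, 100, 100), (200, 200, 200)]))
# [125.0, 125.0, 125.0]
```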
diff --git a/spaces/hra/GPT4-makes-BabyAGI/README.md b/spaces/hra/GPT4-makes-BabyAGI/README.md
deleted file mode 100644
index e93cb2ef6fae4bcff7e254e0d5adbefdcd3b059c..0000000000000000000000000000000000000000
--- a/spaces/hra/GPT4-makes-BabyAGI/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: GPT4 Makes BabyAGI
-emoji: 📊
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: cc-by-nc-sa-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py b/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
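The deleted `commons.py` defines `convert_pad_shape` twice, identically. The helper just reverses the per-dimension `[left, right]` pairs and flattens them into the last-dimension-first flat list that `F.pad` expects; a torch-free sketch for a quick sanity check:

```python
def convert_pad_shape(pad_shape):
    # Reverse the [left, right] pairs and flatten: F.pad pads the
    # last dimension first, so the list order must be inverted.
    return [item for sublist in pad_shape[::-1] for item in sublist]

# The padding used by shift_1d: pad the last dimension by 1 on the left.
print(convert_pad_shape([[0, 0], [0, 0], [1, 0]]))  # [1, 0, 0, 0, 0, 0]
```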
diff --git a/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts b/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts
deleted file mode 100644
index 3d8f924eba449978957a62c39c7406f819edf49a..0000000000000000000000000000000000000000
--- a/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Chunk array into arrays of length at most `chunkSize`
- *
- * @param chunkSize must be greater than or equal to 1
- */
-export function chunk<T extends unknown[] | string>(arr: T, chunkSize: number): T[] {
- if (isNaN(chunkSize) || chunkSize < 1) {
- throw new RangeError("Invalid chunk size: " + chunkSize);
- }
-
- if (!arr.length) {
- return [];
- }
-
- /// Small optimization to not chunk buffers unless needed
- if (arr.length <= chunkSize) {
- return [arr];
- }
-
- return range(Math.ceil(arr.length / chunkSize)).map((i) => {
- return arr.slice(i * chunkSize, (i + 1) * chunkSize);
- }) as T[];
-}
-
-function range(n: number, b?: number): number[] {
- return b
- ? Array(b - n)
- .fill(0)
- .map((_, i) => n + i)
- : Array(n)
- .fill(0)
- .map((_, i) => i);
-}
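The deleted TypeScript `chunk` helper translates directly; a Python sketch of the same slicing logic, for a quick check:

```python
import math

def chunk(arr, chunk_size):
    """Split `arr` into slices of length at most `chunk_size`."""
    if chunk_size < 1:
        raise ValueError("Invalid chunk size: %s" % chunk_size)
    if not arr:
        return []
    # Small optimization mirroring the TS version: no slicing needed
    # when the input already fits in one chunk.
    if len(arr) <= chunk_size:
        return [arr]
    return [arr[i * chunk_size:(i + 1) * chunk_size]
            for i in range(math.ceil(len(arr) / chunk_size))]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```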
diff --git a/spaces/hyuan5040/ChatWithSpeech/app.py b/spaces/hyuan5040/ChatWithSpeech/app.py
deleted file mode 100644
index 122358ddec17831bbfea06dd04fd346cb77b5da4..0000000000000000000000000000000000000000
--- a/spaces/hyuan5040/ChatWithSpeech/app.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import tempfile
-import gradio as gr
-import openai
-from neon_tts_plugin_coqui import CoquiTTS
-
-def Question(Ask_Question):
- # pass the generated text to audio
-    openai.api_key = os.environ.get("OPENAI_API_KEY")  # do not hard-code secret keys
- # Set up the model and prompt
- model_engine = "text-davinci-003"
- #prompt = "who is alon musk?"
- # Generate a response
- completion = openai.Completion.create(
- engine=model_engine,
- prompt=Ask_Question,
- max_tokens=1024,
- n=1,
- stop=None,
- temperature=0.5,)
- response = completion.choices[0].text
- #out_result=resp['message']
- return response
-
-LANGUAGES = list(CoquiTTS.langs.keys())
-default_lang = "en"
-import telnetlib
-#import whisper
-#whisper_model = whisper.load_model("small")
-whisper = gr.Interface.load(name="spaces/sanchit-gandhi/whisper-large-v2")
-#chatgpt = gr.Blocks.load(name="spaces/fffiloni/whisper-to-chatGPT")
-import os
-import json
-session_token = os.environ.get('SessionToken')
-#api_endpoint = os.environ.get('API_EndPoint')
-# ChatGPT
-#from revChatGPT.ChatGPT import Chatbot
-#chatbot = Chatbot({"session_token": session_token}) # You can start a custom conversation
-import asyncio
-from pygpt import PyGPT
-
-title = "Speech to ChatGPT to Speech"
-#info = "more info at [Neon Coqui TTS Plugin](https://github.com/NeonGeckoCom/neon-tts-plugin-coqui), [Coqui TTS](https://github.com/coqui-ai/TTS)"
-#badge = "https://visitor-badge-reloaded.herokuapp.com/badge?page_id=neongeckocom.neon-tts-plugin-coqui"
-coquiTTS = CoquiTTS()
-chat_id = {'conversation_id': None, 'parent_id': None}
-headers = {'Authorization': 'yusin'}
-
-async def chat_gpt_ask(prompt):
- chat_gpt = PyGPT(session_token)
- await chat_gpt.connect()
- await chat_gpt.wait_for_ready()
- answer = await chat_gpt.ask(prompt)
- print(answer)
-    await chat_gpt.disconnect()
-    return answer
-
-# ChatGPT
-def chat_hf(audio, custom_token, language):
- #output = chatgpt(audio, "transcribe", fn_index=0)
- #whisper_text, gpt_response = output[0], output[1]
- try:
- whisper_text = translate(audio)
- if whisper_text == "ERROR: You have to either use the microphone or upload an audio file":
- gpt_response = "MISSING AUDIO: Record your voice by clicking the microphone button, do not forget to stop recording before sending your message ;)"
- else:
- #gpt_response = chatbot.ask(whisper_text, conversation_id=conversation_id, parent_id=None)
-            gpt_response = asyncio.run(chat_gpt_ask(whisper_text))
- #if chat_id['conversation_id'] != None:
- # data = {"content": whisper_text, "conversation_id": chat_id['conversation_id'], "parent_id": chat_id['parent_id']}
- #else:
- # data = {"content": whisper_text}
- #print(data)
- #res = requests.get('http://myip.ipip.net', timeout=5).text
- #print(res)
- #response = requests.post('api_endpoint', headers=headers, json=data, verify=False, timeout=5)
- #print('this is my answear', response.text)
- #chat_id['parent_id'] = response.json()["response_id"]
- #chat_id['conversation_id'] = response.json()["conversation_id"]
- #gpt_response = response.json()["content"]
- #response = requests.get('https://api.pawan.krd/chat/gpt?text=' + whisper_text + '&cache=false', verify=False, timeout=5)
- #print(response.text)
-
- #whisper_text = translate(audio)
- #api = ChatGPT(session_token)
- #resp = api.send_message(whisper_text)
-
- #api.refresh_auth() # refresh the authorization token
- #api.reset_conversation() # reset the conversation
- #gpt_response = resp['message']
-
- except:
- whisper_text = translate(audio)
- gpt_response = """Sorry, I'm quite busy right now, but please try again later :)"""
- #whisper_text = translate(audio)
- #api = ChatGPT(custom_token)
- #resp = api.send_message(whisper_text)
-
- #api.refresh_auth() # refresh the authorization token
- #api.reset_conversation() # reset the conversation
- #gpt_response = resp['message']
-
- ## call openai
- gpt_response = Question(whisper_text)
-
- # to voice
- with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
- coquiTTS.get_tts(gpt_response, fp, speaker = {"language" : language})
-
- return whisper_text, gpt_response, fp.name
-
-# whisper
-#def translate(audio):
-# print("""
-# —
-# Sending audio to Whisper ...
-# —
-# """)
-#
-# audio = whisper.load_audio(audio)
-# audio = whisper.pad_or_trim(audio)
-#
-# mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device)
-#
-# _, probs = whisper_model.detect_language(mel)
-#
-# transcript_options = whisper.DecodingOptions(task="transcribe", fp16 = False)
-#
-# transcription = whisper.decode(whisper_model, mel, transcript_options)
-#
-# print("language spoken: " + transcription.language)
-# print("transcript: " + transcription.text)
-# print("———————————————————————————————————————————")
-#
-# return transcription.text
-
-def translate(audio):
- print("""
- —
- Sending audio to Whisper ...
- —
- """)
-
- text_result = whisper(audio, None, "transcribe", fn_index=0)
- #print(text_result)
- return text_result
-
-
-with gr.Blocks() as blocks:
-    gr.Markdown("<h1><center>"
-                + title
-                + "</center></h1>")
- #gr.Markdown(description)
- radio = gr.Radio(label="Language",choices=LANGUAGES,value=default_lang)
- with gr.Row(equal_height=True):# equal_height=False
- with gr.Column():# variant="panel"
- audio_file = gr.Audio(source="microphone",type="filepath")
- custom_token = gr.Textbox(label='If it fails, use your own session token', placeholder="your own session token")
- with gr.Row():# mobile_collapse=False
- submit = gr.Button("Submit", variant="primary")
- with gr.Column():
- text1 = gr.Textbox(label="Speech to Text")
- text2 = gr.Textbox(label="ChatGPT Response")
- audio = gr.Audio(label="Output", interactive=False)
- #gr.Markdown(info)
-    #gr.Markdown("<center>"
-    #            +f''
-    #            +"</center>")
-
- # actions
- submit.click(
- chat_hf,
- [audio_file, custom_token, radio],
- [text1, text2, audio],
- )
- radio.change(lambda lang: CoquiTTS.langs[lang]["sentence"], radio, text2)
-
-
-blocks.launch(debug=True)
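The `chat_hf` function above hands Gradio the path of a temp WAV written by the TTS engine. The same pattern can be sketched without Coqui by writing a silent clip with the stdlib `wave` module (the sample rate and duration here are arbitrary stand-ins):

```python
import tempfile
import wave

# Write one second of 16 kHz mono silence to a named temp file and keep
# its path, as chat_hf does with the real TTS output.
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
    with wave.open(fp, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(16000)
        wav.writeframes(b"\x00\x00" * 16000)
    path = fp.name

print(path.endswith(".wav"))  # True
```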
diff --git a/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py b/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py
deleted file mode 100644
index 9a9761c518a1b07c5996165869742af0a52c82bc..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py
+++ /dev/null
@@ -1,116 +0,0 @@
-"""This package includes all the modules related to data loading and preprocessing
-
- To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.
- You need to implement four functions:
- -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
- -- <__len__>: return the size of dataset.
- -- <__getitem__>: get a data point from data loader.
-    -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
-
-Now you can use the dataset class by specifying flag '--dataset_mode dummy'.
-See our template dataset class 'template_dataset.py' for more details.
-"""
-import numpy as np
-import importlib
-import torch.utils.data
-from face3d.data.base_dataset import BaseDataset
-
-
-def find_dataset_using_name(dataset_name):
- """Import the module "data/[dataset_name]_dataset.py".
-
- In the file, the class called DatasetNameDataset() will
- be instantiated. It has to be a subclass of BaseDataset,
- and it is case-insensitive.
- """
- dataset_filename = "data." + dataset_name + "_dataset"
- datasetlib = importlib.import_module(dataset_filename)
-
- dataset = None
- target_dataset_name = dataset_name.replace('_', '') + 'dataset'
- for name, cls in datasetlib.__dict__.items():
- if name.lower() == target_dataset_name.lower() \
- and issubclass(cls, BaseDataset):
- dataset = cls
-
- if dataset is None:
- raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name))
-
- return dataset
-
-
-def get_option_setter(dataset_name):
- """Return the static method of the dataset class."""
- dataset_class = find_dataset_using_name(dataset_name)
- return dataset_class.modify_commandline_options
-
-
-def create_dataset(opt, rank=0):
- """Create a dataset given the option.
-
- This function wraps the class CustomDatasetDataLoader.
- This is the main interface between this package and 'train.py'/'test.py'
-
- Example:
- >>> from data import create_dataset
- >>> dataset = create_dataset(opt)
- """
- data_loader = CustomDatasetDataLoader(opt, rank=rank)
- dataset = data_loader.load_data()
- return dataset
-
-class CustomDatasetDataLoader():
- """Wrapper class of Dataset class that performs multi-threaded data loading"""
-
- def __init__(self, opt, rank=0):
- """Initialize this class
-
- Step 1: create a dataset instance given the name [dataset_mode]
- Step 2: create a multi-threaded data loader.
- """
- self.opt = opt
- dataset_class = find_dataset_using_name(opt.dataset_mode)
- self.dataset = dataset_class(opt)
- self.sampler = None
- print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__))
- if opt.use_ddp and opt.isTrain:
- world_size = opt.world_size
- self.sampler = torch.utils.data.distributed.DistributedSampler(
- self.dataset,
- num_replicas=world_size,
- rank=rank,
- shuffle=not opt.serial_batches
- )
- self.dataloader = torch.utils.data.DataLoader(
- self.dataset,
- sampler=self.sampler,
- num_workers=int(opt.num_threads / world_size),
- batch_size=int(opt.batch_size / world_size),
- drop_last=True)
- else:
- self.dataloader = torch.utils.data.DataLoader(
- self.dataset,
- batch_size=opt.batch_size,
- shuffle=(not opt.serial_batches) and opt.isTrain,
- num_workers=int(opt.num_threads),
- drop_last=True
- )
-
- def set_epoch(self, epoch):
- self.dataset.current_epoch = epoch
- if self.sampler is not None:
- self.sampler.set_epoch(epoch)
-
- def load_data(self):
- return self
-
- def __len__(self):
- """Return the number of data in the dataset"""
- return min(len(self.dataset), self.opt.max_dataset_size)
-
- def __iter__(self):
- """Return a batch of data"""
- for i, data in enumerate(self.dataloader):
- if i * self.opt.batch_size >= self.opt.max_dataset_size:
- break
- yield data
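The case-insensitive subclass lookup performed by `find_dataset_using_name` can be exercised in isolation; a hedged sketch with a plain dict standing in for the imported module's namespace (no `importlib` needed, and `find_in_namespace` is an illustrative name, not part of the deleted package):

```python
# Base class and a sample subclass, mirroring data/dummy_dataset.py
# with its DummyDataset as described in the package docstring.
class BaseDataset:
    pass

class DummyDataset(BaseDataset):
    pass

def find_in_namespace(namespace, dataset_name):
    # "dummy" -> "dummydataset"; match class names case-insensitively,
    # keeping only BaseDataset subclasses, as the real lookup does.
    target = dataset_name.replace('_', '') + 'dataset'
    found = None
    for name, cls in namespace.items():
        if (name.lower() == target.lower() and isinstance(cls, type)
                and issubclass(cls, BaseDataset)):
            found = cls
    if found is None:
        raise NotImplementedError(
            "there should be a BaseDataset subclass matching %s" % target)
    return found

print(find_in_namespace({'DummyDataset': DummyDataset}, 'dummy').__name__)
# DummyDataset
```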
diff --git a/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md b/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md
deleted file mode 100644
index cdad5e983a71b2f089312822ff2cad8d78b7b93a..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-7 Apr To X Copy your DVDs on DVD-R/RW or DVD-9, use this GUI or command-line application, also known as dvd xcopy.
-
-XCopy is the Free version of DVD/CD XCopy, a powerful and easy-to-use DVD burning software with ability to copy DVD and Blu-ray discs and. 19 Sep Use XCopy to copy a DVD disc with subtitles to a new blank disc and vice versa. I need it for my parents so that they can watch their. DVD X Copy is a powerful and easy-to-use DVD copying software that supports burning Blu-ray discs and copying DVDs and.The present invention relates to the field of lithium secondary batteries, and more particularly, to a nonaqueous electrolyte for lithium secondary batteries capable of improving battery safety and manufacturing method thereof.
-
-With the development of mobile electronic appliances, such as mobile phones, camcorders, and notebook computers, the demand for small, light-weight, and high-capacity secondary batteries used as power sources is rapidly increasing. Among secondary batteries developed so far, lithium secondary batteries are a great advantage since they can realize high energy density and high discharge voltage, when compared to other types of secondary batteries. Accordingly, lithium secondary batteries are widely used as power sources for various applications.
-
-A lithium secondary battery is prepared by injecting a nonaqueous electrolyte obtained by dissolving a lithium salt in a nonaqueous solvent into an electrode assembly, and then placing the electrode assembly in a battery case together with a lithium foil, a collection of lithiated carbon, or the like, which functions as an anode.
-
-Among lithium secondary batteries developed so far, a lithium-ion battery includes a cathode, an anode, and an electrolyte in which a nonaqueous solvent is dissolved. At this time, when the nonaqueous solvent in the electrolyte is decomposed during an overcharge or an overdischarge of the battery, the nonaqueous solvent is transformed to generate gas. As a result, the pressure of the electrolyte increases so that a battery safety problem may occur. For this reason, nonaqueous solvents having excellent thermal stability at high temperature are required. In addition, safety performance should be further improved and processability should be improved.
-
-Meanwhile, when a battery is charged or discharged, lithium ions are electrochemically transferred between the anode and the cathode via the electrolyte. At this time, the stability of the electrolyte is important. 4fefd39f24
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md
deleted file mode 100644
index c565a8c093e9ce2d6c365b4cd0e218d2eaee5e80..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Fast And Furious 8 (English) video songs hd 1080p blu-ray download movie
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI, usable from mainland China, compatible with most Microsoft Bing AI functionality, and deployable on your own server.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For issue reports, please go to https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_class.py b/spaces/ma-xu/LIVE/pybind11/tests/test_class.py
deleted file mode 100644
index 4214fe79d7fbab2b38a1f15ca39d41e7cd33a171..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/tests/test_class.py
+++ /dev/null
@@ -1,333 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-
-import env # noqa: F401
-
-from pybind11_tests import class_ as m
-from pybind11_tests import UserType, ConstructorStats
-
-
-def test_repr():
- # In Python 3.3+, repr() accesses __qualname__
- assert "pybind11_type" in repr(type(UserType))
- assert "UserType" in repr(UserType)
-
-
-def test_instance(msg):
- with pytest.raises(TypeError) as excinfo:
- m.NoConstructor()
- assert msg(excinfo.value) == "m.class_.NoConstructor: No constructor defined!"
-
- instance = m.NoConstructor.new_instance()
-
- cstats = ConstructorStats.get(m.NoConstructor)
- assert cstats.alive() == 1
- del instance
- assert cstats.alive() == 0
-
-
-def test_docstrings(doc):
- assert doc(UserType) == "A `py::class_` type for testing"
- assert UserType.__name__ == "UserType"
- assert UserType.__module__ == "pybind11_tests"
- assert UserType.get_value.__name__ == "get_value"
- assert UserType.get_value.__module__ == "pybind11_tests"
-
- assert doc(UserType.get_value) == """
- get_value(self: m.UserType) -> int
-
- Get value using a method
- """
- assert doc(UserType.value) == "Get/set value using a property"
-
- assert doc(m.NoConstructor.new_instance) == """
- new_instance() -> m.class_.NoConstructor
-
- Return an instance
- """
-
-
-def test_qualname(doc):
- """Tests that a properly qualified name is set in __qualname__ (even in pre-3.3, where we
- backport the attribute) and that generated docstrings properly use it and the module name"""
- assert m.NestBase.__qualname__ == "NestBase"
- assert m.NestBase.Nested.__qualname__ == "NestBase.Nested"
-
- assert doc(m.NestBase.__init__) == """
- __init__(self: m.class_.NestBase) -> None
- """
- assert doc(m.NestBase.g) == """
- g(self: m.class_.NestBase, arg0: m.class_.NestBase.Nested) -> None
- """
- assert doc(m.NestBase.Nested.__init__) == """
- __init__(self: m.class_.NestBase.Nested) -> None
- """
- assert doc(m.NestBase.Nested.fn) == """
- fn(self: m.class_.NestBase.Nested, arg0: int, arg1: m.class_.NestBase, arg2: m.class_.NestBase.Nested) -> None
- """ # noqa: E501 line too long
- assert doc(m.NestBase.Nested.fa) == """
- fa(self: m.class_.NestBase.Nested, a: int, b: m.class_.NestBase, c: m.class_.NestBase.Nested) -> None
- """ # noqa: E501 line too long
- assert m.NestBase.__module__ == "pybind11_tests.class_"
- assert m.NestBase.Nested.__module__ == "pybind11_tests.class_"
-
-
-def test_inheritance(msg):
- roger = m.Rabbit('Rabbit')
- assert roger.name() + " is a " + roger.species() == "Rabbit is a parrot"
- assert m.pet_name_species(roger) == "Rabbit is a parrot"
-
- polly = m.Pet('Polly', 'parrot')
- assert polly.name() + " is a " + polly.species() == "Polly is a parrot"
- assert m.pet_name_species(polly) == "Polly is a parrot"
-
- molly = m.Dog('Molly')
- assert molly.name() + " is a " + molly.species() == "Molly is a dog"
- assert m.pet_name_species(molly) == "Molly is a dog"
-
- fred = m.Hamster('Fred')
- assert fred.name() + " is a " + fred.species() == "Fred is a rodent"
-
- assert m.dog_bark(molly) == "Woof!"
-
- with pytest.raises(TypeError) as excinfo:
- m.dog_bark(polly)
- assert msg(excinfo.value) == """
- dog_bark(): incompatible function arguments. The following argument types are supported:
- 1. (arg0: m.class_.Dog) -> str
-
- Invoked with:
- """
-
- with pytest.raises(TypeError) as excinfo:
- m.Chimera("lion", "goat")
- assert "No constructor defined!" in str(excinfo.value)
-
-
-def test_inheritance_init(msg):
-
- # Single base
- class Python(m.Pet):
- def __init__(self):
- pass
- with pytest.raises(TypeError) as exc_info:
- Python()
- expected = ["m.class_.Pet.__init__() must be called when overriding __init__",
- "Pet.__init__() must be called when overriding __init__"] # PyPy?
- # TODO: fix PyPy error message wrt. tp_name/__qualname__?
- assert msg(exc_info.value) in expected
-
- # Multiple bases
- class RabbitHamster(m.Rabbit, m.Hamster):
- def __init__(self):
- m.Rabbit.__init__(self, "RabbitHamster")
-
- with pytest.raises(TypeError) as exc_info:
- RabbitHamster()
- expected = ["m.class_.Hamster.__init__() must be called when overriding __init__",
- "Hamster.__init__() must be called when overriding __init__"] # PyPy
- assert msg(exc_info.value) in expected
-
-
-def test_automatic_upcasting():
- assert type(m.return_class_1()).__name__ == "DerivedClass1"
- assert type(m.return_class_2()).__name__ == "DerivedClass2"
- assert type(m.return_none()).__name__ == "NoneType"
- # Repeat these a few times in a random order to ensure no invalid caching is applied
- assert type(m.return_class_n(1)).__name__ == "DerivedClass1"
- assert type(m.return_class_n(2)).__name__ == "DerivedClass2"
- assert type(m.return_class_n(0)).__name__ == "BaseClass"
- assert type(m.return_class_n(2)).__name__ == "DerivedClass2"
- assert type(m.return_class_n(2)).__name__ == "DerivedClass2"
- assert type(m.return_class_n(0)).__name__ == "BaseClass"
- assert type(m.return_class_n(1)).__name__ == "DerivedClass1"
-
-
-def test_isinstance():
- objects = [tuple(), dict(), m.Pet("Polly", "parrot")] + [m.Dog("Molly")] * 4
- expected = (True, True, True, True, True, False, False)
- assert m.check_instances(objects) == expected
-
-
-def test_mismatched_holder():
- import re
-
- with pytest.raises(RuntimeError) as excinfo:
- m.mismatched_holder_1()
- assert re.match('generic_type: type ".*MismatchDerived1" does not have a non-default '
- 'holder type while its base ".*MismatchBase1" does', str(excinfo.value))
-
- with pytest.raises(RuntimeError) as excinfo:
- m.mismatched_holder_2()
- assert re.match('generic_type: type ".*MismatchDerived2" has a non-default holder type '
- 'while its base ".*MismatchBase2" does not', str(excinfo.value))
-
-
-def test_override_static():
- """#511: problem with inheritance + overwritten def_static"""
- b = m.MyBase.make()
- d1 = m.MyDerived.make2()
- d2 = m.MyDerived.make()
-
- assert isinstance(b, m.MyBase)
- assert isinstance(d1, m.MyDerived)
- assert isinstance(d2, m.MyDerived)
-
-
-def test_implicit_conversion_life_support():
- """Ensure the lifetime of temporary objects created for implicit conversions"""
- assert m.implicitly_convert_argument(UserType(5)) == 5
- assert m.implicitly_convert_variable(UserType(5)) == 5
-
- assert "outside a bound function" in m.implicitly_convert_variable_fail(UserType(5))
-
-
-def test_operator_new_delete(capture):
- """Tests that class-specific operator new/delete functions are invoked"""
-
- class SubAliased(m.AliasedHasOpNewDelSize):
- pass
-
- with capture:
- a = m.HasOpNewDel()
- b = m.HasOpNewDelSize()
- d = m.HasOpNewDelBoth()
- assert capture == """
- A new 8
- B new 4
- D new 32
- """
- sz_alias = str(m.AliasedHasOpNewDelSize.size_alias)
- sz_noalias = str(m.AliasedHasOpNewDelSize.size_noalias)
- with capture:
- c = m.AliasedHasOpNewDelSize()
- c2 = SubAliased()
- assert capture == (
- "C new " + sz_noalias + "\n" +
- "C new " + sz_alias + "\n"
- )
-
- with capture:
- del a
- pytest.gc_collect()
- del b
- pytest.gc_collect()
- del d
- pytest.gc_collect()
- assert capture == """
- A delete
- B delete 4
- D delete
- """
-
- with capture:
- del c
- pytest.gc_collect()
- del c2
- pytest.gc_collect()
- assert capture == (
- "C delete " + sz_noalias + "\n" +
- "C delete " + sz_alias + "\n"
- )
-
-
-def test_bind_protected_functions():
- """Expose protected member functions to Python using a helper class"""
- a = m.ProtectedA()
- assert a.foo() == 42
-
- b = m.ProtectedB()
- assert b.foo() == 42
-
- class C(m.ProtectedB):
- def __init__(self):
- m.ProtectedB.__init__(self)
-
- def foo(self):
- return 0
-
- c = C()
- assert c.foo() == 0
-
-
-def test_brace_initialization():
- """ Tests that simple POD classes can be constructed using C++11 brace initialization """
- a = m.BraceInitialization(123, "test")
- assert a.field1 == 123
- assert a.field2 == "test"
-
- # Tests that a non-simple class doesn't get brace initialization (if the
- # class defines an initializer_list constructor, in particular, it would
- # win over the expected constructor).
- b = m.NoBraceInitialization([123, 456])
- assert b.vec == [123, 456]
-
-
-@pytest.mark.xfail("env.PYPY")
-def test_class_refcount():
- """Instances must correctly increase/decrease the reference count of their types (#1029)"""
- from sys import getrefcount
-
- class PyDog(m.Dog):
- pass
-
- for cls in m.Dog, PyDog:
- refcount_1 = getrefcount(cls)
- molly = [cls("Molly") for _ in range(10)]
- refcount_2 = getrefcount(cls)
-
- del molly
- pytest.gc_collect()
- refcount_3 = getrefcount(cls)
-
- assert refcount_1 == refcount_3
- assert refcount_2 > refcount_1
-
-
-def test_reentrant_implicit_conversion_failure(msg):
- # ensure that there is no runaway reentrant implicit conversion (#1035)
- with pytest.raises(TypeError) as excinfo:
- m.BogusImplicitConversion(0)
- assert msg(excinfo.value) == '''
- __init__(): incompatible constructor arguments. The following argument types are supported:
- 1. m.class_.BogusImplicitConversion(arg0: m.class_.BogusImplicitConversion)
-
- Invoked with: 0
- '''
-
-
-def test_error_after_conversions():
- with pytest.raises(TypeError) as exc_info:
- m.test_error_after_conversions("hello")
- assert str(exc_info.value).startswith(
- "Unable to convert function return value to a Python type!")
-
-
-def test_aligned():
- if hasattr(m, "Aligned"):
- p = m.Aligned().ptr()
- assert p % 1024 == 0
-
-
-# https://foss.heptapod.net/pypy/pypy/-/issues/2742
-@pytest.mark.xfail("env.PYPY")
-def test_final():
- with pytest.raises(TypeError) as exc_info:
- class PyFinalChild(m.IsFinal):
- pass
- assert str(exc_info.value).endswith("is not an acceptable base type")
-
-
-# https://foss.heptapod.net/pypy/pypy/-/issues/2742
-@pytest.mark.xfail("env.PYPY")
-def test_non_final_final():
- with pytest.raises(TypeError) as exc_info:
- class PyNonFinalFinalChild(m.IsNonFinalFinal):
- pass
- assert str(exc_info.value).endswith("is not an acceptable base type")
-
-
-# https://github.com/pybind/pybind11/issues/1878
-def test_exception_rvalue_abort():
- with pytest.raises(RuntimeError):
- m.PyPrintDestructor().throw_something()
diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md b/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md
deleted file mode 100644
index 8c05ac274c68ae42b31d93dfcc7e06ddf8e28de9..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md
+++ /dev/null
@@ -1,848 +0,0 @@
-# CUB 1.9.10-1 (NVIDIA HPC SDK 20.7, CUDA Toolkit 11.1)
-
-## Summary
-
-CUB 1.9.10-1 is the minor release accompanying the NVIDIA HPC SDK 20.7 release
- and the CUDA Toolkit 11.1 release.
-
-## Bug Fixes
-
-- #1217: Move static local in `cub::DeviceCount` to a separate host-only
- function because NVC++ doesn't support static locals in host-device
- functions.
-
-# CUB 1.9.10 (NVIDIA HPC SDK 20.5)
-
-## Summary
-
-CUB 1.9.10 is the release accompanying the NVIDIA HPC SDK 20.5 release.
-It adds CMake `find_package` support.
-C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
-Starting with the upcoming 1.10.0 release, C++03 support will be dropped
- entirely.
-
-## Breaking Changes
-
-- Thrust now checks that it is compatible with the version of CUB found
- in your include path, generating an error if it is not.
- If you are using your own version of CUB, it may be too old.
- It is recommended to simply delete your own version of CUB and use the
- version of CUB that comes with Thrust.
-- C++03 and C++11 are deprecated.
- Using these dialects will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
- deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11
- deprecation warnings).
- Suppression is only a short term solution.
- We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
- near future.
-- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
- Using these compilers will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_COMPILER`.
- Suppression is only a short term solution.
- We will be dropping support for these compilers in the near future.
-
-## New Features
-
-- CMake `find_package` support.
- Just point CMake at the `cmake` folder in your CUB include directory
- (ex: `cmake -DCUB_DIR=/usr/local/cuda/include/cub/cmake/ .`) and then you
- can add CUB to your CMake project with `find_package(CUB REQUIRED CONFIG)`.
-
-# CUB 1.9.9 (CUDA 11.0)
-
-## Summary
-
-CUB 1.9.9 is the release accompanying the CUDA Toolkit 11.0 release.
-It introduces CMake support, version macros, platform detection machinery,
- and support for NVC++, which uses Thrust (and thus CUB) to implement
- GPU-accelerated C++17 Parallel Algorithms.
-Additionally, the scan dispatch layer was refactored and modernized.
-C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
-Starting with the upcoming 1.10.0 release, C++03 support will be dropped
- entirely.
-
-## Breaking Changes
-
-- Thrust now checks that it is compatible with the version of CUB found
- in your include path, generating an error if it is not.
- If you are using your own version of CUB, it may be too old.
- It is recommended to simply delete your own version of CUB and use the
- version of CUB that comes with Thrust.
-- C++03 and C++11 are deprecated.
- Using these dialects will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
-  deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11
- deprecation warnings).
- Suppression is only a short term solution.
- We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
- near future.
-- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
- Using these compilers will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_COMPILER`.
- Suppression is only a short term solution.
- We will be dropping support for these compilers in the near future.
-
-## New Features
-
-- CMake support.
- Thanks to Francis Lemaire for this contribution.
-- Refactorized and modernized scan dispatch layer.
- Thanks to Francis Lemaire for this contribution.
-- Policy hooks for device-wide reduce, scan, and radix sort facilities
- to simplify tuning and allow users to provide custom policies.
- Thanks to Francis Lemaire for this contribution.
-- `<cub/version.cuh>`: `CUB_VERSION`, `CUB_VERSION_MAJOR`, `CUB_VERSION_MINOR`,
-  `CUB_VERSION_SUBMINOR`, and `CUB_PATCH_NUMBER`.
-- Platform detection machinery:
-  - `<cub/util_cpp_dialect.cuh>`: Detects the C++ standard dialect.
-  - `<cub/util_compiler.cuh>`: host and device compiler detection.
-  - `<cub/util_deprecated.cuh>`: `CUB_DEPRECATED`.
-  - `<cub/config.cuh>`: Includes `<cub/util_arch.cuh>`,
-    `<cub/util_compiler.cuh>`, `<cub/util_cpp_dialect.cuh>`,
-    `<cub/util_deprecated.cuh>`, `<cub/util_macro.cuh>`,
-    `<cub/util_namespace.cuh>`
-- `cub::DeviceCount` and `cub::DeviceCountUncached`, caching abstractions for
- `cudaGetDeviceCount`.
-
-## Other Enhancements
-
-- Lazily initialize the per-device CUDA attribute caches, because CUDA context
- creation is expensive and adds up with large CUDA binaries on machines with
- many GPUs.
- Thanks to the NVIDIA PyTorch team for bringing this to our attention.
-- Make `cub::SwitchDevice` avoid setting/resetting the device if the current
- device is the same as the target device.
-
-## Bug Fixes
-
-- Add explicit failure parameter to CAS in the CUB attribute cache to workaround
- a GCC 4.8 bug.
-- Revert a change in reductions that changed the signedness of the `lane_id`
- variable to suppress a warning, as this introduces a bug in optimized device
- code.
-- Fix initialization in `cub::ExclusiveSum`.
- Thanks to Conor Hoekstra for this contribution.
-- Fix initialization of the `std::array` in the CUB attribute cache.
-- Fix `-Wsign-compare` warnings.
- Thanks to Elias Stehle for this contribution.
-- Fix `test_block_reduce.cu` to build without parameters.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `grid_even_share.cuh`.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `thread_search.cuh`.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `cub.cuh`.
- Thanks to Felix Kallenborn for this contribution.
-
-# CUB 1.9.8-1 (NVIDIA HPC SDK 20.3)
-
-## Summary
-
-CUB 1.9.8-1 is a variant of 1.9.8 accompanying the NVIDIA HPC SDK 20.3 release.
-It contains modifications necessary to serve as the implementation of NVC++'s
- GPU-accelerated C++17 Parallel Algorithms.
-
-# CUB 1.9.8 (CUDA 11.0 Early Access)
-
-## Summary
-
-CUB 1.9.8 is the first release of CUB to be officially supported and included
- in the CUDA Toolkit.
-When compiling CUB in C++11 mode, CUB now caches calls to CUDA attribute query
- APIs, which improves performance of these queries by 20x to 50x when they
- are called concurrently by multiple host threads.
-
-## Enhancements
-
-- (C++11 or later) Cache calls to `cudaFuncGetAttributes` and
- `cudaDeviceGetAttribute` within `cub::PtxVersion` and `cub::SmVersion`.
- These CUDA APIs acquire locks to CUDA driver/runtime mutex and perform
- poorly under contention; with the caching, they are 20 to 50x faster when
- called concurrently.
- Thanks to Bilge Acun for bringing this issue to our attention.
-- `DispatchReduce` now takes an `OutputT` template parameter so that users can
- specify the intermediate type explicitly.
-- Radix sort tuning policies updates to fix performance issues for element
- types smaller than 4 bytes.
-
-## Bug Fixes
-
-- Change initialization style from copy initialization to direct initialization
- (which is more permissive) in `AgentReduce` to allow a wider range of types
- to be used with it.
-- Fix bad signed/unsigned comparisons in `WarpReduce`.
-- Fix computation of valid lanes in warp-level reduction primitive to correctly
- handle the case where there are 0 input items per warp.
-
-# CUB 1.8.0
-
-## Summary
-
-CUB 1.8.0 introduces changes to the `cub::Shuffle*` interfaces.
-
-## Breaking Changes
-
-- The interfaces of `cub::ShuffleIndex`, `cub::ShuffleUp`, and
- `cub::ShuffleDown` have been changed to allow for better computation of the
- PTX SHFL control constant for logical warps smaller than 32 threads.
-
-## Bug Fixes
-
-- #112: Fix `cub::WarpScan`'s broadcast of warp-wide aggregate for logical
- warps smaller than 32 threads.
-
-# CUB 1.7.5
-
-## Summary
-
-CUB 1.7.5 adds support for radix sorting `__half` keys and improved sorting
- performance for 1 byte keys.
-It was incorporated into Thrust 1.9.2.
-
-## Enhancements
-
-- Radix sort support for `__half` keys.
-- Radix sort tuning policy updates to improve 1 byte key performance.
-
-## Bug Fixes
-
-- Syntax tweaks to mollify Clang.
-- #127: `cub::DeviceRunLengthEncode::Encode` returns incorrect results.
-- #128: 7-bit sorting passes fail for SM61 with large values.
-
-# CUB 1.7.4
-
-## Summary
-
-CUB 1.7.4 is a minor release that was incorporated into Thrust 1.9.1-2.
-
-## Bug Fixes
-
-- #114: Can't pair non-trivially-constructible values in radix sort.
-- #115: `cub::WarpReduce` segmented reduction is broken in CUDA 9 for logical
- warp sizes smaller than 32.
-
-# CUB 1.7.3
-
-## Summary
-
-CUB 1.7.3 is a minor release.
-
-## Bug Fixes
-
-- #110: `cub::DeviceHistogram` null-pointer exception bug for iterator inputs.
-
-# CUB 1.7.2
-
-## Summary
-
-CUB 1.7.2 is a minor release.
-
-## Bug Fixes
-
-- #104: Device-wide reduction is now "run-to-run" deterministic for
- pseudo-associative reduction operators (like floating point addition).
-
-# CUB 1.7.1
-
-## Summary
-
-CUB 1.7.1 delivers improved radix sort performance on SM7x (Volta) GPUs and a
- number of bug fixes.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM7x (Volta).
-
-## Bug Fixes
-
-- #104: `uint64_t` `cub::WarpReduce` broken for CUB 1.7.0 on CUDA 8 and older.
-- #103: Can't mix Thrust from CUDA 9.0 and CUB.
-- #102: CUB pulls in `windows.h` which defines `min`/`max` macros that conflict
- with `std::min`/`std::max`.
-- #99: Radix sorting crashes NVCC on Windows 10 for SM52.
-- #98: cuda-memcheck: --tool initcheck failed with lineOfSight.
-- #94: Git clone size.
-- #93: Accept iterators for segment offsets.
-- #87: CUB uses anonymous unions which is not valid C++.
-- #44: Check for C++11 is incorrect for Visual Studio 2013.
-
-# CUB 1.7.0
-
-## Summary
-
-CUB 1.7.0 brings support for CUDA 9.0 and SM7x (Volta) GPUs.
-It is compatible with independent thread scheduling.
-It was incorporated into Thrust 1.9.0-5.
-
-## Breaking Changes
-
-- Remove `cub::WarpAll` and `cub::WarpAny`.
- These functions served to emulate `__all` and `__any` functionality for
- SM1x devices, which did not have those operations.
- However, SM1x devices are now deprecated in CUDA, and the interfaces of these
- two functions are now lacking the lane-mask needed for collectives to run on
- SM7x and newer GPUs which have independent thread scheduling.
-
-## Other Enhancements
-
-- Remove any assumptions of implicit warp synchronization to be compatible with
- SM7x's (Volta) independent thread scheduling.
-
-## Bug Fixes
-
-- #86: Incorrect results with reduce-by-key.
-
-# CUB 1.6.4
-
-## Summary
-
-CUB 1.6.4 improves radix sorting performance for SM5x (Maxwell) and SM6x
- (Pascal) GPUs.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM5x (Maxwell) and SM6x (Pascal) -
-  3.5B and 3.4B 32-bit keys/s on TitanX and GTX 1080, respectively.
-
-## Bug Fixes
-
-- Restore fence work-around for scan (reduce-by-key, etc.) hangs in CUDA 8.5.
-- #65: `cub::DeviceSegmentedRadixSort` should allow inputs to have
- pointer-to-const type.
-- Mollify Clang device-side warnings.
-- Remove out-dated MSVC project files.
-
-# CUB 1.6.3
-
-## Summary
-
-CUB 1.6.3 improves support for Windows, changes
- `cub::BlockLoad`/`cub::BlockStore` interface to take the local data type,
- and enhances radix sort performance for SM6x (Pascal) GPUs.
-
-## Breaking Changes
-
-- `cub::BlockLoad` and `cub::BlockStore` are now templated by the local data
- type, instead of the `Iterator` type.
- This allows for output iterators having `void` as their `value_type` (e.g.
- discard iterators).
-
-## Other Enhancements
-
-- Radix sort tuning policies updated for SM6x (Pascal) GPUs - 6.2B 4 byte
- keys/s on GP100.
-- Improved support for Windows (warnings, alignment, etc).
-
-## Bug Fixes
-
-- #74: `cub::WarpReduce` executes reduction operator for out-of-bounds items.
-- #72: `cub::InequalityWrapper::operator` should be non-const.
-- #71: `cub::KeyValuePair` won't work if `Key` has non-trivial constructor.
-- #69: `cub::BlockStore::Store` doesn't compile if `OutputIteratorT::value_type`
-  isn't `T`.
-- #68: `cub::TilePrefixCallbackOp::WarpReduce` doesn't permit PTX arch
- specialization.
-
-# CUB 1.6.2 (previously 1.5.5)
-
-## Summary
-
-CUB 1.6.2 (previously 1.5.5) improves radix sort performance for SM6x (Pascal)
- GPUs.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM6x (Pascal) GPUs.
-
-## Bug Fixes
-
-- Fix AArch64 compilation of `cub::CachingDeviceAllocator`.
-
-# CUB 1.6.1 (previously 1.5.4)
-
-## Summary
-
-CUB 1.6.1 (previously 1.5.4) is a minor release.
-
-## Bug Fixes
-
-- Fix radix sorting bug introduced by scan refactorization.
-
-# CUB 1.6.0 (previously 1.5.3)
-
-## Summary
-
-CUB 1.6.0 changes the scan and reduce interfaces.
-Exclusive scans now accept an "initial value" instead of an "identity value".
-Scans and reductions now support differing input and output sequence types.
-Additionally, many bugs have been fixed.
-
-## Breaking Changes
-
-- Device/block/warp-wide exclusive scans have been revised to now accept an
- "initial value" (instead of an "identity value") for seeding the computation
- with an arbitrary prefix.
-- Device-wide reductions and scans can now have input sequence types that are
- different from output sequence types (as long as they are convertible).
-
-## Other Enhancements
-
-- Reduce repository size by moving the doxygen binary to doc repository.
-- Minor reduction in `cub::BlockScan` instruction counts.
-
-## Bug Fixes
-
-- Issue #55: Warning in `cub/device/dispatch/dispatch_reduce_by_key.cuh`.
-- Issue #59: `cub::DeviceScan::ExclusiveSum` can't prefix sum of float into
- double.
-- Issue #58: Infinite loop in `cub::CachingDeviceAllocator::NearestPowerOf`.
-- Issue #47: `cub::CachingDeviceAllocator` needs to clean up CUDA global error
- state upon successful retry.
-- Issue #46: Very high amount of needed memory from the
- `cub::DeviceHistogram::HistogramEven`.
-- Issue #45: `cub::CachingDeviceAllocator` fails with debug output enabled
-
-# CUB 1.5.2
-
-## Summary
-
-CUB 1.5.2 enhances `cub::CachingDeviceAllocator` and improves scan performance
- for SM5x (Maxwell).
-
-## Enhancements
-
-- Improved medium-size scan performance on SM5x (Maxwell).
-- Refactored `cub::CachingDeviceAllocator`:
- - Now spends less time locked.
- - Uses C++11's `std::mutex` when available.
- - Failure to allocate a block from the runtime will retry once after
- freeing cached allocations.
- - Now respects max-bin, fixing an issue where blocks in excess of max-bin
- were still being retained in the free cache.
-
-## Bug Fixes
-
-- Fix for generic-type reduce-by-key `cub::WarpScan` for SM3x and newer GPUs.
-
-# CUB 1.5.1
-
-## Summary
-
-CUB 1.5.1 is a minor release.
-
-## Bug Fixes
-
-- Fix for incorrect `cub::DeviceRadixSort` output for some small problems on
-  SM52 (Maxwell) GPUs.
-- Fix for macro redefinition warnings when compiling `thrust::sort`.
-
-# CUB 1.5.0
-
-CUB 1.5.0 introduces segmented sort and reduction primitives.
-
-## New Features
-
-- Segmented device-wide operations for device-wide sort and reduction primitives.
-
-## Bug Fixes
-
-- #36: `cub::ThreadLoad` generates compiler errors when loading from
- pointer-to-const.
-- #29: `cub::DeviceRadixSort::SortKeys` yields compiler errors.
-- #26: Misaligned address after `cub::DeviceRadixSort::SortKeys`.
-- #25: Fix for incorrect results and crashes when radix sorting 0-length
- problems.
-- Fix CUDA 7.5 issues on SM52 GPUs with SHFL-based warp-scan and
- warp-reduction on non-primitive data types (e.g. user-defined structs).
-- Fix small radix sorting problems where 0 temporary bytes were required and
-  user code was invoking `malloc(0)` on some systems where that returns
- `NULL`.
- CUB assumed the user was asking for the size again and not running the sort.
-
-# CUB 1.4.1
-
-## Summary
-
-CUB 1.4.1 is a minor release.
-
-## Enhancements
-
-- Allow `cub::DeviceRadixSort` and `cub::BlockRadixSort` on bool types.
-
-## Bug Fixes
-
-- Fix minor CUDA 7.0 performance regressions in `cub::DeviceScan` and
- `cub::DeviceReduceByKey`.
-- Remove requirement for callers to define the `CUB_CDP` macro
-  when invoking CUB device-wide routines using CUDA dynamic parallelism.
-- Fix headers not being included in the proper order (or missing includes)
- for some block-wide functions.
-
-# CUB 1.4.0
-
-## Summary
-
-CUB 1.4.0 adds `cub::DeviceSpmv`, `cub::DeviceRunLength::NonTrivialRuns`,
- improves `cub::DeviceHistogram`, and introduces support for SM5x (Maxwell)
- GPUs.
-
-## New Features
-
-- `cub::DeviceSpmv` methods for multiplying sparse matrices by
- dense vectors, load-balanced using a merge-based parallel decomposition.
-- `cub::DeviceRadixSort` sorting entry-points that always return
- the sorted output into the specified buffer, as opposed to the
- `cub::DoubleBuffer` in which it could end up in either buffer.
-- `cub::DeviceRunLengthEncode::NonTrivialRuns` for finding the starting
- offsets and lengths of all non-trivial runs (i.e., length > 1) of keys in
- a given sequence.
- Useful for top-down partitioning algorithms like MSD sorting of very-large
- keys.
-
-## Other Enhancements
-
-- Support and performance tuning for SM5x (Maxwell) GPUs.
-- Updated `cub::DeviceHistogram` implementation that provides the same
- "histogram-even" and "histogram-range" functionality as IPP/NPP.
- Provides extremely fast and, perhaps more importantly, very uniform
- performance response across diverse real-world datasets, including
- pathological (homogeneous) sample distributions.
-
-# CUB 1.3.2
-
-## Summary
-
-CUB 1.3.2 is a minor release.
-
-## Bug Fixes
-
-- Fix `cub::DeviceReduce` where reductions of small problems (small enough to
- only dispatch a single thread block) would run in the default stream (stream
- zero) regardless of whether an alternate stream was specified.
-
-# CUB 1.3.1
-
-## Summary
-
-CUB 1.3.1 is a minor release.
-
-## Bug Fixes
-
-- Workaround for a benign WAW race warning reported by cuda-memcheck
- in `cub::BlockScan` specialized for `BLOCK_SCAN_WARP_SCANS` algorithm.
-- Fix bug in `cub::DeviceRadixSort` where the algorithm may sort more
- key bits than the caller specified (up to the nearest radix digit).
-- Fix for ~3% `cub::DeviceRadixSort` performance regression on SM2x (Fermi) and
- SM3x (Kepler) GPUs.
-
-# CUB 1.3.0
-
-## Summary
-
-CUB 1.3.0 improves how thread blocks are expressed in block- and warp-wide
- primitives and adds an enhanced version of `cub::WarpScan`.
-
-## Breaking Changes
-
-- CUB's collective (block-wide, warp-wide) primitives underwent a minor
- interface refactoring:
- - To provide the appropriate support for multidimensional thread blocks,
- The interfaces for collective classes are now template-parameterized by
- X, Y, and Z block dimensions (with `BLOCK_DIM_Y` and `BLOCK_DIM_Z` being
- optional, and `BLOCK_DIM_X` replacing `BLOCK_THREADS`).
- Furthermore, the constructors that accept remapped linear
- thread-identifiers have been removed: all primitives now assume a
- row-major thread-ranking for multidimensional thread blocks.
- - To allow the host program (compiled by the host-pass) to accurately
- determine the device-specific storage requirements for a given collective
- (compiled for each device-pass), the interfaces for collective classes
- are now (optionally) template-parameterized by the desired PTX compute
- capability.
- This is useful when aliasing collective storage to shared memory that has
- been allocated dynamically by the host at the kernel call site.
- - Most CUB programs having typical 1D usage should not require any
-    changes to accommodate these updates.
-
-## New Features
-
-- Added "combination" `cub::WarpScan` methods for efficiently computing
- both inclusive and exclusive prefix scans (and sums).
-
-## Bug Fixes
-
-- Fix for bug in `cub::WarpScan` (which affected `cub::BlockScan` and
- `cub::DeviceScan`) where incorrect results (e.g., NAN) would often be
- returned when parameterized for floating-point types (fp32, fp64).
-- Workaround for ptxas error when compiling with the -G flag on Linux (for
- debug instrumentation).
-- Fixes for certain scan scenarios using custom scan operators where code
- compiled for SM1x is run on newer GPUs of higher compute-capability: the
-  compiler could not tell which memory space was being used by collective
-  operations and was mistakenly using global ops instead of shared ops.
-
-# CUB 1.2.3
-
-## Summary
-
-CUB 1.2.3 is a minor release.
-
-## Bug Fixes
-
-- Fixed access violation bug in `cub::DeviceReduce::ReduceByKey` for
- non-primitive value types.
-- Fixed code-snippet bug in `ArgIndexInputIteratorT` documentation.
-
-# CUB 1.2.2
-
-## Summary
-
-CUB 1.2.2 adds a new variant of `cub::BlockReduce` and MSVC project solutions
- for examples.
-
-## New Features
-
-- MSVC project solutions for device-wide and block-wide examples
-- New algorithmic variant of `cub::BlockReduce` for improved performance
- when using commutative operators (e.g., numeric addition).
-
-## Bug Fixes
-
-- Inclusion of Thrust headers in a certain order prevented CUB device-wide
- primitives from working properly.
-
-# CUB 1.2.0
-
-## Summary
-
-CUB 1.2.0 adds `cub::DeviceReduce::ReduceByKey` and
- `cub::DeviceReduce::RunLengthEncode` and support for CUDA 6.0.
-
-## New Features
-
-- `cub::DeviceReduce::ReduceByKey`.
-- `cub::DeviceReduce::RunLengthEncode`.
-
-## Other Enhancements
-
-- Improved `cub::DeviceScan`, `cub::DeviceSelect`, `cub::DevicePartition`
- performance.
-- Documentation and testing:
- - Added performance-portability plots for many device-wide primitives.
-  - Explained iterator (in)compatibilities with CUDA 5.0 (and older) and
-    Thrust 1.6 (and older).
-- Revised the operation of temporary tile status bookkeeping for
- `cub::DeviceScan` (and similar) to be safe for current code run on future
- platforms (now uses proper fences).
-
-## Bug Fixes
-
-- Fix `cub::DeviceScan` bug where Windows alignment disagreements between host
- and device regarding user-defined data types would corrupt tile status.
-- Fix `cub::BlockScan` bug where certain exclusive scans on custom data types
- for the `BLOCK_SCAN_WARP_SCANS` variant would return incorrect results for
- the first thread in the block.
-- Added workaround to make `cub::TexRefInputIteratorT` work with CUDA 6.0.
-
-# CUB 1.1.1
-
-## Summary
-
-CUB 1.1.1 introduces texture and cache modifier iterators, descending sorting,
- `cub::DeviceSelect`, `cub::DevicePartition`, `cub::Shuffle*`, and
- `cub::MaxSMOccupancy`.
-Additionally, scan and sort performance for older GPUs has been improved and
- many bugs have been fixed.
-
-## Breaking Changes
-
-- Refactored block-wide I/O (`cub::BlockLoad` and `cub::BlockStore`), removing
- cache-modifiers from their interfaces.
- `cub::CacheModifiedInputIterator` and `cub::CacheModifiedOutputIterator`
- should now be used with `cub::BlockLoad` and `cub::BlockStore` to effect that
- behavior.
-
-## New Features
-
-- `cub::TexObjInputIterator`, `cub::TexRefInputIterator`,
- `cub::CacheModifiedInputIterator`, and `cub::CacheModifiedOutputIterator`
- types for loading & storing arbitrary types through the cache hierarchy.
- They are compatible with Thrust.
-- Descending sorting for `cub::DeviceRadixSort` and `cub::BlockRadixSort`.
-- Min, max, arg-min, and arg-max operators for `cub::DeviceReduce`.
-- `cub::DeviceSelect` (select-unique, select-if, and select-flagged).
-- `cub::DevicePartition` (partition-if, partition-flagged).
-- Generic `cub::ShuffleUp`, `cub::ShuffleDown`, and `cub::ShuffleIndex` for
- warp-wide communication of arbitrary data types (SM3x and up).
-- `cub::MaxSmOccupancy` for accurately determining SM occupancy for any given
- kernel function pointer.
-
-## Other Enhancements
-
-- Improved `cub::DeviceScan` and `cub::DeviceRadixSort` performance for older
- GPUs (SM1x to SM3x).
-- Renamed device-wide `stream_synchronous` param to `debug_synchronous` to
- avoid confusion about usage.
-- Documentation improvements:
- - Added simple examples of device-wide methods.
- - Improved doxygen documentation and example snippets.
-- Improved test coverage to include up to 21,000 kernel variants and 851,000
- unit tests (per architecture, per platform).
-
-## Bug Fixes
-
-- Fix misc `cub::DeviceScan`, `cub::BlockScan`, `cub::DeviceReduce`, and
-  `cub::BlockReduce` bugs when operating on non-primitive types on older
-  SM1x architectures.
-- SHFL-based scans and reductions produced incorrect results for multi-word
- types (size > 4B) on Linux.
-- For `cub::WarpScan`-based scans, not all threads in the first warp were
- entering the prefix callback functor.
-- `cub::DeviceRadixSort` had a race condition with key-value pairs for pre-SM35
- architectures.
-- `cub::DeviceRadixSort` bitfield-extract behavior with long keys on 64-bit
- Linux was incorrect.
-- `cub::BlockDiscontinuity` failed to compile for types other than
- `int32_t`/`uint32_t`.
-- CUDA Dynamic Parallelism (CDP, e.g. device-callable) versions of device-wide
- methods now report the same temporary storage allocation size requirement as
- their host-callable counterparts.
-
-# CUB 1.0.2
-
-## Summary
-
-CUB 1.0.2 is a minor release.
-
-## Bug Fixes
-
-- Corrections to code snippet examples for `cub::BlockLoad`, `cub::BlockStore`,
- and `cub::BlockDiscontinuity`.
-- Cleaned up unnecessary/missing header includes.
- You can now safely include a specific .cuh (instead of `cub.cuh`).
-- Bug/compilation fixes for `cub::BlockHistogram`.
-
-# CUB 1.0.1
-
-## Summary
-
-CUB 1.0.1 adds `cub::DeviceRadixSort` and `cub::DeviceScan`.
-Numerous other performance and correctness fixes are included.
-
-## Breaking Changes
-
-- New collective interface idiom (specialize/construct/invoke).
-
-## New Features
-
-- `cub::DeviceRadixSort`.
-  Implements short-circuiting for homogeneous digit passes.
-- `cub::DeviceScan`.
- Implements single-pass "adaptive-lookback" strategy.
-
-## Other Enhancements
-
-- Significantly improved documentation (with example code snippets).
-- More extensive regression test suite for aggressively testing collective
- variants.
-- Allow non-trivially-constructed types (previously unions had prevented aliasing
- temporary storage of those types).
-- Improved support for SM3x SHFL (collective ops now use SHFL for types larger
- than 32 bits).
-- Better code generation for 64-bit addressing within
- `cub::BlockLoad`/`cub::BlockStore`.
-- `cub::DeviceHistogram` now supports histograms of arbitrary bins.
-- Updates to accommodate CUDA 5.5 dynamic parallelism.
-
-## Bug Fixes
-
-- Workarounds for SM10 codegen issues in uncommonly-used
- `cub::WarpScan`/`cub::WarpReduce` specializations.
-
-# CUB 0.9.4
-
-## Summary
-
-CUB 0.9.4 is a minor release.
-
-## Enhancements
-
-- Various documentation updates and corrections.
-
-## Bug Fixes
-
-- Fixed compilation errors for SM1x.
-- Fixed compilation errors for some WarpScan entrypoints on SM3x and up.
-
-# CUB 0.9.3
-
-## Summary
-
-CUB 0.9.3 adds histogram algorithms and work management utility descriptors.
-
-## New Features
-
-- `cub::DeviceHistogram256`.
-- `cub::BlockHistogram256`.
-- `cub::BlockScan` algorithm variant `BLOCK_SCAN_RAKING_MEMOIZE`, which
- trades more register consumption for less shared memory I/O.
-- `cub::GridQueue` and `cub::GridEvenShare` work management utility descriptors.
-
-## Other Enhancements
-
-- Updates to `cub::BlockRadixRank` to use `cub::BlockScan`, which improves
- performance on SM3x by using SHFL.
-- Allow types other than builtin types to be used in `cub::WarpScan::*Sum`
- methods if they only have `operator+` overloaded.
-  Previously they were also required to support assignment from `int(0)`.
-- Update `cub::BlockReduce`'s `BLOCK_REDUCE_WARP_REDUCTIONS` algorithm to work
- even when block size is not an even multiple of warp size.
-- Refactoring of `cub::DeviceAllocator` interface and
- `cub::CachingDeviceAllocator` implementation.
-
-# CUB 0.9.2
-
-## Summary
-
-CUB 0.9.2 adds `cub::WarpReduce`.
-
-## New Features
-
-- `cub::WarpReduce`, which uses the SHFL instruction when applicable.
- `cub::BlockReduce` now uses this `cub::WarpReduce` instead of implementing
- its own.
-
-## Enhancements
-
-- Documentation updates and corrections.
-
-## Bug Fixes
-
-- Fixes for 64-bit Linux compilation warnings and errors.
-
-# CUB 0.9.1
-
-## Summary
-
-CUB 0.9.1 is a minor release.
-
-## Bug Fixes
-
-- Fix for ambiguity in `cub::BlockReduce::Reduce` between generic reduction and
- summation.
- Summation entrypoints are now called `::Sum()`, similar to the
- convention in `cub::BlockScan`.
-- Small edits to documentation and download tracking.
-
-# CUB 0.9.0
-
-## Summary
-
-Initial preview release.
-CUB is the first durable, high-performance library of cooperative block-level,
- warp-level, and thread-level primitives for CUDA kernel programming.
-
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h b/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h
deleted file mode 100644
index a51eaed4351e52aaf3569c986cc5153640dd15d6..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h
+++ /dev/null
@@ -1,2963 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file set_operations.h
- * \brief Set theoretic operations for sorted ranges
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup set_operations Set Operations
- * \ingroup algorithms
- * \{
- */
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in ascending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result is now {0, 4, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(A1, A1 + 7, A2, A2 + 5, result);
- * // result is now {0, 4, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
- OutputIterator set_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- *         \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- *         \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in descending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result is now {6, 4, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- *         \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- *         \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result is now {6, 4, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
- OutputIterator set_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning that both elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares objects using
- * \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute the
- * set intersection of two sets of integers sorted in ascending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[6] = {1, 3, 5, 7, 9, 11};
- * int A2[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int result[7];
- *
- * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result);
- * // result is now {1, 3, 5}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_intersection(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning that both elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares objects using
- * \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute the
- * set intersection of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[6] = {1, 3, 5, 7, 9, 11};
- * int A2[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int result[7];
- *
- * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result);
- * // result is now {1, 3, 5}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
- OutputIterator set_intersection(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning that both elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute
- * the set intersection of sets of integers sorted in descending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[6] = {11, 9, 7, 5, 3, 1};
- * int A2[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result, thrust::greater<int>());
- * // result is now {5, 3, 1}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_intersection(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning that both elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute
- * the set intersection of sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[6] = {11, 9, 7, 5, 3, 1};
- * int A2[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result, thrust::greater<int>());
- * // result is now {5, 3, 1}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_difference
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
- OutputIterator set_intersection(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
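As a host-side cross-check of the descending-order example above, the C++ standard library's `std::set_intersection` has the same min(m,n) duplicate semantics and the same stability guarantee (matched elements are copied from the first range). The sketch below uses only the STL, not Thrust, and the helper name is ours:

```cpp
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// Intersect two ranges sorted in descending order. An element equivalent to
// m elements of the first range and n elements of the second appears
// min(m, n) times in the output, copied from the first range.
std::vector<int> intersect_descending(const std::vector<int>& a,
                                      const std::vector<int>& b) {
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(out), std::greater<int>());
    return out;
}
```

With the arrays from the snippet above, the result is {5, 3, 1}: the value 1 occurs once in the first range and twice in the second, so min(1, 2) = 1 copy is emitted.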
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in ascending order using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A2[5] = {1, 1, 2, 5, 8};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 4, 5, 6, 7, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
- OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A2[5] = {1, 1, 2, 5, 8};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 4, 5, 6, 7, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
- OutputIterator set_symmetric_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
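The |m - n| duplicate rule described above is also implemented by the host STL's `std::set_symmetric_difference`, so it can be exercised without Thrust. A minimal sketch (the helper name is ours):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Symmetric difference of two ascending ranges. An element equivalent to
// m elements of the first range and n elements of the second appears
// |m - n| times in the output.
std::vector<int> sym_difference(const std::vector<int>& a,
                                const std::vector<int>& b) {
    std::vector<int> out;
    std::set_symmetric_difference(a.begin(), a.end(), b.begin(), b.end(),
                                  std::back_inserter(out));
    return out;
}
```

For example, `sym_difference({1, 1, 1, 3}, {1, 2})` yields {1, 1, 2, 3}: the value 1 occurs three times in the first range and once in the second, so |3 - 1| = 2 copies survive.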
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
- OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
- OutputIterator set_symmetric_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
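The comparator overload documented above behaves like the host STL's `std::set_symmetric_difference` with a comparison object; a standalone sketch for descending input, with no Thrust dependency (helper name ours):

```cpp
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// Symmetric difference of two ranges sorted in descending order, using
// std::greater<int> as the strict weak ordering, mirroring the comp
// parameter of the overload above.
std::vector<int> sym_difference_descending(const std::vector<int>& a,
                                           const std::vector<int>& b) {
    std::vector<int> out;
    std::set_symmetric_difference(a.begin(), a.end(), b.begin(), b.end(),
                                  std::back_inserter(out), std::greater<int>());
    return out;
}
```

Both input ranges must already be sorted by the same comparator, and the output comes back in that order as well.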
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
- OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
- OutputIterator set_union(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
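The duplicate handling described above (all m copies from the first range, then max(n - m, 0) from the second) matches the host STL's `std::set_union`, so it can be checked without Thrust; a small sketch (helper name ours):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Union of two ascending ranges. For a group of equivalent elements with
// m copies in the first range and n in the second, all m copies from the
// first range are emitted, followed by max(n - m, 0) from the second.
std::vector<int> union_sorted(const std::vector<int>& a,
                              const std::vector<int>& b) {
    std::vector<int> out;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(out));
    return out;
}
```

For instance, `union_sorted({1, 1, 2}, {1, 1, 1, 3})` yields {1, 1, 1, 2, 3}: two 1s from the first range plus max(3 - 2, 0) = 1 from the second.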
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in descending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
- OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
- OutputIterator set_union(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
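As with the other comparator overloads, the behavior above can be checked against the host STL's `std::set_union` with a comparison object; a sketch for descending input, with no Thrust dependency (helper name ours):

```cpp
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// Union of two ranges sorted in descending order, using std::greater<int>
// as the strict weak ordering in place of operator<.
std::vector<int> union_descending(const std::vector<int>& a,
                                  const std::vector<int>& b) {
    std::vector<int> out;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(out), std::greater<int>());
    return out;
}
```

The result comes back in the same descending order, with elements common to both ranges drawn from the first range.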
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {0, 1, 3, 4, 5, 6};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * the [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[6] = {0, 1, 3, 4, 5, 6};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
- set_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * the [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * the [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-  thrust::pair<OutputIterator1,OutputIterator2>
- set_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
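The overloads above all share the semantics spelled out in their comments: keys in the first range that are absent from the second survive, each carrying its value, and for m equivalent keys in range 1 against n in range 2 the last max(m-n,0) survive. As a concrete illustration, here is a minimal serial sketch of those semantics using only the standard library; the name `set_difference_by_key_sketch` is hypothetical, and this is not Thrust's parallel implementation.

```cpp
#include <cassert>
#include <utility>

// Hypothetical serial sketch of the set_difference_by_key semantics
// documented above -- NOT Thrust's parallel implementation.
// Keys present in [kf1, kl1) but absent from [kf2, kl2) are copied to
// keys_result, and each surviving key carries its corresponding value.
// For m equivalent keys in range 1 and n in range 2, the last
// max(m-n,0) of them survive, matching the documented generalization.
template<typename It1, typename It2, typename It3,
         typename Out1, typename Out2, typename Compare>
std::pair<Out1, Out2>
set_difference_by_key_sketch(It1 kf1, It1 kl1,
                             It2 kf2, It2 kl2,
                             It3 vf1,
                             Out1 keys_result, Out2 values_result,
                             Compare comp)
{
  while (kf1 != kl1 && kf2 != kl2) {
    if (comp(*kf1, *kf2)) {          // key only in range 1: copy key and value
      *keys_result++ = *kf1++;
      *values_result++ = *vf1++;
    } else if (comp(*kf2, *kf1)) {   // key only in range 2: discard it
      ++kf2;
    } else {                         // equivalent keys: cancel one from each range
      ++kf1; ++vf1; ++kf2;
    }
  }
  while (kf1 != kl1) {               // the tail of range 1 always survives
    *keys_result++ = *kf1++;
    *values_result++ = *vf1++;
  }
  return std::make_pair(keys_result, values_result);
}
```

Running this on the ascending example from the documentation (keys {0, 1, 3, 4, 5, 6} minus {1, 3, 5, 7, 9}) yields keys {0, 4, 6} with their range-1 values, matching the `\code` snippet above.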
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
- set_intersection_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
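The intersection semantics documented above (a key appearing m times in range 1 and n times in range 2 appears min(m,n) times in the output, always copied from the first range along with its value, which is why no `values_first2` parameter exists) can likewise be illustrated with a minimal serial sketch; the name `set_intersection_by_key_sketch` is hypothetical, and this stands in for, rather than reproduces, Thrust's parallel implementation.

```cpp
#include <cassert>
#include <utility>

// Hypothetical serial sketch of the set_intersection_by_key semantics
// documented above -- NOT Thrust's parallel implementation.
// A key appearing m times in range 1 and n times in range 2 is emitted
// min(m,n) times, copied (with its value) from the FIRST range only,
// so no second values range is needed.
template<typename It1, typename It2, typename It3,
         typename Out1, typename Out2, typename Compare>
std::pair<Out1, Out2>
set_intersection_by_key_sketch(It1 kf1, It1 kl1,
                               It2 kf2, It2 kl2,
                               It3 vf1,
                               Out1 keys_result, Out2 values_result,
                               Compare comp)
{
  while (kf1 != kl1 && kf2 != kl2) {
    if (comp(*kf1, *kf2)) {          // key only in range 1: skip it
      ++kf1; ++vf1;
    } else if (comp(*kf2, *kf1)) {   // key only in range 2: skip it
      ++kf2;
    } else {                         // match: copy key and value from range 1
      *keys_result++ = *kf1++;
      *values_result++ = *vf1++;
      ++kf2;
    }
  }
  return std::make_pair(keys_result, values_result);
}
```

On the ascending example from the documentation (keys {1, 3, 5, 7, 9, 11} against {1, 1, 2, 3, 5, 8, 13}) this emits keys {1, 3, 5} with their range-1 values; note the duplicate 1 in the second range contributes only min(1,2) = 1 copy.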
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from [values_first1, values_last1) to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_intersection_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 1, 1, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 1, 1, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_symmetric_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 1, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 1, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_symmetric_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
- thrust::pair<OutputIterator1,OutputIterator2>
- set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
- thrust::pair<OutputIterator1,OutputIterator2>
- set_union_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
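The duplicate-handling rule described above (all \c m equivalent keys are taken from the first range, then max(n - m, 0) from the second) can be sketched in plain Python. This is an illustrative model of the documented semantics only, not Thrust's implementation:

```python
def set_union_by_key(keys1, vals1, keys2, vals2):
    """Merge two sorted key ranges. For a run of equivalent keys with m
    copies in range 1 and n copies in range 2, the output carries the m
    range-1 pairs followed by max(n - m, 0) range-2 pairs."""
    i = j = 0
    out_keys, out_vals = [], []
    while i < len(keys1) and j < len(keys2):
        if keys1[i] < keys2[j]:
            out_keys.append(keys1[i]); out_vals.append(vals1[i]); i += 1
        elif keys2[j] < keys1[i]:
            out_keys.append(keys2[j]); out_vals.append(vals2[j]); j += 1
        else:
            # Equivalent keys: keep range 1's pair, consume one from each side.
            out_keys.append(keys1[i]); out_vals.append(vals1[i]); i += 1; j += 1
    out_keys += keys1[i:]; out_vals += vals1[i:]
    out_keys += keys2[j:]; out_vals += vals2[j:]
    return out_keys, out_vals

A_keys, A_vals = [0, 2, 4, 6, 8, 10, 12], [0] * 7
B_keys, B_vals = [1, 3, 5, 7, 9], [1] * 5
keys, vals = set_union_by_key(A_keys, A_vals, B_keys, B_vals)
# keys == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]
# vals == [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0]
```

With duplicates, e.g. three 5s in the first range and four in the second, the output contains three range-1 pairs and exactly one range-2 pair, matching max(4 - 3, 0) = 1.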
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
- thrust::pair<OutputIterator1,OutputIterator2>
- set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
- thrust::pair<OutputIterator1,OutputIterator2>
- set_union_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \} // end set_operations
- */
-
-
-} // end thrust
-
-#include <thrust/detail/set_operations.inl>
-
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h
deleted file mode 100644
index 8d2a1b3850dea55c3c8440aa7e22fdb6d002d151..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special transform_reduce functions
-
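As the comment above notes, the sequential backend has no specialized transform_reduce and falls back to the generic implementation. The semantics of transform_reduce, a per-element transformation fused with a reduction in a single pass, can be modeled in a few lines of Python (names here are illustrative, not Thrust's API):

```python
from functools import reduce

def transform_reduce(data, unary_op, init, binary_op):
    # Apply unary_op to every element, then fold the results with
    # binary_op starting from init -- one pass, no temporary container.
    return reduce(binary_op, map(unary_op, data), init)

# Sum of squares of 1..4: 1 + 4 + 9 + 16 = 30
result = transform_reduce([1, 2, 3, 4], lambda x: x * x, 0, lambda a, b: a + b)
```

The fusion matters in parallel backends because it avoids materializing the transformed sequence; the sequential system simply gets this behavior from the generic path.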
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h
deleted file mode 100644
index 098e0f4fbad4001632ed02ee9e9b39aa17e54ea0..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits gather
-#include
-
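The gather operation this header inherits is the indexed-read primitive: result[i] = input[map[i]]. A quick sketch of those semantics in Python (illustrative only; Thrust operates over iterator ranges):

```python
def gather(index_map, source):
    # result[i] = source[index_map[i]] -- each output element is read
    # from the source position named by the corresponding map element.
    return [source[i] for i in index_map]

values = ['a', 'b', 'c', 'd']
picked = gather([3, 0, 2], values)
# picked == ['d', 'a', 'c']
```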
diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py
deleted file mode 100644
index f9a6d1c26bc5b77c2ece7f66511391a0f82dd1f6..0000000000000000000000000000000000000000
--- a/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import cv2
-import glob
-import os
-import sys
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from basicsr.utils.download_util import load_file_from_url
-import numpy as np
-import torch
-from gfpgan import GFPGANer
-from realesrgan import RealESRGANer
-from realesrgan.archs.srvgg_arch import SRVGGNetCompact
-from basicsr.utils import imwrite, img2tensor, tensor2img
-from torchvision.transforms.functional import normalize
-from basicsr.utils.registry import ARCH_REGISTRY
-
-def load_sr(model_path, device, face):
- if face != 'codeformer':
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) #alter to match dims as needed
- netscale = 4
- model_path = os.path.normpath(model_path)
- if not os.path.isfile(model_path):
- model_path = load_file_from_url(
- url='https://github.com/GucciFlipFlops1917/wav2lip-hq-updated-ESRGAN/releases/download/v0.0.1/4x_BigFace_v3_Clear.pth',
- model_dir='weights', progress=True, file_name=None)
- upsampler = RealESRGANer(
- scale=netscale,
- model_path=model_path,
- dni_weight=None,
- model=model,
- tile=0,
- tile_pad=10,
- pre_pad=0,
- half=True,
- gpu_id=0)
- if face is None:
- run_params=upsampler
- else:
- gfp = GFPGANer(
- model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=upsampler)
- run_params=gfp
- else:
- run_params = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9,
- connect_list=['32', '64', '128', '256']).to(device)
- ckpt_path = load_file_from_url(url='https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
- model_dir='weights/CodeFormer', progress=True, file_name=None)
- checkpoint = torch.load(ckpt_path)['params_ema']
- run_params.load_state_dict(checkpoint)
- run_params.eval()
- return run_params
-
-
-def upscale(image, face, properties):
- try:
- if face==1: ## GFP-GAN
- _, _, output = properties.enhance(image, has_aligned=False, only_center_face=False, paste_back=True)
- elif face==2: ## CODEFORMER
- net = properties[0]
- device = properties[1]
- w = properties[2]
- image = cv2.resize(image, (512, 512), interpolation=cv2.INTER_LINEAR)
- cropped_face_t = img2tensor(image / 255., bgr2rgb=True, float32=True)
- normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
- cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
- try:
- with torch.no_grad():
- cropped_face_t = net(cropped_face_t, w=w, adain=True)[0]
- restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
- del cropped_face_t
- torch.cuda.empty_cache()
- except Exception as error:
- print(f'\tFailed inference for CodeFormer: {error}')
- restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
- output = restored_face.astype('uint8')
- elif face==0: ## ESRGAN
- output, _ = properties.enhance(image, outscale=4)
- except RuntimeError as error:
- print('Error', error)
- print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
- return output
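The CodeFormer branch above maps the face crop from 8-bit pixel values to the [-1, 1] range the network expects: divide by 255, then normalize with mean 0.5 and std 0.5 per channel. That mapping can be checked in isolation with plain Python (a sketch of the preprocessing step, independent of the basicsr helpers):

```python
def to_model_range(pixels):
    # [0, 255] -> [0, 1] -> [-1, 1]: the same arithmetic as
    # img2tensor(image / 255.) followed by normalize(mean=0.5, std=0.5).
    return [((p / 255.0) - 0.5) / 0.5 for p in pixels]

out = to_model_range([0, 128, 255])
# 0 maps to -1.0, 255 maps to 1.0, mid-gray lands near 0
```

The inverse mapping is what tensor2img(..., min_max=(-1, 1)) performs on the network output before casting back to uint8.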
diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py
deleted file mode 100644
index ae5d6919169ec497f0f0815184f5db8ba9108fbd..0000000000000000000000000000000000000000
--- a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-import math
-
-from .conv import Conv2dTranspose, Conv2d, nonorm_Conv2d
-
-class Wav2Lip(nn.Module):
- def __init__(self):
- super(Wav2Lip, self).__init__()
-
- self.face_encoder_blocks = nn.ModuleList([
- nn.Sequential(Conv2d(6, 16, kernel_size=7, stride=1, padding=3)), # 96,96
-
- nn.Sequential(Conv2d(16, 32, kernel_size=3, stride=2, padding=1), # 48,48
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(32, 64, kernel_size=3, stride=2, padding=1), # 24,24
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 12,12
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(128, 256, kernel_size=3, stride=2, padding=1), # 6,6
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 3,3
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),),
-
- nn.Sequential(Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),])
-
- self.audio_encoder = nn.Sequential(
- Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(256, 512, kernel_size=3, stride=1, padding=0),
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
-
- self.face_decoder_blocks = nn.ModuleList([
- nn.Sequential(Conv2d(512, 512, kernel_size=1, stride=1, padding=0),),
-
- nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=1, padding=0), # 3,3
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),),
-
- nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),), # 6, 6
-
- nn.Sequential(Conv2dTranspose(768, 384, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True),), # 12, 12
-
- nn.Sequential(Conv2dTranspose(512, 256, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),), # 24, 24
-
- nn.Sequential(Conv2dTranspose(320, 128, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),), # 48, 48
-
- nn.Sequential(Conv2dTranspose(160, 64, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),),]) # 96,96
-
- self.output_block = nn.Sequential(Conv2d(80, 32, kernel_size=3, stride=1, padding=1),
- nn.Conv2d(32, 3, kernel_size=1, stride=1, padding=0),
- nn.Sigmoid())
-
- def forward(self, audio_sequences, face_sequences):
- # audio_sequences = (B, T, 1, 80, 16)
- B = audio_sequences.size(0)
-
- input_dim_size = len(face_sequences.size())
- if input_dim_size > 4:
- audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)
- face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0)
-
- audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1
-
- feats = []
- x = face_sequences
- for f in self.face_encoder_blocks:
- x = f(x)
- feats.append(x)
-
- x = audio_embedding
- for f in self.face_decoder_blocks:
- x = f(x)
- try:
- x = torch.cat((x, feats[-1]), dim=1)
- except Exception as e:
- print(x.size())
- print(feats[-1].size())
- raise e
-
- feats.pop()
-
- x = self.output_block(x)
-
- if input_dim_size > 4:
- x = torch.split(x, B, dim=0) # [(B, C, H, W)]
- outputs = torch.stack(x, dim=2) # (B, C, T, H, W)
-
- else:
- outputs = x
-
- return outputs
-
-class Wav2Lip_disc_qual(nn.Module):
- def __init__(self):
- super(Wav2Lip_disc_qual, self).__init__()
-
- self.face_encoder_blocks = nn.ModuleList([
- nn.Sequential(nonorm_Conv2d(3, 32, kernel_size=7, stride=1, padding=3)), # 48,96
-
- nn.Sequential(nonorm_Conv2d(32, 64, kernel_size=5, stride=(1, 2), padding=2), # 48,48
- nonorm_Conv2d(64, 64, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(64, 128, kernel_size=5, stride=2, padding=2), # 24,24
- nonorm_Conv2d(128, 128, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(128, 256, kernel_size=5, stride=2, padding=2), # 12,12
- nonorm_Conv2d(256, 256, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 6,6
- nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1)),
-
- nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=2, padding=1), # 3,3
- nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1),),
-
- nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1
- nonorm_Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),])
-
- self.binary_pred = nn.Sequential(nn.Conv2d(512, 1, kernel_size=1, stride=1, padding=0), nn.Sigmoid())
- self.label_noise = .0
-
- def get_lower_half(self, face_sequences):
- return face_sequences[:, :, face_sequences.size(2)//2:]
-
- def to_2d(self, face_sequences):
- B = face_sequences.size(0)
- face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0)
- return face_sequences
-
- def perceptual_forward(self, false_face_sequences):
- false_face_sequences = self.to_2d(false_face_sequences)
- false_face_sequences = self.get_lower_half(false_face_sequences)
-
- false_feats = false_face_sequences
- for f in self.face_encoder_blocks:
- false_feats = f(false_feats)
-
- false_pred_loss = F.binary_cross_entropy(self.binary_pred(false_feats).view(len(false_feats), -1),
- torch.ones((len(false_feats), 1)).cuda())
-
- return false_pred_loss
-
- def forward(self, face_sequences):
- face_sequences = self.to_2d(face_sequences)
- face_sequences = self.get_lower_half(face_sequences)
-
- x = face_sequences
- for f in self.face_encoder_blocks:
- x = f(x)
-
- return self.binary_pred(x).view(len(x), -1)
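Wav2Lip.forward above folds the time axis into the batch axis before encoding (torch.cat over the per-timestep slices), then splits and stacks the result back into (B, C, T, H, W). The round-trip can be sketched with plain nested lists; this mirrors the index order of the torch calls but is illustrative only:

```python
def fold_time(batch):
    # (B, T, ...) -> (B*T, ...): concatenate timestep slices, mirroring
    # torch.cat([x[:, t] for t in range(T)], dim=0) -- all batch items
    # for t=0 come first, then all for t=1, and so on.
    B, T = len(batch), len(batch[0])
    return [batch[b][t] for t in range(T) for b in range(B)]

def unfold_time(flat, B):
    # (B*T, ...) -> (B, T, ...): split into T chunks of size B and regroup,
    # mirroring torch.split(x, B, dim=0) followed by torch.stack(..., dim=2).
    T = len(flat) // B
    return [[flat[t * B + b] for t in range(T)] for b in range(B)]

batch = [['b0t0', 'b0t1'], ['b1t0', 'b1t1']]  # B=2, T=2
flat = fold_time(batch)
# flat == ['b0t0', 'b1t0', 'b0t1', 'b1t1']
```

Folding lets every timestep share one pass through the 2D convolutional encoders; the unfold restores the temporal dimension for the video output.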
diff --git a/spaces/matthoffner/chatbot/pages/api/home/home.tsx b/spaces/matthoffner/chatbot/pages/api/home/home.tsx
deleted file mode 100644
index 884d6637c4521f2fd512da948a03ecb9a90b4122..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot/pages/api/home/home.tsx
+++ /dev/null
@@ -1,430 +0,0 @@
-import { useEffect, useRef, useState } from 'react';
-import { useQuery } from 'react-query';
-
-import { GetServerSideProps } from 'next';
-import { useTranslation } from 'next-i18next';
-import { serverSideTranslations } from 'next-i18next/serverSideTranslations';
-import Head from 'next/head';
-
-import { useCreateReducer } from '@/hooks/useCreateReducer';
-
-import useErrorService from '@/services/errorService';
-import useApiService from '@/services/useApiService';
-
-import {
- cleanConversationHistory,
- cleanSelectedConversation,
-} from '@/utils/app/clean';
-import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const';
-import {
- saveConversation,
- saveConversations,
- updateConversation,
-} from '@/utils/app/conversation';
-import { saveFolders } from '@/utils/app/folders';
-import { savePrompts } from '@/utils/app/prompts';
-import { getSettings } from '@/utils/app/settings';
-
-import { Conversation } from '@/types/chat';
-import { KeyValuePair } from '@/types/data';
-import { FolderInterface, FolderType } from '@/types/folder';
-import { OpenAIModelID, OpenAIModels, fallbackModelID } from '@/types/openai';
-import { Prompt } from '@/types/prompt';
-
-import { Chat } from '@/components/Chat/Chat';
-import { Chatbar } from '@/components/Chatbar/Chatbar';
-import { Navbar } from '@/components/Mobile/Navbar';
-import Promptbar from '@/components/Promptbar';
-
-import HomeContext from './home.context';
-import { HomeInitialState, initialState } from './home.state';
-
-import { v4 as uuidv4 } from 'uuid';
-
-interface Props {
- serverSideApiKeyIsSet: boolean;
- serverSidePluginKeysSet: boolean;
- defaultModelId: OpenAIModelID;
-}
-
-const Home = ({
- serverSideApiKeyIsSet,
- serverSidePluginKeysSet,
- defaultModelId,
-}: Props) => {
- const { t } = useTranslation('chat');
- const { getModels } = useApiService();
- const { getModelsError } = useErrorService();
- const [initialRender, setInitialRender] = useState(true);
-
- const contextValue = useCreateReducer({
- initialState,
- });
-
- const {
- state: {
- apiKey,
- lightMode,
- folders,
- conversations,
- selectedConversation,
- prompts,
- temperature,
- },
- dispatch,
- } = contextValue;
-
- const stopConversationRef = useRef(false);
-
- const { data, error, refetch } = useQuery(
- ['GetModels', apiKey, serverSideApiKeyIsSet],
- ({ signal }) => {
-
- return getModels(
- {
- key: 'apiKey',
- },
- signal,
- );
- },
- { enabled: true, refetchOnMount: false },
- );
-
- useEffect(() => {
- if (data) dispatch({ field: 'models', value: data });
- }, [data, dispatch]);
-
- useEffect(() => {
- dispatch({ field: 'modelError', value: getModelsError(error) });
- }, [dispatch, error, getModelsError]);
-
- // FETCH MODELS ----------------------------------------------
-
- const handleSelectConversation = (conversation: Conversation) => {
- dispatch({
- field: 'selectedConversation',
- value: conversation,
- });
-
- saveConversation(conversation);
- };
-
- // FOLDER OPERATIONS --------------------------------------------
-
- const handleCreateFolder = (name: string, type: FolderType) => {
- const newFolder: FolderInterface = {
- id: uuidv4(),
- name,
- type,
- };
-
- const updatedFolders = [...folders, newFolder];
-
- dispatch({ field: 'folders', value: updatedFolders });
- saveFolders(updatedFolders);
- };
-
- const handleDeleteFolder = (folderId: string) => {
- const updatedFolders = folders.filter((f) => f.id !== folderId);
- dispatch({ field: 'folders', value: updatedFolders });
- saveFolders(updatedFolders);
-
- const updatedConversations: Conversation[] = conversations.map((c) => {
- if (c.folderId === folderId) {
- return {
- ...c,
- folderId: null,
- };
- }
-
- return c;
- });
-
- dispatch({ field: 'conversations', value: updatedConversations });
- saveConversations(updatedConversations);
-
- const updatedPrompts: Prompt[] = prompts.map((p) => {
- if (p.folderId === folderId) {
- return {
- ...p,
- folderId: null,
- };
- }
-
- return p;
- });
-
- dispatch({ field: 'prompts', value: updatedPrompts });
- savePrompts(updatedPrompts);
- };
-
- const handleUpdateFolder = (folderId: string, name: string) => {
- const updatedFolders = folders.map((f) => {
- if (f.id === folderId) {
- return {
- ...f,
- name,
- };
- }
-
- return f;
- });
-
- dispatch({ field: 'folders', value: updatedFolders });
-
- saveFolders(updatedFolders);
- };
-
- // CONVERSATION OPERATIONS --------------------------------------------
-
- const handleNewConversation = () => {
- const lastConversation = conversations[conversations.length - 1];
-
- const newConversation: Conversation = {
- id: uuidv4(),
- name: t('New Conversation'),
- messages: [],
- model: lastConversation?.model || {
- id: OpenAIModels[defaultModelId].id,
- name: OpenAIModels[defaultModelId].name,
- maxLength: OpenAIModels[defaultModelId].maxLength,
- tokenLimit: OpenAIModels[defaultModelId].tokenLimit,
- },
- prompt: DEFAULT_SYSTEM_PROMPT,
- temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE,
- folderId: null,
- };
-
- const updatedConversations = [...conversations, newConversation];
-
- dispatch({ field: 'selectedConversation', value: newConversation });
- dispatch({ field: 'conversations', value: updatedConversations });
-
- saveConversation(newConversation);
- saveConversations(updatedConversations);
-
- dispatch({ field: 'loading', value: false });
- };
-
- const handleUpdateConversation = (
- conversation: Conversation,
- data: KeyValuePair,
- ) => {
- const updatedConversation = {
- ...conversation,
- [data.key]: data.value,
- };
-
- const { single, all } = updateConversation(
- updatedConversation,
- conversations,
- );
-
- dispatch({ field: 'selectedConversation', value: single });
- dispatch({ field: 'conversations', value: all });
- };
-
- // EFFECTS --------------------------------------------
-
- useEffect(() => {
- if (window.innerWidth < 640) {
- dispatch({ field: 'showChatbar', value: false });
- }
- }, [selectedConversation]);
-
- useEffect(() => {
- defaultModelId &&
- dispatch({ field: 'defaultModelId', value: defaultModelId });
- serverSideApiKeyIsSet &&
- dispatch({
- field: 'serverSideApiKeyIsSet',
- value: serverSideApiKeyIsSet,
- });
- serverSidePluginKeysSet &&
- dispatch({
- field: 'serverSidePluginKeysSet',
- value: serverSidePluginKeysSet,
- });
- }, [defaultModelId, serverSideApiKeyIsSet, serverSidePluginKeysSet]);
-
- // ON LOAD --------------------------------------------
-
- useEffect(() => {
- const settings = getSettings();
- if (settings.theme) {
- dispatch({
- field: 'lightMode',
- value: settings.theme,
- });
- }
-
- const apiKey = "test";
-
- if (serverSideApiKeyIsSet) {
- dispatch({ field: 'apiKey', value: '' });
-
- localStorage.removeItem('apiKey');
- } else if (apiKey) {
- dispatch({ field: 'apiKey', value: apiKey });
- }
-
- const pluginKeys = localStorage.getItem('pluginKeys');
- if (serverSidePluginKeysSet) {
- dispatch({ field: 'pluginKeys', value: [] });
- localStorage.removeItem('pluginKeys');
- } else if (pluginKeys) {
- dispatch({ field: 'pluginKeys', value: pluginKeys });
- }
-
- if (window.innerWidth < 640) {
- dispatch({ field: 'showChatbar', value: false });
- dispatch({ field: 'showPromptbar', value: false });
- }
-
- const showChatbar = localStorage.getItem('showChatbar');
- if (showChatbar) {
- dispatch({ field: 'showChatbar', value: showChatbar === 'true' });
- }
-
- const showPromptbar = localStorage.getItem('showPromptbar');
- if (showPromptbar) {
- dispatch({ field: 'showPromptbar', value: showPromptbar === 'true' });
- }
-
- const folders = localStorage.getItem('folders');
- if (folders) {
- dispatch({ field: 'folders', value: JSON.parse(folders) });
- }
-
- const prompts = localStorage.getItem('prompts');
- if (prompts) {
- dispatch({ field: 'prompts', value: JSON.parse(prompts) });
- }
-
- const conversationHistory = localStorage.getItem('conversationHistory');
- if (conversationHistory) {
- const parsedConversationHistory: Conversation[] =
- JSON.parse(conversationHistory);
- const cleanedConversationHistory = cleanConversationHistory(
- parsedConversationHistory,
- );
-
- dispatch({ field: 'conversations', value: cleanedConversationHistory });
- }
-
- const selectedConversation = localStorage.getItem('selectedConversation');
- if (selectedConversation) {
- const parsedSelectedConversation: Conversation =
- JSON.parse(selectedConversation);
- const cleanedSelectedConversation = cleanSelectedConversation(
- parsedSelectedConversation,
- );
-
- dispatch({
- field: 'selectedConversation',
- value: cleanedSelectedConversation,
- });
- } else {
- const lastConversation = conversations[conversations.length - 1];
- dispatch({
- field: 'selectedConversation',
- value: {
- id: uuidv4(),
- name: t('New Conversation'),
- messages: [],
- model: OpenAIModels[defaultModelId],
- prompt: DEFAULT_SYSTEM_PROMPT,
- temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE,
- folderId: null,
- },
- });
- }
- }, [
- defaultModelId,
- dispatch,
- serverSideApiKeyIsSet,
- serverSidePluginKeysSet,
- ]);
-
- return (
-    <HomeContext.Provider
-      value={{
-        ...contextValue,
-        handleNewConversation,
-        handleCreateFolder,
-        handleDeleteFolder,
-        handleUpdateFolder,
-        handleSelectConversation,
-        handleUpdateConversation,
-      }}
-    >
-      <Head>
-        <title>Chatbot UI</title>
-        <meta name="description" content="ChatGPT but better." />
-        <meta
-          name="viewport"
-          content="height=device-height ,width=device-width, initial-scale=1, user-scalable=no"
-        />
-        <link rel="icon" href="/favicon.ico" />
-      </Head>
-      {selectedConversation && (
-        <main
-          className={`flex h-screen w-screen flex-col text-sm text-white dark:text-white ${lightMode}`}
-        >
-          <div className="fixed top-0 w-full sm:hidden">
-            <Navbar
-              selectedConversation={selectedConversation}
-              onNewConversation={handleNewConversation}
-            />
-          </div>
-
-          <div className="flex h-full w-full pt-[48px] sm:pt-0">
-            <Chatbar />
-
-            <div className="flex flex-1">
-              <Chat stopConversationRef={stopConversationRef} />
-            </div>
-
-            <Promptbar />
-          </div>
-        </main>
-      )}
-    </HomeContext.Provider>
- );
-};
-export default Home;
-
-export const getServerSideProps: GetServerSideProps = async ({ locale }) => {
- const defaultModelId =
- (process.env.DEFAULT_MODEL &&
- Object.values(OpenAIModelID).includes(
- process.env.DEFAULT_MODEL as OpenAIModelID,
- ) &&
- process.env.DEFAULT_MODEL) ||
- fallbackModelID;
-
- let serverSidePluginKeysSet = false;
-
- const googleApiKey = process.env.GOOGLE_API_KEY;
- const googleCSEId = process.env.GOOGLE_CSE_ID;
-
- if (googleApiKey && googleCSEId) {
- serverSidePluginKeysSet = true;
- }
-
- return {
- props: {
- serverSideApiKeyIsSet: !!process.env.OPENAI_API_KEY,
- defaultModelId,
- serverSidePluginKeysSet,
- ...(await serverSideTranslations(locale ?? 'en', [
- 'common',
- 'chat',
- 'sidebar',
- 'markdown',
- 'promptbar',
- 'settings',
- ])),
- },
- };
-};
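The on-load effect in the deleted `Home` component follows one pattern throughout: read a persisted key from `localStorage`, parse or coerce it, then dispatch it into reducer state. A language-neutral sketch of that hydration step, written in Python for consistency with the rest of this dump (`storage` is a stand-in dict for `localStorage`, and the sample values are fabricated):

```python
import json

# Stand-ins: localStorage persists only strings, so booleans arrive as
# 'true'/'false' and structured data arrives as JSON text.
storage = {"showChatbar": "false", "folders": '[{"id": "f1", "name": "Work"}]'}
state = {"showChatbar": True, "folders": [], "prompts": []}

def dispatch(field, value):
    """Toy reducer dispatch: write one field into the shared state."""
    state[field] = value

show_chatbar = storage.get("showChatbar")
if show_chatbar is not None:
    dispatch("showChatbar", show_chatbar == "true")  # coerce string flag

folders = storage.get("folders")
if folders is not None:
    dispatch("folders", json.loads(folders))  # parse persisted JSON

print(state["showChatbar"], len(state["folders"]))
```

As in the original, a missing key leaves the reducer's initial default untouched.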
diff --git a/spaces/matthoffner/chatbot/utils/server/index.ts b/spaces/matthoffner/chatbot/utils/server/index.ts
deleted file mode 100644
index af243dc3af7eb37f0c4078b92ace624be42f2787..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot/utils/server/index.ts
+++ /dev/null
@@ -1,118 +0,0 @@
-import { Message } from '@/types/chat';
-import { OpenAIModel } from '@/types/openai';
-
-import { AZURE_DEPLOYMENT_ID, OPENAI_API_HOST, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_ORGANIZATION } from '../app/const';
-
-import {
- ParsedEvent,
- ReconnectInterval,
- createParser,
-} from 'eventsource-parser';
-
-export class OpenAIError extends Error {
- type: string;
- param: string;
- code: string;
-
- constructor(message: string, type: string, param: string, code: string) {
- super(message);
- this.name = 'OpenAIError';
- this.type = type;
- this.param = param;
- this.code = code;
- }
-}
-
-export const OpenAIStream = async (
- model: OpenAIModel,
- systemPrompt: string,
-  temperature: number,
- key: string,
- messages: Message[],
-) => {
- let url = `${OPENAI_API_HOST}/v1/chat/completions`;
- if (OPENAI_API_TYPE === 'azure') {
- url = `${OPENAI_API_HOST}/openai/deployments/${AZURE_DEPLOYMENT_ID}/chat/completions?api-version=${OPENAI_API_VERSION}`;
- }
- const res = await fetch(url, {
- headers: {
- 'Content-Type': 'application/json',
- ...(OPENAI_API_TYPE === 'openai' && {
- Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}`
- }),
- ...(OPENAI_API_TYPE === 'azure' && {
- 'api-key': `${key ? key : process.env.OPENAI_API_KEY}`
- }),
- ...((OPENAI_API_TYPE === 'openai' && OPENAI_ORGANIZATION) && {
- 'OpenAI-Organization': OPENAI_ORGANIZATION,
- }),
- },
- method: 'POST',
- body: JSON.stringify({
- ...(OPENAI_API_TYPE === 'openai' && {model: model.id}),
- messages: [
- {
- role: 'system',
- content: systemPrompt,
- },
- ...messages,
- ],
- max_tokens: 1000,
- temperature: temperature,
- stream: true,
- stop: ["###Human:"]
- }),
- });
-
- const encoder = new TextEncoder();
- const decoder = new TextDecoder();
-
- if (res.status !== 200) {
- const result = await res.json();
- if (result.error) {
- throw new OpenAIError(
- result.error.message,
- result.error.type,
- result.error.param,
- result.error.code,
- );
- } else {
- throw new Error(
- `OpenAI API returned an error: ${
- decoder.decode(result?.value) || result.statusText
- }`,
- );
- }
- }
-
- const stream = new ReadableStream({
- async start(controller) {
- const onParse = (event: ParsedEvent | ReconnectInterval) => {
- if (event.type === 'event') {
- const data = event.data;
-
- try {
- const json = JSON.parse(data);
- if (json.choices[0].finish_reason != null) {
- controller.close();
- return;
- }
- const text = json.choices[0].delta.content;
- const queue = encoder.encode(text);
- controller.enqueue(queue);
- } catch (e) {
- controller.error(e);
- }
- }
- };
-
- const parser = createParser(onParse);
-
- for await (const chunk of res.body as any) {
- parser.feed(decoder.decode(chunk));
- }
- },
- });
-
- return stream;
-};
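The deleted `OpenAIStream` helper feeds each network chunk through `eventsource-parser` and enqueues `choices[0].delta.content` until a non-null `finish_reason` arrives. A hedged Python sketch of just that parsing step, operating on already-split `data:` lines (the sample payloads below are fabricated for illustration):

```python
import json

def extract_tokens(sse_lines):
    """Collect delta tokens from server-sent-event lines, as onParse does above."""
    tokens = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # ignore comments, ids, and blank keep-alives
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        choice = chunk["choices"][0]
        if choice.get("finish_reason") is not None:  # stream finished
            break
        tokens.append(choice["delta"].get("content", ""))
    return tokens

stream = [
    'data: {"choices":[{"delta":{"content":"Hel"},"finish_reason":null}]}',
    'data: {"choices":[{"delta":{"content":"lo"},"finish_reason":null}]}',
    'data: {"choices":[{"delta":{},"finish_reason":"stop"}]}',
    'data: [DONE]',
]
print("".join(extract_tokens(stream)))  # -> Hello
```

The real helper additionally re-encodes each token and pushes it into a `ReadableStream` controller; this sketch keeps only the event-parsing logic.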
diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/discriminator.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/discriminator.py
deleted file mode 100644
index 764c0ca806b707e4f36ca2abb64ce79971358dd9..0000000000000000000000000000000000000000
-with open("data/lid/all_langs.tsv") as f:
+++ /dev/null
@@ -1,39 +0,0 @@
-import torch
-import torch.nn as nn
-
-from omegaconf import OmegaConf
-from .msd import ScaleDiscriminator
-from .mpd import MultiPeriodDiscriminator
-from .mrd import MultiResolutionDiscriminator
-
-
-class Discriminator(nn.Module):
- def __init__(self, hp):
- super(Discriminator, self).__init__()
- self.MRD = MultiResolutionDiscriminator(hp)
- self.MPD = MultiPeriodDiscriminator(hp)
- self.MSD = ScaleDiscriminator()
-
- def forward(self, x):
- r = self.MRD(x)
- p = self.MPD(x)
- s = self.MSD(x)
- return r + p + s
-
-
-if __name__ == '__main__':
- hp = OmegaConf.load('../config/base.yaml')
- model = Discriminator(hp)
-
- x = torch.randn(3, 1, 16384)
- print(x.shape)
-
- output = model(x)
- for features, score in output:
- for feat in features:
- print(feat.shape)
- print(score.shape)
-
- pytorch_total_params = sum(p.numel()
- for p in model.parameters() if p.requires_grad)
- print(pytorch_total_params)
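One easy-to-miss detail in the deleted `Discriminator`: `return r + p + s` is *list* concatenation, not tensor addition, because each sub-discriminator returns a list of `(feature_maps, score)` pairs, which is exactly what the `__main__` loop iterates over. A tiny sketch with hypothetical stub sub-discriminators:

```python
class FakeSubDisc:
    """Stand-in for MRD/MPD/MSD: returns a list of (features, score) pairs."""
    def __init__(self, name, n_branches):
        self.name, self.n_branches = name, n_branches

    def __call__(self, x):
        return [(["feat"], f"{self.name}_score_{i}") for i in range(self.n_branches)]

mrd, mpd, msd = FakeSubDisc("mrd", 2), FakeSubDisc("mpd", 3), FakeSubDisc("msd", 1)

# As in forward() above: `+` flattens all sub-discriminator outputs
# into one list the training loop can iterate uniformly.
outputs = mrd(None) + mpd(None) + msd(None)
print(len(outputs))  # -> 6
```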
diff --git a/spaces/menghanxia/ReversibleHalftoning/model/__init__.py b/spaces/menghanxia/ReversibleHalftoning/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py b/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
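`maximum_path_c` implements, in Cython, a monotonic-alignment dynamic program: find the path through the `(t_t, t_s)` score matrix that maximizes the summed scores while only ever staying on the same source index or advancing by one. A pure-NumPy sketch of the same idea for a single matrix; it is simplified (square, reachable layout, no batching or masking) and only illustrates the recurrence:

```python
import numpy as np

def maximum_path_np(value):
    """DP over one (t_t, t_s) score matrix; returns a 0/1 alignment path."""
    t_t, t_s = value.shape
    dp = np.full((t_t, t_s), -np.inf, dtype=np.float32)
    dp[0, 0] = value[0, 0]
    for y in range(1, t_t):
        for x in range(t_s):
            stay = dp[y - 1, x]                      # keep same source index
            move = dp[y - 1, x - 1] if x > 0 else -np.inf  # advance by one
            dp[y, x] = value[y, x] + max(stay, move)
    # Backtrack from the bottom-right corner.
    path = np.zeros((t_t, t_s), dtype=np.int32)
    x = t_s - 1
    for y in range(t_t - 1, -1, -1):
        path[y, x] = 1
        if y > 0 and x > 0 and dp[y - 1, x - 1] >= dp[y - 1, x]:
            x -= 1
    return path

# A near-diagonal score matrix should yield the diagonal path.
value = np.log(np.eye(4, 4) + 1e-3).astype(np.float32)
print(maximum_path_np(value).diagonal())  # -> [1 1 1 1]
```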
diff --git a/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js b/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js
deleted file mode 100644
index 305b037acebf14e083ead577ce566ad39b81c531..0000000000000000000000000000000000000000
--- a/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js
+++ /dev/null
@@ -1,119 +0,0 @@
-
-function createPhotoScroller(){
-
- var base_path = 'img/woman_washing_clothes.jpeg'
- var data = [
- {
- 'path': 'img/labels_1.svg',
- 'alt': 'Image of a woman washing clothes with bounding boxes including \'person\', and \'bucket\'',
- 'x': 198,
- 'y': 30,
- 'width': 305,
- 'height': 400,
- },
-
- {
- 'path': 'img/labels_4.svg',
- 'alt': 'Image of a woman washing clothes with bounding boxes including \'parent\', and \'laundry\'',
- 'x': 110,
- 'y': 60,
- 'width': 450,
- 'height': 470,
- },
-
-
- {
- 'path': 'img/labels_2.svg',
- 'alt': 'Image of a woman washing clothes with bounding boxes including \'hair_boho\', and \'decor_outdoor_rustic\'',
- 'x': 198,
- 'y': -35,
- 'width': 395,
- 'height': 500
- },
-
- {
- 'path': 'img/labels_3.svg',
- 'alt': 'Image of a woman washing clothes with one bounding box around her, labeled \'pedestrian\'',
- 'x': 190,
- 'y': 65,
- 'width': 190,
- 'height': 315
- },
- ];
-
-
- var photoIndex = 0;
-
- var c = d3.conventions({
- sel: d3.select('.person-photos').html(''),
- height: 550
- })
-
- var photoSel = c.svg.append('svg:image')
- .attr('x', 50)
- .attr('y', 50)
- .attr('width', 700)
- .attr('height', 500)
- .attr('xlink:href', base_path)
-
- var photoSel = c.svg.appendMany('svg:image', data)
- .attr('x', d => d.x)
- .attr('y', d => d.y)
- .attr('width', d => d.width)
- .attr('height', d => d.height)
- .attr('xlink:href', d => d.path)
- .attr('alt', d => d.alt)
-
-
- var buttonHeight = 35
- var buttonWidth = 130
-
- var buttonSel = c.svg.appendMany('g.photo-button', data)
- .translate((d,i) => [(i * 170) + 100, 0])
- .at({
- // class: "dropdown"
- })
- .on('click', function(d, i){
- photoIndex = i
- setActiveImage()
- timer.stop();
- })
-
- buttonSel.append('rect')
- .at({
- height: buttonHeight,
- width: buttonWidth,
- // fill: '#fff'
- })
-
- buttonSel.append('text')
- .at({
- textAnchor: 'middle',
- // dominantBaseline: 'central',
- dy: '.33em',
- x: buttonWidth/2,
- y: buttonHeight/2,
- class: "monospace"
- })
- .text((d,i) => 'ground truth ' + (i + 1))
-
- // buttonSel.classed('dropdown', true);
-
- if (window.__photoPersonTimer) window.__photoPersonTimer.stop()
- var timer = window.__photoPersonTimer = d3.interval(() => {
- photoIndex = (photoIndex + 1) % data.length;
- setActiveImage()
- }, 2000)
-
- function setActiveImage(i){
- photoSel.st({opacity: (d, i) => i == photoIndex ? 1 : 0 })
- buttonSel.classed('is-active-button', (d, i) => i == photoIndex)
- }
- setActiveImage()
-}
-
-createPhotoScroller();
-
-
-
-
diff --git a/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js b/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js
deleted file mode 100644
index 7ab68f297f98c655427a84de22388906182b240c..0000000000000000000000000000000000000000
--- a/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js
+++ /dev/null
@@ -1,52 +0,0 @@
-/* Copyright 2020 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-
-var annotations =
-[
-]
-
-
-function addSwoop(c){
- var swoopy = d3.swoopyDrag()
- .x(d => c.x(d.x))
- .y(d => c.y(d.y))
- .draggable(0)
- .annotations(annotations)
-
- var swoopySel = c.svg.append('g.annotations').call(swoopy)
-
- c.svg.append('marker#arrow')
- .attr('viewBox', '-10 -10 20 20')
- .attr('markerWidth', 20)
- .attr('markerHeight', 20)
- .attr('orient', 'auto')
- .append('path').at({d: 'M-6.75,-6.75 L 0,0 L -6.75,6.75'})
-
-
- swoopySel.selectAll('path').attr('marker-end', 'url(#arrow)')
- window.annotationSel = swoopySel.selectAll('g')
- .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0})
-
- swoopySel.selectAll('text')
- .each(function(d){
- d3.select(this)
- .text('') //clear existing text
- .tspans(d3.wordwrap(d.text, d.width || 20), 12) //wrap after 20 char
- })
-}
-
-
diff --git a/spaces/mikeion/research_guru/README.md b/spaces/mikeion/research_guru/README.md
deleted file mode 100644
index 15b8c756d439437f5e40f2718ee9e3f084ce4d5e..0000000000000000000000000000000000000000
--- a/spaces/mikeion/research_guru/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Research Guru
-emoji: 🐠
-colorFrom: gray
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py b/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py
deleted file mode 100644
index e40f87c945da20e78c0a3ea230bc9f36d1800071..0000000000000000000000000000000000000000
--- a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py
+++ /dev/null
@@ -1,588 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets import load_sem_seg
-
-PASCALCONTEX59_NAMES = (
- "aeroplane",
- "bicycle",
- "bird",
- "boat",
- "bottle",
- "bus",
- "car",
- "cat",
- "chair",
- "cow",
- "table",
- "dog",
- "horse",
- "motorbike",
- "person",
- "pottedplant",
- "sheep",
- "sofa",
- "train",
- "tvmonitor",
- "bag",
- "bed",
- "bench",
- "book",
- "building",
- "cabinet",
- "ceiling",
- "cloth",
- "computer",
- "cup",
- "door",
- "fence",
- "floor",
- "flower",
- "food",
- "grass",
- "ground",
- "keyboard",
- "light",
- "mountain",
- "mouse",
- "curtain",
- "platform",
- "sign",
- "plate",
- "road",
- "rock",
- "shelves",
- "sidewalk",
- "sky",
- "snow",
- "bedclothes",
- "track",
- "tree",
- "truck",
- "wall",
- "water",
- "window",
- "wood",
-)
-
-PASCALCONTEX459_NAMES = (
- "accordion",
- "aeroplane",
- "air conditioner",
- "antenna",
- "artillery",
- "ashtray",
- "atrium",
- "baby carriage",
- "bag",
- "ball",
- "balloon",
- "bamboo weaving",
- "barrel",
- "baseball bat",
- "basket",
- "basketball backboard",
- "bathtub",
- "bed",
- "bedclothes",
- "beer",
- "bell",
- "bench",
- "bicycle",
- "binoculars",
- "bird",
- "bird cage",
- "bird feeder",
- "bird nest",
- "blackboard",
- "board",
- "boat",
- "bone",
- "book",
- "bottle",
- "bottle opener",
- "bowl",
- "box",
- "bracelet",
- "brick",
- "bridge",
- "broom",
- "brush",
- "bucket",
- "building",
- "bus",
- "cabinet",
- "cabinet door",
- "cage",
- "cake",
- "calculator",
- "calendar",
- "camel",
- "camera",
- "camera lens",
- "can",
- "candle",
- "candle holder",
- "cap",
- "car",
- "card",
- "cart",
- "case",
- "casette recorder",
- "cash register",
- "cat",
- "cd",
- "cd player",
- "ceiling",
- "cell phone",
- "cello",
- "chain",
- "chair",
- "chessboard",
- "chicken",
- "chopstick",
- "clip",
- "clippers",
- "clock",
- "closet",
- "cloth",
- "clothes tree",
- "coffee",
- "coffee machine",
- "comb",
- "computer",
- "concrete",
- "cone",
- "container",
- "control booth",
- "controller",
- "cooker",
- "copying machine",
- "coral",
- "cork",
- "corkscrew",
- "counter",
- "court",
- "cow",
- "crabstick",
- "crane",
- "crate",
- "cross",
- "crutch",
- "cup",
- "curtain",
- "cushion",
- "cutting board",
- "dais",
- "disc",
- "disc case",
- "dishwasher",
- "dock",
- "dog",
- "dolphin",
- "door",
- "drainer",
- "dray",
- "drink dispenser",
- "drinking machine",
- "drop",
- "drug",
- "drum",
- "drum kit",
- "duck",
- "dumbbell",
- "earphone",
- "earrings",
- "egg",
- "electric fan",
- "electric iron",
- "electric pot",
- "electric saw",
- "electronic keyboard",
- "engine",
- "envelope",
- "equipment",
- "escalator",
- "exhibition booth",
- "extinguisher",
- "eyeglass",
- "fan",
- "faucet",
- "fax machine",
- "fence",
- "ferris wheel",
- "fire extinguisher",
- "fire hydrant",
- "fire place",
- "fish",
- "fish tank",
- "fishbowl",
- "fishing net",
- "fishing pole",
- "flag",
- "flagstaff",
- "flame",
- "flashlight",
- "floor",
- "flower",
- "fly",
- "foam",
- "food",
- "footbridge",
- "forceps",
- "fork",
- "forklift",
- "fountain",
- "fox",
- "frame",
- "fridge",
- "frog",
- "fruit",
- "funnel",
- "furnace",
- "game controller",
- "game machine",
- "gas cylinder",
- "gas hood",
- "gas stove",
- "gift box",
- "glass",
- "glass marble",
- "globe",
- "glove",
- "goal",
- "grandstand",
- "grass",
- "gravestone",
- "ground",
- "guardrail",
- "guitar",
- "gun",
- "hammer",
- "hand cart",
- "handle",
- "handrail",
- "hanger",
- "hard disk drive",
- "hat",
- "hay",
- "headphone",
- "heater",
- "helicopter",
- "helmet",
- "holder",
- "hook",
- "horse",
- "horse-drawn carriage",
- "hot-air balloon",
- "hydrovalve",
- "ice",
- "inflator pump",
- "ipod",
- "iron",
- "ironing board",
- "jar",
- "kart",
- "kettle",
- "key",
- "keyboard",
- "kitchen range",
- "kite",
- "knife",
- "knife block",
- "ladder",
- "ladder truck",
- "ladle",
- "laptop",
- "leaves",
- "lid",
- "life buoy",
- "light",
- "light bulb",
- "lighter",
- "line",
- "lion",
- "lobster",
- "lock",
- "machine",
- "mailbox",
- "mannequin",
- "map",
- "mask",
- "mat",
- "match book",
- "mattress",
- "menu",
- "metal",
- "meter box",
- "microphone",
- "microwave",
- "mirror",
- "missile",
- "model",
- "money",
- "monkey",
- "mop",
- "motorbike",
- "mountain",
- "mouse",
- "mouse pad",
- "musical instrument",
- "napkin",
- "net",
- "newspaper",
- "oar",
- "ornament",
- "outlet",
- "oven",
- "oxygen bottle",
- "pack",
- "pan",
- "paper",
- "paper box",
- "paper cutter",
- "parachute",
- "parasol",
- "parterre",
- "patio",
- "pelage",
- "pen",
- "pen container",
- "pencil",
- "person",
- "photo",
- "piano",
- "picture",
- "pig",
- "pillar",
- "pillow",
- "pipe",
- "pitcher",
- "plant",
- "plastic",
- "plate",
- "platform",
- "player",
- "playground",
- "pliers",
- "plume",
- "poker",
- "poker chip",
- "pole",
- "pool table",
- "postcard",
- "poster",
- "pot",
- "pottedplant",
- "printer",
- "projector",
- "pumpkin",
- "rabbit",
- "racket",
- "radiator",
- "radio",
- "rail",
- "rake",
- "ramp",
- "range hood",
- "receiver",
- "recorder",
- "recreational machines",
- "remote control",
- "road",
- "robot",
- "rock",
- "rocket",
- "rocking horse",
- "rope",
- "rug",
- "ruler",
- "runway",
- "saddle",
- "sand",
- "saw",
- "scale",
- "scanner",
- "scissors",
- "scoop",
- "screen",
- "screwdriver",
- "sculpture",
- "scythe",
- "sewer",
- "sewing machine",
- "shed",
- "sheep",
- "shell",
- "shelves",
- "shoe",
- "shopping cart",
- "shovel",
- "sidecar",
- "sidewalk",
- "sign",
- "signal light",
- "sink",
- "skateboard",
- "ski",
- "sky",
- "sled",
- "slippers",
- "smoke",
- "snail",
- "snake",
- "snow",
- "snowmobiles",
- "sofa",
- "spanner",
- "spatula",
- "speaker",
- "speed bump",
- "spice container",
- "spoon",
- "sprayer",
- "squirrel",
- "stage",
- "stair",
- "stapler",
- "stick",
- "sticky note",
- "stone",
- "stool",
- "stove",
- "straw",
- "stretcher",
- "sun",
- "sunglass",
- "sunshade",
- "surveillance camera",
- "swan",
- "sweeper",
- "swim ring",
- "swimming pool",
- "swing",
- "switch",
- "table",
- "tableware",
- "tank",
- "tap",
- "tape",
- "tarp",
- "telephone",
- "telephone booth",
- "tent",
- "tire",
- "toaster",
- "toilet",
- "tong",
- "tool",
- "toothbrush",
- "towel",
- "toy",
- "toy car",
- "track",
- "train",
- "trampoline",
- "trash bin",
- "tray",
- "tree",
- "tricycle",
- "tripod",
- "trophy",
- "truck",
- "tube",
- "turtle",
- "tvmonitor",
- "tweezers",
- "typewriter",
- "umbrella",
- "unknown",
- "vacuum cleaner",
- "vending machine",
- "video camera",
- "video game console",
- "video player",
- "video tape",
- "violin",
- "wakeboard",
- "wall",
- "wallet",
- "wardrobe",
- "washing machine",
- "watch",
- "water",
- "water dispenser",
- "water pipe",
- "water skate board",
- "watermelon",
- "whale",
- "wharf",
- "wheel",
- "wheelchair",
- "window",
- "window blinds",
- "wineglass",
- "wire",
- "wood",
- "wool",
-
-)
-
-
-def _get_voc_meta(cat_list):
- ret = {
- "stuff_classes": cat_list,
- }
- return ret
-
-
-def register_pascal_context_59(root):
- root = os.path.join(root, "VOCdevkit/VOC2010")
- meta = _get_voc_meta(PASCALCONTEX59_NAMES)
- for name, image_dirname, sem_seg_dirname in [
- ("val", "JPEGImages", "annotations_detectron2/pc59_val"),
- ]:
- image_dir = os.path.join(root, image_dirname)
- gt_dir = os.path.join(root, sem_seg_dirname)
- all_name = f"pascal_context_59_sem_seg_{name}"
- DatasetCatalog.register(
- all_name,
- lambda x=image_dir, y=gt_dir: load_sem_seg(
- y, x, gt_ext="png", image_ext="jpg"
- ),
- )
- MetadataCatalog.get(all_name).set(
- image_root=image_dir,
- sem_seg_root=gt_dir,
- evaluator_type="sem_seg",
- ignore_label=255,
- **meta,
- )
-
-def register_pascal_context_459(root):
- root = os.path.join(root, "VOCdevkit/VOC2010")
- meta = _get_voc_meta(PASCALCONTEX459_NAMES)
- for name, image_dirname, sem_seg_dirname in [
- ("val", "JPEGImages", "annotations_detectron2/pc459_val"),
- ]:
- image_dir = os.path.join(root, image_dirname)
- gt_dir = os.path.join(root, sem_seg_dirname)
- all_name = f"pascal_context_459_sem_seg_{name}"
- DatasetCatalog.register(
- all_name,
- lambda x=image_dir, y=gt_dir: load_sem_seg(
- y, x, gt_ext="tif", image_ext="jpg"
- ),
- )
- MetadataCatalog.get(all_name).set(
- image_root=image_dir,
- sem_seg_root=gt_dir,
- evaluator_type="sem_seg",
- ignore_label=65535, # NOTE: gt is saved in 16-bit TIFF images
- **meta,
- )
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_pascal_context_59(_root)
-register_pascal_context_459(_root)
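Both register functions above bind `image_dir`/`gt_dir` through lambda *default arguments* (`lambda x=image_dir, y=gt_dir: ...`). That is deliberate: defaults are evaluated when the lambda is defined, whereas a plain closure would see whatever the loop variable holds at call time. A minimal demonstration of the difference:

```python
# Late binding: every closure reads the loop variable's final value.
makers_buggy = [lambda: name for name in ["train", "val"]]

# Default-argument trick (as in the DatasetCatalog.register calls above):
# each lambda freezes the current value at definition time.
makers_fixed = [lambda n=name: n for name in ["train", "val"]]

print([f() for f in makers_buggy])  # -> ['val', 'val']
print([f() for f in makers_fixed])  # -> ['train', 'val']
```

Without this trick, every registered dataset would lazily load the directories of the last loop iteration.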
diff --git a/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py b/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py
deleted file mode 100644
index 07abfca24e96eced7f13bdefd3212ce1b77b8999..0000000000000000000000000000000000000000
--- a/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from functools import partial
-
-from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer
-
-
-def build_sam_vit_h(checkpoint=None):
- return _build_sam(
- encoder_embed_dim=1280,
- encoder_depth=32,
- encoder_num_heads=16,
- encoder_global_attn_indexes=[7, 15, 23, 31],
- checkpoint=checkpoint,
- )
-
-
-build_sam = build_sam_vit_h
-
-
-def build_sam_vit_l(checkpoint=None):
- return _build_sam(
- encoder_embed_dim=1024,
- encoder_depth=24,
- encoder_num_heads=16,
- encoder_global_attn_indexes=[5, 11, 17, 23],
- checkpoint=checkpoint,
- )
-
-
-def build_sam_vit_b(checkpoint=None):
- return _build_sam(
- encoder_embed_dim=768,
- encoder_depth=12,
- encoder_num_heads=12,
- encoder_global_attn_indexes=[2, 5, 8, 11],
- checkpoint=checkpoint,
- )
-
-
-sam_model_registry = {
- "default": build_sam,
- "vit_h": build_sam,
- "vit_l": build_sam_vit_l,
- "vit_b": build_sam_vit_b,
-}
-
-
-def _build_sam(
- encoder_embed_dim,
- encoder_depth,
- encoder_num_heads,
- encoder_global_attn_indexes,
- checkpoint=None,
-):
- prompt_embed_dim = 256
- image_size = 1024
- vit_patch_size = 16
- image_embedding_size = image_size // vit_patch_size
- sam = Sam(
- image_encoder=ImageEncoderViT(
- depth=encoder_depth,
- embed_dim=encoder_embed_dim,
- img_size=image_size,
- mlp_ratio=4,
- norm_layer=partial(torch.nn.LayerNorm, eps=1e-6),
- num_heads=encoder_num_heads,
- patch_size=vit_patch_size,
- qkv_bias=True,
- use_rel_pos=True,
- global_attn_indexes=encoder_global_attn_indexes,
- window_size=14,
- out_chans=prompt_embed_dim,
- ),
- prompt_encoder=PromptEncoder(
- embed_dim=prompt_embed_dim,
- image_embedding_size=(image_embedding_size, image_embedding_size),
- input_image_size=(image_size, image_size),
- mask_in_chans=16,
- ),
- mask_decoder=MaskDecoder(
- num_multimask_outputs=3,
- transformer=TwoWayTransformer(
- depth=2,
- embedding_dim=prompt_embed_dim,
- mlp_dim=2048,
- num_heads=8,
- ),
- transformer_dim=prompt_embed_dim,
- iou_head_depth=3,
- iou_head_hidden_dim=256,
- ),
- pixel_mean=[123.675, 116.28, 103.53],
- pixel_std=[58.395, 57.12, 57.375],
- )
- sam.eval()
- if checkpoint is not None:
- with open(checkpoint, "rb") as f:
- state_dict = torch.load(f)
- sam.load_state_dict(state_dict)
- return sam
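`sam_model_registry` maps a model-type string to its builder, so callers select a variant with one lookup, e.g. `sam_model_registry["vit_b"](checkpoint=...)`. A hedged sketch of the same registry pattern using a decorator; the builder body and checkpoint path are illustrative stand-ins, not the deleted module's real construction:

```python
registry = {}

def register(name):
    """Decorator that records a builder function under a string key."""
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register("vit_b")
def build_vit_b(checkpoint=None):
    # Stand-in for _build_sam(encoder_embed_dim=768, ...); a real builder
    # would construct the model and load the checkpoint's state dict here.
    return {"embed_dim": 768, "checkpoint": checkpoint}

model = registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical path
print(model["embed_dim"])  # -> 768
```

Keeping `"default"` as an alias (as `build_sam = build_sam_vit_h` does above) lets callers omit the variant entirely.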
diff --git a/spaces/mms-meta/MMS/lid.py b/spaces/mms-meta/MMS/lid.py
deleted file mode 100644
index 7d0c96248ef2c85788348874618bf8cc1b088d69..0000000000000000000000000000000000000000
--- a/spaces/mms-meta/MMS/lid.py
+++ /dev/null
@@ -1,73 +0,0 @@
-from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
-import torch
-import librosa
-
-model_id = "facebook/mms-lid-1024"
-
-processor = AutoFeatureExtractor.from_pretrained(model_id)
-model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
-
-
-LID_SAMPLING_RATE = 16_000
-LID_TOPK = 10
-LID_THRESHOLD = 0.33
-
-LID_LANGUAGES = {}
-with open(f"data/lid/all_langs.tsv") as f:
- for line in f:
- iso, name = line.split(" ", 1)
- LID_LANGUAGES[iso] = name
-
-
-def identify(audio_source=None, microphone=None, file_upload=None):
- if audio_source is None and microphone is None and file_upload is None:
- # HACK: all inputs can be None (e.g. when Gradio pre-runs the examples); return an empty result
- return {}
-
- if type(microphone) is dict:
- # HACK: microphone variable is a dict when running on examples
- microphone = microphone["name"]
- audio_fp = (
- file_upload if "upload" in str(audio_source or "").lower() else microphone
- )
- if audio_fp is None:
- return "ERROR: You have to either use the microphone or upload an audio file"
-
- audio_samples = librosa.load(audio_fp, sr=LID_SAMPLING_RATE, mono=True)[0]
-
- inputs = processor(
- audio_samples, sampling_rate=LID_SAMPLING_RATE, return_tensors="pt"
- )
-
- # set device
- if torch.cuda.is_available():
- device = torch.device("cuda")
- elif (
- hasattr(torch.backends, "mps")
- and torch.backends.mps.is_available()
- and torch.backends.mps.is_built()
- ):
- device = torch.device("mps")
- else:
- device = torch.device("cpu")
-
- model.to(device)
- inputs = inputs.to(device)
-
- with torch.no_grad():
- logit = model(**inputs).logits
-
- logit_lsm = torch.log_softmax(logit.squeeze(), dim=-1)
- scores, indices = torch.topk(logit_lsm, LID_TOPK, dim=-1)
- scores, indices = torch.exp(scores).to("cpu").tolist(), indices.to("cpu").tolist()
- iso2score = {model.config.id2label[int(i)]: s for s, i in zip(scores, indices)}
- if max(iso2score.values()) < LID_THRESHOLD:
- return "Low confidence in the language identification predictions. Output is not shown!"
- return {LID_LANGUAGES[iso]: score for iso, score in iso2score.items()}
-
-
-LID_EXAMPLES = [
- [None, "./assets/english.mp3", None],
- [None, "./assets/tamil.mp3", None],
- [None, "./assets/burmese.mp3", None],
-]
diff --git a/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md b/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md
deleted file mode 100644
index 5421bfe3e67b7b6cbd7baf96b741b539d65bb0fd..0000000000000000000000000000000000000000
--- a/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation
-
-## Description
-Official implementation of the pSp paper, for both training and evaluation. The pSp method extends the StyleGAN model to
-allow solving different image-to-image translation problems using its encoder.
-
-Forked from [https://github.com/eladrich/pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel).
-
-In VToonify, we modify pSp to accept the z+ latent code.
diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py b/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py
deleted file mode 100644
index 6a5029a48faea2426d7a0277655a2c7c08c1d16c..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import numpy as np
-import torch
-from examples.speech_recognition.data.collaters import Seq2SeqCollater
-
-
-class TestSeq2SeqCollator(unittest.TestCase):
- def test_collate(self):
-
- eos_idx = 1
- pad_idx = 0
- collater = Seq2SeqCollater(
- feature_index=0, label_index=1, pad_index=pad_idx, eos_index=eos_idx
- )
-
- # 2 frames in the first sample and 3 frames in the second one
- frames1 = np.array([[7, 8], [9, 10]])
- frames2 = np.array([[1, 2], [3, 4], [5, 6]])
- target1 = np.array([4, 2, 3, eos_idx])
- target2 = np.array([3, 2, eos_idx])
- sample1 = {"id": 0, "data": [frames1, target1]}
- sample2 = {"id": 1, "data": [frames2, target2]}
- batch = collater.collate([sample1, sample2])
-
- # collate() sorts inputs by descending frame length before creating the batch
- self.assertTensorEqual(batch["id"], torch.tensor([1, 0]))
- self.assertEqual(batch["ntokens"], 7)
- self.assertTensorEqual(
- batch["net_input"]["src_tokens"],
- torch.tensor(
- [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [pad_idx, pad_idx]]]
- ),
- )
- self.assertTensorEqual(
- batch["net_input"]["prev_output_tokens"],
- torch.tensor([[eos_idx, 3, 2, pad_idx], [eos_idx, 4, 2, 3]]),
- )
- self.assertTensorEqual(batch["net_input"]["src_lengths"], torch.tensor([3, 2]))
- self.assertTensorEqual(
- batch["target"],
- torch.tensor([[3, 2, eos_idx, pad_idx], [4, 2, 3, eos_idx]]),
- )
- self.assertEqual(batch["nsentences"], 2)
-
- def assertTensorEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertEqual(t1.ne(t2).long().sum(), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py b/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py
deleted file mode 100644
index d705504e5bc7a8938e1b5fcfb207f4cb731c866b..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py
+++ /dev/null
@@ -1,170 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import enum
-
-import gradio as gr
-from huggingface_hub import HfApi
-
-from constants import MODEL_LIBRARY_ORG_NAME, UploadTarget
-from inference import InferencePipeline
-from utils import find_exp_dirs
-
-
-class ModelSource(enum.Enum):
- HUB_LIB = UploadTarget.MODEL_LIBRARY.value
- LOCAL = 'Local'
-
-
-class InferenceUtil:
- def __init__(self, hf_token: str | None):
- self.hf_token = hf_token
-
- def load_hub_model_list(self) -> dict:
- api = HfApi(token=self.hf_token)
- choices = [
- info.modelId
- for info in api.list_models(author=MODEL_LIBRARY_ORG_NAME)
- ]
- return gr.update(choices=choices,
- value=choices[0] if choices else None)
-
- @staticmethod
- def load_local_model_list() -> dict:
- choices = find_exp_dirs()
- return gr.update(choices=choices,
- value=choices[0] if choices else None)
-
- def reload_model_list(self, model_source: str) -> dict:
- if model_source == ModelSource.HUB_LIB.value:
- return self.load_hub_model_list()
- elif model_source == ModelSource.LOCAL.value:
- return self.load_local_model_list()
- else:
- raise ValueError
-
- def load_model_info(self, model_id: str) -> tuple[str, str]:
- try:
- card = InferencePipeline.get_model_card(model_id, self.hf_token)
- except Exception:
- return '', ''
- base_model = getattr(card.data, 'base_model', '')
- training_prompt = getattr(card.data, 'training_prompt', '')
- return base_model, training_prompt
-
- def reload_model_list_and_update_model_info(
- self, model_source: str) -> tuple[dict, str, str]:
- model_list_update = self.reload_model_list(model_source)
- model_list = model_list_update['choices']
- model_info = self.load_model_info(model_list[0] if model_list else '')
- return model_list_update, *model_info
-
-
-def create_inference_demo(pipe: InferencePipeline,
- hf_token: str | None = None) -> gr.Blocks:
- app = InferenceUtil(hf_token)
-
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- with gr.Box():
- model_source = gr.Radio(
- label='Model Source',
- choices=[_.value for _ in ModelSource],
- value=ModelSource.HUB_LIB.value)
- reload_button = gr.Button('Reload Model List')
- model_id = gr.Dropdown(label='Model ID',
- choices=None,
- value=None)
- with gr.Accordion(
- label=
- 'Model info (Base model and prompt used for training)',
- open=False):
- with gr.Row():
- base_model_used_for_training = gr.Text(
- label='Base model', interactive=False)
- prompt_used_for_training = gr.Text(
- label='Training prompt', interactive=False)
- prompt = gr.Textbox(
- label='Prompt',
- max_lines=1,
- placeholder='Example: "A panda is surfing"')
- video_length = gr.Slider(label='Video length',
- minimum=4,
- maximum=12,
- step=1,
- value=8)
- fps = gr.Slider(label='FPS',
- minimum=1,
- maximum=12,
- step=1,
- value=1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=100000,
- step=1,
- value=0)
- with gr.Accordion('Other Parameters', open=False):
- num_steps = gr.Slider(label='Number of Steps',
- minimum=0,
- maximum=100,
- step=1,
- value=50)
- guidance_scale = gr.Slider(label='CFG Scale',
- minimum=0,
- maximum=50,
- step=0.1,
- value=7.5)
-
- run_button = gr.Button('Generate')
-
- gr.Markdown('''
- - After training, you can press the "Reload Model List" button to load your trained model names.
- - The first run takes a few minutes because the model has to be downloaded.
- - Expected time to generate an 8-frame video: 70 seconds on a T4, 24 seconds on an A10G, 10 seconds on an A100.
- ''')
- with gr.Column():
- result = gr.Video(label='Result')
-
- model_source.change(fn=app.reload_model_list_and_update_model_info,
- inputs=model_source,
- outputs=[
- model_id,
- base_model_used_for_training,
- prompt_used_for_training,
- ])
- reload_button.click(fn=app.reload_model_list_and_update_model_info,
- inputs=model_source,
- outputs=[
- model_id,
- base_model_used_for_training,
- prompt_used_for_training,
- ])
- model_id.change(fn=app.load_model_info,
- inputs=model_id,
- outputs=[
- base_model_used_for_training,
- prompt_used_for_training,
- ])
- inputs = [
- model_id,
- prompt,
- video_length,
- fps,
- seed,
- num_steps,
- guidance_scale,
- ]
- prompt.submit(fn=pipe.run, inputs=inputs, outputs=result)
- run_button.click(fn=pipe.run, inputs=inputs, outputs=result)
- return demo
-
-
-if __name__ == '__main__':
- import os
-
- hf_token = os.getenv('HF_TOKEN')
- pipe = InferencePipeline(hf_token)
- demo = create_inference_demo(pipe, hf_token)
- demo.queue(max_size=10).launch(share=False)
diff --git a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py b/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py
deleted file mode 100644
index 8db8915b109953931fa2a330a7731db4a51b44f8..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py
+++ /dev/null
@@ -1,453 +0,0 @@
-from torch.functional import Tensor
-
-import torch
-import inspect
-import json
-import yaml
-import time
-import sys
-
-from general_utils import log
-
-import numpy as np
-from os.path import expanduser, join, isfile, realpath
-
-from torch.utils.data import DataLoader
-
-from metrics import FixedIntervalMetrics
-
-from general_utils import log, score_config_from_cli_args, AttributeDict, get_attribute, filter_args  # note: load_model is redefined below
-
-
-DATASET_CACHE = dict()
-
-def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False, ignore_weights=False):
-
- config = json.load(open(join('logs', checkpoint_id, 'config.json')))
-
- if model_args != 'from_config' and type(model_args) != dict:
- raise ValueError('model_args must either be "from_config" or a dictionary of values')
-
- model_cls = get_attribute(config['model'])
-
- # load model
- if model_args == 'from_config':
- _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters)
-
- model = model_cls(**model_args)
-
- if weights_file is None:
- weights_file = realpath(join('logs', checkpoint_id, 'weights.pth'))
- else:
- weights_file = realpath(join('logs', checkpoint_id, weights_file))
-
- if isfile(weights_file) and not ignore_weights:
- weights = torch.load(weights_file)
- for _, w in weights.items():
- assert not torch.any(torch.isnan(w)), 'weights contain NaNs'
- model.load_state_dict(weights, strict=strict)
- else:
- if not ignore_weights:
- raise FileNotFoundError(f'model checkpoint {weights_file} was not found')
-
- if with_config:
- return model, config
-
- return model
-
-
-def compute_shift2(model, datasets, seed=123, repetitions=1):
- """ computes shift """
-
- model.eval()
- model.cuda()
-
- import random
- random.seed(seed)
-
- preds, gts = [], []
- for i_dataset, dataset in enumerate(datasets):
-
- loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False)
-
- max_iterations = int(repetitions * len(dataset.dataset.data_list))
-
- with torch.no_grad():
-
- i, losses = 0, []
- for i_all, (data_x, data_y) in enumerate(loader):
-
- data_x = [v.cuda(non_blocking=True) if v is not None else v for v in data_x]
- data_y = [v.cuda(non_blocking=True) if v is not None else v for v in data_y]
-
- pred, = model(data_x[0], data_x[1], data_x[2])
- preds += [pred.detach()]
- gts += [data_y]
-
- i += 1
- if max_iterations and i >= max_iterations:
- break
-
- from metrics import FixedIntervalMetrics
- n_values = 51
- thresholds = np.linspace(0, 1, n_values)[1:-1]
- metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, n_values=n_values)
-
- for p, y in zip(preds, gts):
- metric.add(p.unsqueeze(1), y)
-
- best_idx = np.argmax(metric.value()['fgiou_scores'])
- best_thresh = thresholds[best_idx]
-
- return best_thresh
-
-
-def get_cached_pascal_pfe(split, config):
- from datasets.pfe_dataset import PFEPascalWrapper
- try:
- dataset = DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)]
- except KeyError:
- dataset = PFEPascalWrapper(mode='val', split=split, mask=config.mask, image_size=config.image_size, label_support=config.label_support)
- DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] = dataset
- return dataset
-
-
-
-
-def main():
- config, train_checkpoint_id = score_config_from_cli_args()
-
- metrics = score(config, train_checkpoint_id, None)
-
- for dataset in metrics.keys():
- for k in metrics[dataset]:
- if type(metrics[dataset][k]) in {float, int}:
- print(dataset, f'{k:<16} {metrics[dataset][k]:.3f}')
-
-
-def score(config, train_checkpoint_id, train_config):
-
- config = AttributeDict(config)
-
- print(config)
-
- # use training dataset and loss
- train_config = AttributeDict(json.load(open(f'logs/{train_checkpoint_id}/config.json')))
-
- cp_str = f'_{config.iteration_cp}' if config.iteration_cp is not None else ''
-
-
- model_cls = get_attribute(train_config['model'])
-
- _, model_args, _ = filter_args(train_config, inspect.signature(model_cls).parameters)
-
- model_args = {**model_args, **{k: config[k] for k in ['process_cond', 'fix_shift'] if k in config}}
-
- strict_models = {'ConditionBase4', 'PFENetWrapper'}
- model = load_model(train_checkpoint_id, strict=model_cls.__name__ in strict_models, model_args=model_args,
- weights_file=f'weights{cp_str}.pth', )
-
-
- model.eval()
- model.cuda()
-
- metric_args = dict()
-
- if 'threshold' in config:
- if config.metric.split('.')[-1] == 'SkLearnMetrics':
- metric_args['threshold'] = config.threshold
-
- if 'resize_to' in config:
- metric_args['resize_to'] = config.resize_to
-
- if 'sigmoid' in config:
- metric_args['sigmoid'] = config.sigmoid
-
- if 'custom_threshold' in config:
- metric_args['custom_threshold'] = config.custom_threshold
-
- if config.test_dataset == 'pascal':
-
- loss_fn = get_attribute(train_config.loss)
- # assume that if no split is specified in train_config, test on all splits,
-
- if 'splits' in config:
- splits = config.splits
- else:
- if 'split' in train_config and type(train_config.split) == int:
- # unless train_config has a split set, in that case assume train mode in training
- splits = [train_config.split]
- assert train_config.mode == 'train'
- else:
- splits = [0,1,2,3]
-
- log.info('Test on these splits', splits)
-
- scores = dict()
- for split in splits:
-
- shift = config.shift if 'shift' in config else 0
-
- # automatic shift
- if shift == 'auto':
- shift_compute_t = time.time()
- shift = compute_shift2(model, [get_cached_pascal_pfe(s, config) for s in range(4) if s != split], repetitions=config.compute_shift_fac)
- log.info(f'Best threshold is {shift}, computed on splits: {[s for s in range(4) if s != split]}, took {time.time() - shift_compute_t:.1f}s')
-
- dataset = get_cached_pascal_pfe(split, config)
-
- eval_start_t = time.time()
-
- loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False)
-
- assert config.batch_size is None or config.batch_size == 1, 'When PFE Dataset is used, batch size must be 1'
-
- metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, custom_threshold=shift, **metric_args)
-
- with torch.no_grad():
-
- i, losses = 0, []
- for i_all, (data_x, data_y) in enumerate(loader):
-
- data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x]
- data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y]
-
- if config.mask == 'separate': # for old CondBase model
- pred, = model(data_x[0], data_x[1], data_x[2])
- else:
- # assert config.mask in {'text', 'highlight'}
- pred, _, _, _ = model(data_x[0], data_x[1], return_features=True)
-
- # loss = loss_fn(pred, data_y[0])
- metric.add(pred.unsqueeze(1) + shift, data_y)
-
- # losses += [float(loss)]
-
- i += 1
- if config.max_iterations and i >= config.max_iterations:
- break
-
- #scores[split] = {m: s for m, s in zip(metric.names(), metric.value())}
-
- log.info(f'Dataset length: {len(dataset)}, took {time.time() - eval_start_t:.1f}s to evaluate.')
-
- print(metric.value()['mean_iou_scores'])
-
- scores[split] = metric.scores()
-
- log.info(f'Completed split {split}')
-
- key_prefix = config['name'] if 'name' in config else 'pas'
-
- all_keys = set.intersection(*[set(v.keys()) for v in scores.values()])
-
- valid_keys = [k for k in all_keys if all(v[k] is not None and isinstance(v[k], (int, float, np.floating)) for v in scores.values())]
-
- return {key_prefix: {k: np.mean([s[k] for s in scores.values()]) for k in valid_keys}}
-
-
- if config.test_dataset == 'coco':
- from datasets.coco_wrapper import COCOWrapper
-
- coco_dataset = COCOWrapper('test', fold=train_config.fold, image_size=train_config.image_size, mask=config.mask,
- with_class_label=True)
-
- log.info('Dataset length', len(coco_dataset))
- loader = DataLoader(coco_dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False)
-
- metric = get_attribute(config.metric)(resize_pred=True, **metric_args)
-
- shift = config.shift if 'shift' in config else 0
-
- with torch.no_grad():
-
- i, losses = 0, []
- for i_all, (data_x, data_y) in enumerate(loader):
- data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x]
- data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y]
-
- if config.mask == 'separate': # for old CondBase model
- pred, = model(data_x[0], data_x[1], data_x[2])
- else:
- # assert config.mask in {'text', 'highlight'}
- pred, _, _, _ = model(data_x[0], data_x[1], return_features=True)
-
- metric.add([pred + shift], data_y)
-
- i += 1
- if config.max_iterations and i >= config.max_iterations:
- break
-
- key_prefix = config['name'] if 'name' in config else 'coco'
- return {key_prefix: metric.scores()}
- #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}}
-
-
- if config.test_dataset == 'phrasecut':
- from datasets.phrasecut import PhraseCut
-
- only_visual = config.only_visual is not None and config.only_visual
- with_visual = config.with_visual is not None and config.with_visual
-
- dataset = PhraseCut('test',
- image_size=train_config.image_size,
- mask=config.mask,
- with_visual=with_visual, only_visual=only_visual, aug_crop=False,
- aug_color=False)
-
- loader = DataLoader(dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False)
- metric = get_attribute(config.metric)(resize_pred=True, **metric_args)
-
- shift = config.shift if 'shift' in config else 0
-
-
- with torch.no_grad():
-
- i, losses = 0, []
- for i_all, (data_x, data_y) in enumerate(loader):
- data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x]
- data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y]
-
- pred, _, _, _ = model(data_x[0], data_x[1], return_features=True)
- metric.add([pred + shift], data_y)
-
- i += 1
- if config.max_iterations and i >= config.max_iterations:
- break
-
- key_prefix = config['name'] if 'name' in config else 'phrasecut'
- return {key_prefix: metric.scores()}
- #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}}
-
- if config.test_dataset == 'pascal_zs':
- from third_party.JoEm.model.metric import Evaluator
- from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC
- from datasets.pascal_zeroshot import PascalZeroShot, PASCAL_VOC_CLASSES_ZS
-
- from models.clipseg import CLIPSegMultiLabel
-
- n_unseen = train_config.remove_classes[1]
-
- pz = PascalZeroShot('val', n_unseen, image_size=352)
- m = CLIPSegMultiLabel(model=train_config.name).cuda()
- m.eval()
-
- print(len(pz), n_unseen)
- print('training removed', [c for class_set in PASCAL_VOC_CLASSES_ZS[:n_unseen // 2] for c in class_set])
-
- print('unseen', [VOC[i] for i in get_unseen_idx(n_unseen)])
- print('seen', [VOC[i] for i in get_seen_idx(n_unseen)])
-
- loader = DataLoader(pz, batch_size=8)
- evaluator = Evaluator(21, get_unseen_idx(n_unseen), get_seen_idx(n_unseen))
-
- for i, (data_x, data_y) in enumerate(loader):
- pred = m(data_x[0].cuda())
- evaluator.add_batch(data_y[0].numpy(), pred.argmax(1).cpu().detach().numpy())
-
- if config.max_iter is not None and i > config.max_iter:
- break
-
- scores = evaluator.Mean_Intersection_over_Union()
- key_prefix = config['name'] if 'name' in config else 'pas_zs'
-
- return {key_prefix: {k: scores[k] for k in ['seen', 'unseen', 'harmonic', 'overall']}}
-
- elif config.test_dataset in {'same_as_training', 'affordance'}:
- loss_fn = get_attribute(train_config.loss)
-
- metric_cls = get_attribute(config.metric)
- metric = metric_cls(**metric_args)
-
- if config.test_dataset == 'same_as_training':
- dataset_cls = get_attribute(train_config.dataset)
- elif config.test_dataset == 'affordance':
- dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_Affordance')
- dataset_name = 'aff'
- else:
- dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_OneShot')
- dataset_name = 'lvis'
-
- _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters)
-
- dataset_args['image_size'] = train_config.image_size # explicitly use training image size for evaluation
-
- if model.__class__.__name__ == 'PFENetWrapper':
- dataset_args['image_size'] = config.image_size
-
- log.info('init dataset', str(dataset_cls))
- dataset = dataset_cls(**dataset_args)
-
- log.info(f'Score on {model.__class__.__name__} on {dataset_cls.__name__}')
-
- data_loader = torch.utils.data.DataLoader(dataset, batch_size=config.batch_size, shuffle=config.shuffle)
-
- # explicitly set prompts
- if config.prompt == 'plain':
- model.prompt_list = ['{}']
- elif config.prompt == 'fixed':
- model.prompt_list = ['a photo of a {}.']
- elif config.prompt == 'shuffle':
- model.prompt_list = ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.']
- elif config.prompt == 'shuffle_clip':
- from models.clip_prompts import imagenet_templates
- model.prompt_list = imagenet_templates
-
- config.assume_no_unused_keys(exceptions=['max_iterations'])
-
- t_start = time.time()
-
- with torch.no_grad(): # TODO: switch to inference_mode (torch 1.9)
- i, losses = 0, []
- for data_x, data_y in data_loader:
-
- data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x]
- data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y]
-
- if model.__class__.__name__ in {'ConditionBase4', 'PFENetWrapper'}:
- pred, = model(data_x[0], data_x[1], data_x[2])
- visual_q = None
- else:
- pred, visual_q, _, _ = model(data_x[0], data_x[1], return_features=True)
-
- loss = loss_fn(pred, data_y[0])
-
- metric.add([pred], data_y)
-
- losses += [float(loss)]
-
- i += 1
- if config.max_iterations and i >= config.max_iterations:
- break
-
- # scores = {m: s for m, s in zip(metric.names(), metric.value())}
- scores = metric.scores()
-
- keys = set(scores.keys())
- if dataset.negative_prob > 0 and 'mIoU' in keys:
- keys.remove('mIoU')
-
- name_mask = dataset.mask.replace('text_label', 'txt')[:3]
- name_neg = '' if dataset.negative_prob == 0 else '_' + str(dataset.negative_prob)
-
- score_name = config.name if 'name' in config else f'{dataset_name}_{name_mask}{name_neg}'
-
- scores = {score_name: {k: v for k,v in scores.items() if k in keys}}
- scores[score_name].update({'test_loss': np.mean(losses)})
-
- log.info(f'Evaluation took {time.time() - t_start:.1f}s')
-
- return scores
- else:
- raise ValueError('invalid test dataset')
-
-
-
-
-
-
-
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js
deleted file mode 100644
index 098f6686f063bf6c631df4f5f3b5921d48ed2d2a..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js
+++ /dev/null
@@ -1,84 +0,0 @@
-// Copyright (c) Meta Platforms, Inc. and affiliates.
-// All rights reserved.
-
-// This source code is licensed under the license found in the
-// LICENSE file in the root directory of this source tree.
-
-const { resolve } = require("path");
-const HtmlWebpackPlugin = require("html-webpack-plugin");
-const FriendlyErrorsWebpackPlugin = require("friendly-errors-webpack-plugin");
-const CopyPlugin = require("copy-webpack-plugin");
-const webpack = require("webpack");
-
-module.exports = {
- entry: "./src/index.tsx",
- resolve: {
- extensions: [".js", ".jsx", ".ts", ".tsx"],
- },
- output: {
- path: resolve(__dirname, "dist"),
- },
- module: {
- rules: [
- {
- test: /\.mjs$/,
- include: /node_modules/,
- type: "javascript/auto",
- resolve: {
- fullySpecified: false,
- },
- },
- {
- test: [/\.jsx?$/, /\.tsx?$/],
- use: ["ts-loader"],
- exclude: /node_modules/,
- },
- {
- test: /\.css$/,
- use: ["style-loader", "css-loader"],
- },
- {
- test: /\.(scss|sass)$/,
- use: ["style-loader", "css-loader", "postcss-loader"],
- },
- {
- test: /\.(jpe?g|png|gif|svg)$/i,
- use: [
- "file-loader?hash=sha512&digest=hex&name=img/[contenthash].[ext]",
- "image-webpack-loader?bypassOnDebug&optipng.optimizationLevel=7&gifsicle.interlaced=false",
- ],
- },
- {
- test: /\.(woff|woff2|ttf)$/,
- use: {
- loader: "url-loader",
- },
- },
- ],
- },
- plugins: [
- new CopyPlugin({
- patterns: [
- {
- from: "node_modules/onnxruntime-web/dist/*.wasm",
- to: "[name][ext]",
- },
- {
- from: "model",
- to: "model",
- },
- {
- from: "src/assets",
- to: "assets",
- },
- ],
- }),
- new HtmlWebpackPlugin({
- template: "./src/assets/index.html",
- }),
- new FriendlyErrorsWebpackPlugin(),
- new webpack.ProvidePlugin({
- process: "process/browser",
- }),
- ],
-};
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html
deleted file mode 100644
index cbcd53c19953b4421dc7b4a537eef327eafd4cf1..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html
+++ /dev/null
@@ -1,18 +0,0 @@
-<!DOCTYPE html>
-<html>
-  <head>
-    <meta charset="utf-8" />
-    <title>Segment Anything Demo</title>
-  </head>
-  <body>
-    <div id="root"></div>
-  </body>
-</html>
diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh
deleted file mode 100644
index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000
--- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
-# This script will run only once on first instance start (for a re-start script see mime.sh)
-# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir
-# Use >300 GB SSD
-
-cd home/ubuntu
-if [ ! -d yolov5 ]; then
- echo "Running first-time script." # install dependencies, download COCO, pull Docker
- git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5
- cd yolov5
- bash data/scripts/get_coco.sh && echo "COCO done." &
- sudo docker pull ultralytics/yolov5:latest && echo "Docker done." &
- python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." &
- wait && echo "All tasks done." # finish background tasks
-else
- echo "Running re-start script." # resume interrupted runs
- i=0
- list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour'
- while IFS= read -r id; do
- ((i++))
- echo "restarting container $i: $id"
- sudo docker start $id
- # sudo docker exec -it $id python train.py --resume # single-GPU
- sudo docker exec -d $id python utils/aws/resume.py # multi-scenario
- done <<<"$list"
-fi
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md
deleted file mode 100644
index 6a0b153de79d8689018bbe6c53870d69e6d2700e..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
Why You Should Avoid Cogz Cmms Maintenance Software Crack 15
-
Cogz Cmms Maintenance Software is a powerful and easy-to-use solution that helps you manage your maintenance department. It automates preventive maintenance tasks, tracks work orders, manages spare parts inventory, and generates reports to optimize your maintenance efficiency. Cogz Cmms Maintenance Software has been used by thousands of customers in various industries, such as manufacturing, food processing, facilities management, transportation, government, education, health care, hospitality, and more.
However, some people may be tempted to use a cracked version of Cogz Cmms Maintenance Software, such as Cogz Cmms Maintenance Software Crack 15. This is a risky and unethical practice that can have serious consequences for your business. Here are some reasons why you should avoid using Cogz Cmms Maintenance Software Crack 15:
-
-
It is illegal. Using a cracked version of Cogz Cmms Maintenance Software is a violation of the software license agreement and a form of software piracy. You are stealing intellectual property from the software developer and depriving them of their rightful revenue. You may face legal action from the software developer or the authorities if you are caught using a cracked version of Cogz Cmms Maintenance Software.
-
It is unsafe. Using a cracked version of Cogz Cmms Maintenance Software exposes you to potential malware, viruses, spyware, ransomware, or other malicious software that may be embedded in the crack file. These can harm your computer system, compromise your data security, corrupt your files, or lock you out of your system. You may lose valuable information or incur additional costs to repair or replace your hardware or software.
-
It is unreliable. Using a cracked version of Cogz Cmms Maintenance Software may result in poor performance, errors, bugs, crashes, or compatibility issues. The crack file may not work properly with the latest updates or features of the software. You may experience frequent downtime or data loss that can affect your maintenance operations and productivity. You may also miss out on technical support, customer service, or warranty from the software developer.
-
It is unethical. Using a cracked version of Cogz Cmms Maintenance Software is unfair to the software developer who invested time, money, and effort to create a quality product that meets your maintenance needs. It is also unfair to other customers who paid for the legitimate version of the software and expect fair competition and quality service. You are undermining the trust and reputation of the software industry and harming its innovation and growth.
-
-
Therefore, you should avoid using Cogz Cmms Maintenance Software Crack 15 and instead purchase the legitimate version of Cogz Cmms Maintenance Software from the official website. You will get a fully functional and secure product that helps you take control of your maintenance department and create efficiencies, along with access to a free trial, free updates, a cloud option, technical support, customer service, and a warranty from the software developer. You will also be supporting the software industry and its ethical standards.
-
Cogz Cmms Maintenance Software is a smart investment for your maintenance department. Don't risk your business by using a cracked version of Cogz Cmms Maintenance Software. Get the real deal today!
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md
deleted file mode 100644
index 5ce5f120bd5a4c1985f613291008f4945bbd387b..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
Program Development In Java: Abstraction, Specification, And Object-Oriented Design Download Pdf - A Comprehensive Guide
-
If you are looking for a book that teaches you how to develop software using Java, you might be interested in Program Development In Java: Abstraction, Specification, And Object-Oriented Design by Barbara Liskov and John Guttag. This book covers the fundamental concepts and principles of software engineering, such as abstraction, specification, modularity, inheritance, polymorphism, and design patterns. It also shows you how to apply these concepts and principles to create high-quality Java programs that are easy to understand, maintain, and reuse.
-
Program Development In Java: Abstraction, Specification, And Object-Oriented Design Download Pdf
In this article, we will give you a brief overview of the book and its contents, as well as provide you with a link to download the pdf version for free. We will also share some of the benefits and challenges of learning program development in Java using this book.
-
What is Program Development In Java: Abstraction, Specification, And Object-Oriented Design?
-
Program Development In Java: Abstraction, Specification, And Object-Oriented Design is a textbook written by Barbara Liskov and John Guttag, two renowned computer scientists and professors at MIT. The book was published in 2000 by Addison-Wesley Professional and has been widely used in undergraduate and graduate courses on software engineering and object-oriented programming.
-
The book aims to teach students how to design and implement software systems using Java as the programming language. It focuses on the use of abstraction and specification as tools for managing complexity and ensuring correctness. It also introduces object-oriented design as a way of organizing software components into classes and interfaces that support reuse and extensibility. The book covers topics such as:
-
-
The role of specifications in software development
-
The concept of abstract data types and their implementation in Java
-
The notion of subtyping and its relation to inheritance and polymorphism
-
The design of generic classes and methods using Java generics
-
The use of exceptions and assertions for error handling and verification
-
The application of design patterns to common software problems
-
The development of graphical user interfaces using Java Swing
-
The testing and debugging of Java programs using JUnit and other tools
-
-
The book also includes several case studies that illustrate the application of the concepts and techniques discussed in the book to real-world problems. Some of the case studies are:
-
-
A text editor that supports multiple fonts and styles
-
A calculator that can evaluate arithmetic expressions
-
A bank account system that supports multiple currencies and transactions
-
A game of Tetris that uses graphics and sound effects
-
A web browser that can display HTML pages and images
-
-
How to Download Program Development In Java: Abstraction, Specification, And Object-Oriented Design Pdf?
-
If you want to download the pdf version of Program Development In Java: Abstraction, Specification, And Object-Oriented Design, you can do so by clicking on the link below. The pdf file is hosted on a third-party website that requires you to complete a short survey before downloading. The survey is free and should take only a few minutes to complete. Once you finish the survey, you will be able to access the pdf file immediately.
-
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md
deleted file mode 100644
index d83dc928c7a1164b3e8896bcfa1ef5d417ea6b80..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md
+++ /dev/null
@@ -1,163 +0,0 @@
-## Training an unconditional diffusion model
-
-Creating a training image set is [described in a different document](https://huggingface.co/docs/datasets/image_process#image-datasets).
-
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, since we update the example scripts frequently and they have example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install .
-```
-
-Then cd into the example folder and run:
-```bash
-pip install -r requirements.txt
-```
-
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-### Unconditional Flowers
-
-The command to train a DDPM UNet model on the Oxford Flowers dataset:
-
-```bash
-accelerate launch train_unconditional.py \
- --dataset_name="huggan/flowers-102-categories" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-flowers-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision=no \
- --push_to_hub
-```
-An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64
-
-A full training run takes 2 hours on 4xV100 GPUs.
-
-
-
-
-### Unconditional Pokemon
-
-The command to train a DDPM UNet model on the Pokemon dataset:
-
-```bash
-accelerate launch train_unconditional.py \
- --dataset_name="huggan/pokemon" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-pokemon-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision=no \
- --push_to_hub
-```
-An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64
-
-A full training run takes 2 hours on 4xV100 GPUs.
-
-
-
-### Training with multiple GPUs
-
-`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
-for running distributed training with `accelerate`. Here is an example command:
-
-```bash
-accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \
- --dataset_name="huggan/pokemon" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-pokemon-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision="fp16" \
- --logger="wandb"
-```
-
-To be able to use Weights and Biases (`wandb`) as a logger you need to install the library: `pip install wandb`.
-
-### Using your own data
-
-To use your own dataset, there are 2 ways:
-- you can either provide your own folder as `--train_data_dir`
-- or you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument.
-
-Below, we explain both in more detail.
-
-#### Provide the dataset as a folder
-
-If you provide your own folders with images, the script expects the following directory structure:
-
-```bash
-data_dir/xxx.png
-data_dir/xxy.png
-data_dir/[...]/xxz.png
-```
-
-In other words, the script will take care of gathering all images inside the folder. You can then run the script like this:
-
-```bash
-accelerate launch train_unconditional.py \
-    --train_data_dir <path-to-train-directory>
-```
-
-Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects.
-
-#### Upload your data to the hub, as a (possibly private) repo
-
-It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following:
-
-```python
-from datasets import load_dataset
-
-# example 1: local folder
-dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
-
-# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
-
-# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip")
-
-# example 4: providing several splits
-dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]})
-```
-
-`ImageFolder` will create an `image` column containing the PIL-encoded images.
-
-Next, push it to the hub!
-
-```python
-# assuming you have run the huggingface-cli login command in a terminal
-dataset.push_to_hub("name_of_your_dataset")
-
-# if you want to push to a private repo, simply pass private=True:
-dataset.push_to_hub("name_of_your_dataset", private=True)
-```
-
-and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub.
-
-More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets).
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py
deleted file mode 100644
index bc6ac82217a37030740b3861242932f0e9bd8dd4..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from typing import TYPE_CHECKING
-
-from ...utils import (
- OptionalDependencyNotAvailable,
- _LazyModule,
- get_objects_from_module,
- is_torch_available,
- is_transformers_available,
-)
-
-
-_dummy_objects = {}
-_import_structure = {}
-
-try:
- if not (is_transformers_available() and is_torch_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from ...utils import dummy_torch_and_transformers_objects # noqa F403
-
- _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
-else:
- _import_structure["pipeline_latent_diffusion"] = ["LDMBertModel", "LDMTextToImagePipeline"]
- _import_structure["pipeline_latent_diffusion_superresolution"] = ["LDMSuperResolutionPipeline"]
-
-
-if TYPE_CHECKING:
- try:
- if not (is_transformers_available() and is_torch_available()):
- raise OptionalDependencyNotAvailable()
-
- except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
- else:
- from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
- from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(
- __name__,
- globals()["__file__"],
- _import_structure,
- module_spec=__spec__,
- )
-
- for name, value in _dummy_objects.items():
- setattr(sys.modules[__name__], name, value)
diff --git a/spaces/paulbricman/velma/scripts/run_tweets.py b/spaces/paulbricman/velma/scripts/run_tweets.py
deleted file mode 100644
index 52e20a5cc7bacc3045cff74d980a2c4137139ed1..0000000000000000000000000000000000000000
--- a/spaces/paulbricman/velma/scripts/run_tweets.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from pathlib import Path
-import pickle
-from src.util import filter
-from src.abduction import infer
-from src.baselines import infer_embs, infer_nli
-from transformers import AutoTokenizer, AutoModelForCausalLM
-from sentence_transformers import CrossEncoder, SentenceTransformer
-import pandas as pd
-from tqdm import tqdm
-
-
-df = pd.read_csv(Path('..') / 'data' / 'tweets' / 'tweets.csv')
-users = ['nabla_theta', 'slatestarcodex', 'stuhlmueller', 'ESYudkowsky', 'ben_j_todd',
- 'ch402', 'willmacaskill', 'hardmaru', 'kenneth0stanley', 'RichardMCNgo']
-
-emb_model = SentenceTransformer('all-MiniLM-L6-v2')
-# nli_model = CrossEncoder('cross-encoder/nli-deberta-v3-base')
-lm_model = AutoModelForCausalLM.from_pretrained(
- 'gustavecortal/gpt-neo-2.7B-8bit')
-lm_tok = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-2.7B')
-print('(*) Loaded models')
-
-for user in tqdm(users):
- claim_tweets = df[df['username'] == user][pd.notna(
- df['extracted_claim'])][pd.notna(df['negated_claim'])]
-
- for approach in ['embs', 'nli_relative', 'nli_absolute', 'lm']:
- print(user, approach)
- aggregate = []
- artifact_path = Path(
- '..') / 'data' / 'tweets_artifacts' / approach / (user + '.pkl')
-
- for idx, row in claim_tweets.iterrows():
- other_tweets = df[df['username'] ==
- user][df['extracted_claim'] != row['extracted_claim']]['tweet'].values
-
- selection = filter(
- row['extracted_claim'], other_tweets, emb_model, top_k=5)
- print('(*) Filtered paragraphs')
- probs = []
-
- for tweet in selection:
- if approach == 'embs':
- probs += [infer_embs(tweet, [row['extracted_claim'],
- row['negated_claim']], encoder=emb_model)[0]]
- elif approach == 'nli_absolute':
- probs += [infer_nli(tweet,
- [row['extracted_claim']], mode='absolute')[0]]
- elif approach == 'nli_relative':
- probs += [infer_nli(tweet, [row['extracted_claim'],
- row['negated_claim']], mode='relative')[0]]
- elif approach == 'lm':
- probs += [infer(tweet, [row['extracted_claim'], row['negated_claim']],
- model=lm_model, tokenizer=lm_tok, return_components=True)]
-
- aggregate += [probs]
- pickle.dump(aggregate, open(artifact_path, 'wb'))
diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py
deleted file mode 100644
index e7250cd3c4ce887aa336be303998099e19c7644a..0000000000000000000000000000000000000000
--- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py
+++ /dev/null
@@ -1,99 +0,0 @@
-from threading import local
-import torch
-import wandb
-import numpy as np
-import PIL.Image
-from typing import Iterable
-
-from utils.val_loop_hook import ValidationLoopHook
-
-def _strip_image_from_grid_row(row, gap=5, bg=255):
- strip = torch.full(
-        (row.shape[0] * (row.shape[2] + gap) - gap,
- row.shape[1] * (row.shape[3] + gap) - gap), bg, dtype=row.dtype)
- for i in range(0, row.shape[0] * row.shape[1]):
- strip[(i // row.shape[1]) * (row.shape[2] + gap) : ((i // row.shape[1])+1) * (row.shape[2] + gap) - gap,
- (i % row.shape[1]) * (row.shape[3] + gap) : ((i % row.shape[1])+1) * (row.shape[3] + gap) - gap] = row[i // row.shape[1]][i % row.shape[1]]
- return PIL.Image.fromarray(strip.numpy())
-
-class ConfusionVisualizer(ValidationLoopHook):
- def __init__(self, image_shape: Iterable[int], num_classes: int, num_images: int = 5, num_slices: int = 8):
- self.image_shape = image_shape
- self.num_images = num_images
- self.num_classes = num_classes
- self.num_slices = num_slices
-
- self.activations = -99 * torch.ones(self.num_classes, self.num_images)
- self.images = torch.zeros(torch.Size([self.num_classes, self.num_images]) + torch.Size(self.image_shape))
-
- def process(self, batch, target_batch, logits_batch, prediction_batch):
- image_batch = batch["image"]
-
- with torch.no_grad():
- local_activations = torch.amax(logits_batch, dim=-1)
-
- # filter samples where the prediction does not line up with the target
- confused_samples = (prediction_batch != target_batch)
-
- # filter public dataset samples
- public = torch.tensor(["verse" in id for id in batch["verse_id"]]).type_as(confused_samples)
-
- mask = confused_samples & public
-
- for current_idx in torch.nonzero(mask).squeeze(1):
- target_class = target_batch[current_idx]
- # next item in local batch has a higher activation than the previous confusions for this class, replace it
- if local_activations[current_idx] > torch.min(self.activations[target_class]):
- idx_to_replace = torch.argsort(self.activations[target_class])[0]
- self.activations[target_class, idx_to_replace] = local_activations[current_idx]
- self.images[target_class, idx_to_replace] = image_batch[current_idx].cpu()
-
- def trigger(self, module):
- for class_idx in range(self.num_classes):
- # determine final order such that the highest activations are placed on top
- sorted_idx = torch.argsort(self.activations[class_idx], descending=True)
-
- self.images[class_idx] = self.images[class_idx, sorted_idx]
-
- normalize = lambda x: (x - np.min(x))/np.ptp(x)
-
- if len(self.images.shape) == 6:
- # 3D, visualize slices
- img_res = self.images[class_idx].shape[-1]
- img_slices = torch.linspace(0, img_res-1, self.num_slices+2, dtype=torch.long)[1:-1]
-
- # Show all images slices in a larger combined image
- top_confusing_samples = _strip_image_from_grid_row(
- torch.stack([
- torch.stack([
- torch.tensor(
- np.uint8(255 * normalize((self.images[class_idx, i, 0, ..., img_slices[s]]).numpy()))
- )
- for s in range(self.num_slices)])
- for i in range(self.num_images if self.num_images < self.images[class_idx].shape[0] else self.images[class_idx].shape[0])])
- )
-
- elif len(self.images.shape) == 5:
- # 2D
- top_confusing_samples = _strip_image_from_grid_row(
- torch.stack([
- torch.stack([
- torch.tensor(
- np.uint8(255 * normalize((self.images[class_idx, i, 0, ...]).numpy()))
- )
- ])
- for i in range(self.num_images if self.num_images < self.images[class_idx].shape[0] else self.images[class_idx].shape[0])])
- )
-
- else:
- raise RuntimeError("Unknown image shape found for confusion visualization")
-
- module.logger.experiment.log({
- # class_idx represents the ground truth, i.e. these were samples to be classified as class_idx
- # but they were predicted to belong to a different class
- f"val/top_confusing_of_class_{class_idx}": wandb.Image(top_confusing_samples)
- })
-
- def reset(self):
- self.activations = -99 * torch.ones(self.num_classes, self.num_images)
- self.images = torch.zeros(torch.Size([self.num_classes, self.num_images]) + torch.Size(self.image_shape))
\ No newline at end of file
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py
deleted file mode 100644
index b8fb2154b6d0618b62281578e5e947bca487cee4..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-backports.makefile
-~~~~~~~~~~~~~~~~~~
-
-Backports the Python 3 ``socket.makefile`` method for use with anything that
-wants to create a "fake" socket object.
-"""
-import io
-from socket import SocketIO
-
-
-def backport_makefile(
- self, mode="r", buffering=None, encoding=None, errors=None, newline=None
-):
- """
- Backport of ``socket.makefile`` from Python 3.5.
- """
- if not set(mode) <= {"r", "w", "b"}:
- raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,))
- writing = "w" in mode
- reading = "r" in mode or not writing
- assert reading or writing
- binary = "b" in mode
- rawmode = ""
- if reading:
- rawmode += "r"
- if writing:
- rawmode += "w"
- raw = SocketIO(self, rawmode)
- self._makefile_refs += 1
- if buffering is None:
- buffering = -1
- if buffering < 0:
- buffering = io.DEFAULT_BUFFER_SIZE
- if buffering == 0:
- if not binary:
- raise ValueError("unbuffered streams must be binary")
- return raw
- if reading and writing:
- buffer = io.BufferedRWPair(raw, raw, buffering)
- elif reading:
- buffer = io.BufferedReader(raw, buffering)
- else:
- assert writing
- buffer = io.BufferedWriter(raw, buffering)
- if binary:
- return buffer
- text = io.TextIOWrapper(buffer, encoding, errors, newline)
- text.mode = mode
- return text
diff --git a/spaces/plzdontcry/dakubettergpt/src/types/theme.ts b/spaces/plzdontcry/dakubettergpt/src/types/theme.ts
deleted file mode 100644
index 937ef1525dd2cca331e1dcdd964cae000049f982..0000000000000000000000000000000000000000
--- a/spaces/plzdontcry/dakubettergpt/src/types/theme.ts
+++ /dev/null
@@ -1 +0,0 @@
-export type Theme = 'light' | 'dark';
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py
deleted file mode 100644
index 5fa7adfac842bfa5689fd1a41ae4017be1ebff6f..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py
+++ /dev/null
@@ -1,529 +0,0 @@
-"""
-This module started out as largely a copy paste from the stdlib's
-optparse module with the features removed that we do not need from
-optparse because we implement them in Click on a higher level (for
-instance type handling, help formatting and a lot more).
-
-The plan is to remove more and more from here over time.
-
-The reason this is a different module and not optparse from the stdlib
-is that there are differences in 2.x and 3.x about the error messages
-generated and optparse in the stdlib uses gettext for no good reason
-and might cause us issues.
-
-Click uses parts of optparse written by Gregory P. Ward and maintained
-by the Python Software Foundation. This is limited to code in parser.py.
-
-Copyright 2001-2006 Gregory P. Ward. All rights reserved.
-Copyright 2002-2006 Python Software Foundation. All rights reserved.
-"""
-# This code uses parts of optparse written by Gregory P. Ward and
-# maintained by the Python Software Foundation.
-# Copyright 2001-2006 Gregory P. Ward
-# Copyright 2002-2006 Python Software Foundation
-import typing as t
-from collections import deque
-from gettext import gettext as _
-from gettext import ngettext
-
-from .exceptions import BadArgumentUsage
-from .exceptions import BadOptionUsage
-from .exceptions import NoSuchOption
-from .exceptions import UsageError
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
- from .core import Argument as CoreArgument
- from .core import Context
- from .core import Option as CoreOption
- from .core import Parameter as CoreParameter
-
-V = t.TypeVar("V")
-
-# Sentinel value that indicates an option was passed as a flag without a
-# value but is not a flag option. Option.consume_value uses this to
-# prompt or use the flag_value.
-_flag_needs_value = object()
-
-
-def _unpack_args(
- args: t.Sequence[str], nargs_spec: t.Sequence[int]
-) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]:
- """Given an iterable of arguments and an iterable of nargs specifications,
- it returns a tuple with all the unpacked arguments at the first index
- and all remaining arguments as the second.
-
- The nargs specification is the number of arguments that should be consumed
- or `-1` to indicate that this position should eat up all the remainders.
-
- Missing items are filled with `None`.
- """
- args = deque(args)
- nargs_spec = deque(nargs_spec)
- rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = []
- spos: t.Optional[int] = None
-
- def _fetch(c: "te.Deque[V]") -> t.Optional[V]:
- try:
- if spos is None:
- return c.popleft()
- else:
- return c.pop()
- except IndexError:
- return None
-
- while nargs_spec:
- nargs = _fetch(nargs_spec)
-
- if nargs is None:
- continue
-
- if nargs == 1:
- rv.append(_fetch(args))
- elif nargs > 1:
- x = [_fetch(args) for _ in range(nargs)]
-
- # If we're reversed, we're pulling in the arguments in reverse,
- # so we need to turn them around.
- if spos is not None:
- x.reverse()
-
- rv.append(tuple(x))
- elif nargs < 0:
- if spos is not None:
- raise TypeError("Cannot have two nargs < 0")
-
- spos = len(rv)
- rv.append(None)
-
- # spos is the position of the wildcard (star). If it's not `None`,
- # we fill it with the remainder.
- if spos is not None:
- rv[spos] = tuple(args)
- args = []
- rv[spos + 1 :] = reversed(rv[spos + 1 :])
-
- return tuple(rv), list(args)
-
-
-def split_opt(opt: str) -> t.Tuple[str, str]:
- first = opt[:1]
- if first.isalnum():
- return "", opt
- if opt[1:2] == first:
- return opt[:2], opt[2:]
- return first, opt[1:]
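A few sample inputs illustrate how `split_opt` peels the prefix off an option string (function body copied from above):

```python
def split_opt(opt):
    # Copied from the parser above: separate the prefix from the name.
    first = opt[:1]
    if first.isalnum():
        return "", opt           # bare word, no prefix
    if opt[1:2] == first:
        return opt[:2], opt[2:]  # doubled prefix, e.g. "--"
    return first, opt[1:]        # single-character prefix

print(split_opt("--verbose"))  # ('--', 'verbose')
print(split_opt("-v"))         # ('-', 'v')
print(split_opt("verbose"))    # ('', 'verbose')
print(split_opt("/debug"))     # ('/', 'debug')
```

Note that any non-alphanumeric leading character counts as a prefix, which is how Windows-style `/debug` options can be supported.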
-
-
-def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str:
- if ctx is None or ctx.token_normalize_func is None:
- return opt
- prefix, opt = split_opt(opt)
- return f"{prefix}{ctx.token_normalize_func(opt)}"
-
-
-def split_arg_string(string: str) -> t.List[str]:
- """Split an argument string as with :func:`shlex.split`, but don't
- fail if the string is incomplete. Ignores a missing closing quote or
- incomplete escape sequence and uses the partial token as-is.
-
- .. code-block:: python
-
- split_arg_string("example 'my file")
- ["example", "my file"]
-
- split_arg_string("example my\\")
- ["example", "my"]
-
- :param string: String to split.
- """
- import shlex
-
- lex = shlex.shlex(string, posix=True)
- lex.whitespace_split = True
- lex.commenters = ""
- out = []
-
- try:
- for token in lex:
- out.append(token)
- except ValueError:
- # Raised when end-of-string is reached in an invalid state. Use
- # the partial token as-is. The quote or escape character is in
- # lex.state, not lex.token.
- out.append(lex.token)
-
- return out
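The lenient error handling is the whole point of `split_arg_string`: an unterminated quote yields the partial token instead of raising, which matters for shell-completion input. A runnable sketch with the function body copied from above:

```python
import shlex

def split_arg_string(string):
    # Copied from the parser above: shlex-style splitting that
    # tolerates an unterminated quote or a trailing escape.
    lex = shlex.shlex(string, posix=True)
    lex.whitespace_split = True
    lex.commenters = ""
    out = []
    try:
        for token in lex:
            out.append(token)
    except ValueError:
        # End of string reached inside a quote or escape; keep the
        # partial token accumulated so far.
        out.append(lex.token)
    return out

print(split_arg_string("example 'my file"))  # ['example', 'my file']
print(split_arg_string("pip install -r reqs.txt"))
# ['pip', 'install', '-r', 'reqs.txt']
```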
-
-
-class Option:
- def __init__(
- self,
- obj: "CoreOption",
- opts: t.Sequence[str],
- dest: t.Optional[str],
- action: t.Optional[str] = None,
- nargs: int = 1,
- const: t.Optional[t.Any] = None,
- ):
- self._short_opts = []
- self._long_opts = []
- self.prefixes: t.Set[str] = set()
-
- for opt in opts:
- prefix, value = split_opt(opt)
- if not prefix:
- raise ValueError(f"Invalid start character for option ({opt})")
- self.prefixes.add(prefix[0])
- if len(prefix) == 1 and len(value) == 1:
- self._short_opts.append(opt)
- else:
- self._long_opts.append(opt)
- self.prefixes.add(prefix)
-
- if action is None:
- action = "store"
-
- self.dest = dest
- self.action = action
- self.nargs = nargs
- self.const = const
- self.obj = obj
-
- @property
- def takes_value(self) -> bool:
- return self.action in ("store", "append")
-
- def process(self, value: t.Any, state: "ParsingState") -> None:
- if self.action == "store":
- state.opts[self.dest] = value # type: ignore
- elif self.action == "store_const":
- state.opts[self.dest] = self.const # type: ignore
- elif self.action == "append":
- state.opts.setdefault(self.dest, []).append(value) # type: ignore
- elif self.action == "append_const":
- state.opts.setdefault(self.dest, []).append(self.const) # type: ignore
- elif self.action == "count":
- state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore
- else:
- raise ValueError(f"unknown action '{self.action}'")
- state.order.append(self.obj)
-
-
-class Argument:
- def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1):
- self.dest = dest
- self.nargs = nargs
- self.obj = obj
-
- def process(
- self,
- value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]],
- state: "ParsingState",
- ) -> None:
- if self.nargs > 1:
- assert value is not None
- holes = sum(1 for x in value if x is None)
- if holes == len(value):
- value = None
- elif holes != 0:
- raise BadArgumentUsage(
- _("Argument {name!r} takes {nargs} values.").format(
- name=self.dest, nargs=self.nargs
- )
- )
-
- if self.nargs == -1 and self.obj.envvar is not None and value == ():
- # Replace empty tuple with None so that a value from the
- # environment may be tried.
- value = None
-
- state.opts[self.dest] = value # type: ignore
- state.order.append(self.obj)
-
-
-class ParsingState:
- def __init__(self, rargs: t.List[str]) -> None:
- self.opts: t.Dict[str, t.Any] = {}
- self.largs: t.List[str] = []
- self.rargs = rargs
- self.order: t.List["CoreParameter"] = []
-
-
-class OptionParser:
- """The option parser is an internal class that is ultimately used to
- parse options and arguments. It's modelled after optparse and brings
- a similar but vastly simplified API. It should generally not be used
- directly as the high level Click classes wrap it for you.
-
- It's not nearly as extensible as optparse or argparse as it does not
- implement features that are implemented on a higher level (such as
- types or defaults).
-
- :param ctx: optionally the :class:`~click.Context` where this parser
- should go with.
- """
-
- def __init__(self, ctx: t.Optional["Context"] = None) -> None:
- #: The :class:`~click.Context` for this parser. This might be
- #: `None` for some advanced use cases.
- self.ctx = ctx
- #: This controls how the parser deals with interspersed arguments.
- #: If this is set to `False`, the parser will stop on the first
- #: non-option. Click uses this to implement nested subcommands
- #: safely.
- self.allow_interspersed_args: bool = True
- #: This tells the parser how to deal with unknown options. By
- #: default it will error out (which is sensible), but there is a
- #: second mode where it will ignore it and continue processing
- #: after shifting all the unknown options into the resulting args.
- self.ignore_unknown_options: bool = False
-
- if ctx is not None:
- self.allow_interspersed_args = ctx.allow_interspersed_args
- self.ignore_unknown_options = ctx.ignore_unknown_options
-
- self._short_opt: t.Dict[str, Option] = {}
- self._long_opt: t.Dict[str, Option] = {}
- self._opt_prefixes = {"-", "--"}
- self._args: t.List[Argument] = []
-
- def add_option(
- self,
- obj: "CoreOption",
- opts: t.Sequence[str],
- dest: t.Optional[str],
- action: t.Optional[str] = None,
- nargs: int = 1,
- const: t.Optional[t.Any] = None,
- ) -> None:
- """Adds a new option named `dest` to the parser. The destination
- is not inferred (unlike with optparse) and needs to be explicitly
- provided. Action can be any of ``store``, ``store_const``,
- ``append``, ``append_const`` or ``count``.
-
- The `obj` can be used to identify the option in the order list
- that is returned from the parser.
- """
- opts = [normalize_opt(opt, self.ctx) for opt in opts]
- option = Option(obj, opts, dest, action=action, nargs=nargs, const=const)
- self._opt_prefixes.update(option.prefixes)
- for opt in option._short_opts:
- self._short_opt[opt] = option
- for opt in option._long_opts:
- self._long_opt[opt] = option
-
- def add_argument(
- self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1
- ) -> None:
- """Adds a positional argument named `dest` to the parser.
-
- The `obj` can be used to identify the argument in the order list
- that is returned from the parser.
- """
- self._args.append(Argument(obj, dest=dest, nargs=nargs))
-
- def parse_args(
- self, args: t.List[str]
- ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]:
- """Parses positional arguments and returns ``(values, args, order)``
- for the parsed options and arguments as well as the leftover
- arguments if there are any. The order is a list of objects as they
- appear on the command line. If arguments appear multiple times,
- they will be recorded multiple times as well.
- """
- state = ParsingState(args)
- try:
- self._process_args_for_options(state)
- self._process_args_for_args(state)
- except UsageError:
- if self.ctx is None or not self.ctx.resilient_parsing:
- raise
- return state.opts, state.largs, state.order
-
- def _process_args_for_args(self, state: ParsingState) -> None:
- pargs, args = _unpack_args(
- state.largs + state.rargs, [x.nargs for x in self._args]
- )
-
- for idx, arg in enumerate(self._args):
- arg.process(pargs[idx], state)
-
- state.largs = args
- state.rargs = []
-
- def _process_args_for_options(self, state: ParsingState) -> None:
- while state.rargs:
- arg = state.rargs.pop(0)
- arglen = len(arg)
- # Double dashes always handled explicitly regardless of what
- # prefixes are valid.
- if arg == "--":
- return
- elif arg[:1] in self._opt_prefixes and arglen > 1:
- self._process_opts(arg, state)
- elif self.allow_interspersed_args:
- state.largs.append(arg)
- else:
- state.rargs.insert(0, arg)
- return
-
- # Say this is the original argument list:
- # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
- # ^
- # (we are about to process arg(i)).
- #
- # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
- # [arg0, ..., arg(i-1)] (any options and their arguments will have
- # been removed from largs).
- #
- # The while loop will usually consume 1 or more arguments per pass.
- # If it consumes 1 (e.g. arg is an option that takes no arguments),
- # then after _process_arg() is done the situation is:
- #
- # largs = subset of [arg0, ..., arg(i)]
- # rargs = [arg(i+1), ..., arg(N-1)]
- #
- # If allow_interspersed_args is false, largs will always be
- # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
- # not a very interesting subset!
-
- def _match_long_opt(
- self, opt: str, explicit_value: t.Optional[str], state: ParsingState
- ) -> None:
- if opt not in self._long_opt:
- from difflib import get_close_matches
-
- possibilities = get_close_matches(opt, self._long_opt)
- raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx)
-
- option = self._long_opt[opt]
- if option.takes_value:
- # At this point it's safe to modify rargs by injecting the
- # explicit value, because no exception is raised in this
- # branch. This means that the inserted value will be fully
- # consumed.
- if explicit_value is not None:
- state.rargs.insert(0, explicit_value)
-
- value = self._get_value_from_state(opt, option, state)
-
- elif explicit_value is not None:
- raise BadOptionUsage(
- opt, _("Option {name!r} does not take a value.").format(name=opt)
- )
-
- else:
- value = None
-
- option.process(value, state)
-
- def _match_short_opt(self, arg: str, state: ParsingState) -> None:
- stop = False
- i = 1
- prefix = arg[0]
- unknown_options = []
-
- for ch in arg[1:]:
- opt = normalize_opt(f"{prefix}{ch}", self.ctx)
- option = self._short_opt.get(opt)
- i += 1
-
- if not option:
- if self.ignore_unknown_options:
- unknown_options.append(ch)
- continue
- raise NoSuchOption(opt, ctx=self.ctx)
- if option.takes_value:
- # Any characters left in arg? Pretend they're the
- # next arg, and stop consuming characters of arg.
- if i < len(arg):
- state.rargs.insert(0, arg[i:])
- stop = True
-
- value = self._get_value_from_state(opt, option, state)
-
- else:
- value = None
-
- option.process(value, state)
-
- if stop:
- break
-
- # If we got any unknown options we recombine the string of the
- # remaining options and re-attach the prefix, then report that
- # to the state as new larg. This way there is basic combinatorics
- # that can be achieved while still ignoring unknown arguments.
- if self.ignore_unknown_options and unknown_options:
- state.largs.append(f"{prefix}{''.join(unknown_options)}")
-
- def _get_value_from_state(
- self, option_name: str, option: Option, state: ParsingState
- ) -> t.Any:
- nargs = option.nargs
-
- if len(state.rargs) < nargs:
- if option.obj._flag_needs_value:
- # Option allows omitting the value.
- value = _flag_needs_value
- else:
- raise BadOptionUsage(
- option_name,
- ngettext(
- "Option {name!r} requires an argument.",
- "Option {name!r} requires {nargs} arguments.",
- nargs,
- ).format(name=option_name, nargs=nargs),
- )
- elif nargs == 1:
- next_rarg = state.rargs[0]
-
- if (
- option.obj._flag_needs_value
- and isinstance(next_rarg, str)
- and next_rarg[:1] in self._opt_prefixes
- and len(next_rarg) > 1
- ):
- # The next arg looks like the start of an option, don't
- # use it as the value if omitting the value is allowed.
- value = _flag_needs_value
- else:
- value = state.rargs.pop(0)
- else:
- value = tuple(state.rargs[:nargs])
- del state.rargs[:nargs]
-
- return value
-
- def _process_opts(self, arg: str, state: ParsingState) -> None:
- explicit_value = None
- # Long option handling happens in two parts. The first part is
- # supporting explicitly attached values. In any case, we will try
- # to long match the option first.
- if "=" in arg:
- long_opt, explicit_value = arg.split("=", 1)
- else:
- long_opt = arg
- norm_long_opt = normalize_opt(long_opt, self.ctx)
-
- # At this point we will match the (assumed) long option through
- # the long option matching code. Note that this allows options
- # like "-foo" to be matched as long options.
- try:
- self._match_long_opt(norm_long_opt, explicit_value, state)
- except NoSuchOption:
- # At this point the long option matching failed, and we need
- # to try with short options. However, there is a special rule:
- # if we have a two-character option prefix (as is the case for
- # "--foo"), we do not dispatch to the short-option code and
- # instead raise the no-such-option error.
- if arg[:2] not in self._opt_prefixes:
- self._match_short_opt(arg, state)
- return
-
- if not self.ignore_unknown_options:
- raise
-
- state.largs.append(arg)
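The interspersed-argument handling in the deleted `_process_args_for_options` loop can be illustrated with a small self-contained sketch. Note this is a toy model for illustration only: `split_opts_and_args` is an invented name, not part of click's API, and it treats every `-x`-style token as a flag rather than resolving registered options. When `allow_interspersed_args` is true the loop keeps scanning past positionals for further options; when false it stops at the first non-option, and `--` always ends option processing.

```python
def split_opts_and_args(argv, allow_interspersed=True):
    """Toy model of click's _process_args_for_options: collect
    flag-style tokens into `opts` and positionals into `largs`."""
    opts, largs = [], []
    rargs = list(argv)
    while rargs:
        arg = rargs.pop(0)
        if arg == "--":
            # A double dash always ends option processing.
            largs.extend(rargs)
            break
        if arg.startswith("-") and len(arg) > 1:
            opts.append(arg)
        elif allow_interspersed:
            # Keep scanning: options after this positional are
            # still recognized as options.
            largs.append(arg)
        else:
            # Stop on the first non-option; everything remaining
            # is treated as positional (how click nests subcommands).
            largs.append(arg)
            largs.extend(rargs)
            break
    return opts, largs
```

For example, `split_opts_and_args(["a", "-v", "b"])` yields `(["-v"], ["a", "b"])`, while the same input with `allow_interspersed=False` yields `([], ["a", "-v", "b"])`, mirroring how click halts option parsing at the first non-option so a nested subcommand can receive its own arguments untouched.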
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py
deleted file mode 100644
index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import logging
-import os
-import tempfile
-import shutil
-import json
-from subprocess import check_call, check_output
-from tarfile import TarFile
-
-from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME
-
-
-def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None):
- """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar*
-
- filename is the timezone tarball from ``ftp.iana.org/tz``.
-
- """
- tmpdir = tempfile.mkdtemp()
- zonedir = os.path.join(tmpdir, "zoneinfo")
- moduledir = os.path.dirname(__file__)
- try:
- with TarFile.open(filename) as tf:
- for name in zonegroups:
- tf.extract(name, tmpdir)
- filepaths = [os.path.join(tmpdir, n) for n in zonegroups]
-
- _run_zic(zonedir, filepaths)
-
- # write metadata file
- with open(os.path.join(zonedir, METADATA_FN), 'w') as f:
- json.dump(metadata, f, indent=4, sort_keys=True)
- target = os.path.join(moduledir, ZONEFILENAME)
- with TarFile.open(target, "w:%s" % format) as tf:
- for entry in os.listdir(zonedir):
- entrypath = os.path.join(zonedir, entry)
- tf.add(entrypath, entry)
- finally:
- shutil.rmtree(tmpdir)
-
-
-def _run_zic(zonedir, filepaths):
- """Calls the ``zic`` compiler in a compatible way to get a "fat" binary.
-
- Recent versions of ``zic`` default to ``-b slim``, while older versions
- don't even have the ``-b`` option (but default to "fat" binaries). The
- current version of dateutil does not support Version 2+ TZif files, which
- causes problems when used in conjunction with "slim" binaries, so this
- function is used to ensure that we always get a "fat" binary.
- """
-
- try:
- help_text = check_output(["zic", "--help"])
- except OSError as e:
- _print_on_nosuchfile(e)
- raise
-
- if b"-b " in help_text:
- bloat_args = ["-b", "fat"]
- else:
- bloat_args = []
-
- check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths)
-
-
-def _print_on_nosuchfile(e):
- """Print helpful troubleshooting message
-
- e is an OSError raised when invoking ``zic`` via subprocess
-
- """
- if e.errno == 2:
- logging.error(
- "Could not find zic. Perhaps you need to install "
- "libc-bin or some other package that provides it, "
- "or it's not in your PATH?")
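The version detection inside the deleted `_run_zic` can be factored into a small pure function; the name `zic_bloat_args` is invented here for illustration, as dateutil keeps this logic inline. Newer `zic` releases advertise a `-b` flag in their `--help` output and default to "slim" TZif binaries, so the flag must be passed explicitly, while older releases lack `-b` but already emit "fat" binaries by default.

```python
def zic_bloat_args(help_text: bytes) -> list:
    """Decide which extra flags to pass to zic, based on its
    --help output (mirrors the check in _run_zic)."""
    if b"-b " in help_text:
        # This zic understands -b: ask for "fat" (version 1)
        # binaries, since dateutil cannot read slim TZif 2+ files.
        return ["-b", "fat"]
    # Older zic: no -b option, but fat output is already the default.
    return []
```

The returned list is spliced straight into the `check_call` invocation, so the command line stays valid for both old and new compilers.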
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py
deleted file mode 100644
index 233f18c00f1adc946caa8affd970307a21490ea1..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from __future__ import annotations
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.blocks import BlockContext
-from gradio.component_meta import ComponentMeta
-from gradio.events import Events
-
-set_documentation_group("layout")
-
-
-class Tabs(BlockContext, metaclass=ComponentMeta):
- """
- Tabs is a layout element within Blocks that can contain multiple "Tab" Components.
- """
-
- EVENTS = [Events.change, Events.select]
-
- def __init__(
- self,
- *,
- selected: int | str | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- render: bool = True,
- ):
- """
- Parameters:
- selected: The currently selected tab. Must correspond to an id passed to one of the child TabItems. Defaults to the first TabItem.
- visible: If False, Tabs will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional string or list of strings that are assigned as the class of this component in the HTML DOM. Can be used for targeting CSS styles.
- render: If False, this layout will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
- """
- BlockContext.__init__(
- self,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- render=render,
- )
- self.selected = selected
-
-
-@document()
-class Tab(BlockContext, metaclass=ComponentMeta):
- """
- Tab (or its alias TabItem) is a layout element. Components defined within the Tab will be visible when this tab is selected.
- Example:
- with gr.Blocks() as demo:
- with gr.Tab("Lion"):
- gr.Image("lion.jpg")
- gr.Button("New Lion")
- with gr.Tab("Tiger"):
- gr.Image("tiger.jpg")
- gr.Button("New Tiger")
- Guides: controlling-layout
- """
-
- EVENTS = [Events.select]
-
- def __init__(
- self,
- label: str | None = None,
- *,
- id: int | str | None = None,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- render: bool = True,
- ):
- """
- Parameters:
- label: The visual label for the tab
- id: An optional identifier for the tab, required if you wish to control the selected tab from a predict function.
- elem_id: An optional string that is assigned as the id of the