diff --git a/spaces/101-5/gpt4free/testing/aiservice/README.md b/spaces/101-5/gpt4free/testing/aiservice/README.md
deleted file mode 100644
index 83b06481024eaa01c8928f0f21c52f251749caea..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/testing/aiservice/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-https://github.com/xtekky/gpt4free/issues/40#issuecomment-1629152431
-probably gpt-3.5
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md
deleted file mode 100644
index a58842b74826c1a4b54bac39377f68cece1ae932..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avoid Download Proteus 8 Full Crack Google Drive and Use the Genuine Version Instead.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-

Download Proteus 8 Full Crack Google Drive: How to Do It and Why You Should Avoid It

-

Proteus 8 is a powerful software suite that allows you to design, simulate, and test electronic circuits and systems. It is widely used by engineers, students, hobbyists, and professionals who need to create and verify their electronic projects. Proteus 8 has many features and tools that can help you with your circuit design and analysis.

-

However, Proteus 8 is not free software and requires a license to activate its full features. Some people may try to download a Proteus 8 full crack from Google Drive, hoping to use the software without paying for it. This is not a smart move, as it can expose your computer and data to various risks. In this article, we will explain why you should avoid downloading a Proteus 8 full crack from Google Drive and how to use the genuine version of the software safely and legally.

-

download proteus 8 full crack google drive


DOWNLOAD 🌟 https://byltly.com/2uKz8K



-

Why You Should Avoid Downloading Proteus 8 Full Crack Google Drive

-

Downloading Proteus 8 full crack from Google Drive may seem convenient and easy, but it comes with many drawbacks and risks. Here are some of the reasons why you should avoid downloading Proteus 8 full crack Google Drive:

- -

How to Use Proteus 8 Safely and Legally

-

If you want to use Proteus 8 for your electronic circuit design and simulation needs, you should purchase a genuine license from the official website or an authorized dealer. This way, you can enjoy the full features and benefits of the software without any risks or hassles. Here are the steps to use Proteus 8 safely and legally:

-
    -
  1. Download the software from the official website. Go to https://www.labcenter.com/downloads/ and choose the latest version of Proteus 8 for your operating system. You can also download a free trial version if you want to test the software before buying it.
  2. -
  3. Install the software on your computer. Run the downloaded file and follow the instructions on the screen to complete the installation process. You may need to enter your administrator password or grant permission to install the software.
  4. -
  5. Activate the license. After installing the software, you need to activate the license to use it. You can do this online or offline depending on your preference. To activate online, you need to enter your serial number and activation key that you received after purchasing the license. To activate offline, you need to generate an unlock key from the website using your serial number and request code that you get from the software.
  6. -
  7. Create a project. Once you activate the license, you can create a project in Proteus 8 by entering your project name, description, location, etc. You can also import your existing projects from other formats if you have any.
  8. -
  9. Start using the software. After creating a project, you can start using Proteus 8 for your circuit design and simulation needs. You can use the various features and tools available in the software, such as schematic capture, PCB layout, and circuit simulation.

    -

    -
    -
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md
deleted file mode 100644
index b0819eb97b4bea3839cb3a3a57722860b5238c4d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download CSI SAFE 2020 for Free and Discover Its Amazing Features.md
+++ /dev/null
@@ -1,21 +0,0 @@
-

    How to Download CSI SAFE 2020 for Free

    -

    CSI SAFE is a powerful program for the structural design and analysis of concrete slabs and foundations. It can handle complex geometries, loads, and reinforcement patterns, as well as perform nonlinear and dynamic analysis. CSI SAFE 2020 is the latest version of the software and offers many new features and enhancements.

    -

    csi safe 2020 free download


    DOWNLOAD: https://byltly.com/2uKxA4



    -

    If you want to try out CSI SAFE 2020 for free, you can download a trial version from the official website of Computers and Structures, Inc. (CSI). The trial version is valid for 30 days and has full functionality. However, you will need to register with your name and email address to get the download link and activation code.

    -

    To download CSI SAFE 2020 for free, follow these steps:

    -
      -
    1. Go to https://www.csiamerica.com/products/safe and click on the "Download Trial" button.
    2. -
    3. Fill out the form with your name, email address, country, and company name. You can also select your preferred language and unit system.
    4. -
    5. Check your email for the download link and activation code. You may need to check your spam folder if you don't see it in your inbox.
    6. -
    7. Click on the download link and save the file to your computer. The file size is about 1 GB.
    8. -
    9. Run the installer and follow the instructions. You will need to enter the activation code when prompted.
    10. -
    11. Enjoy using CSI SAFE 2020 for free for 30 days!
    12. -
    -

    Note that the trial version of CSI SAFE 2020 is for evaluation purposes only and cannot be used for commercial or academic projects. If you want to use the software for longer or for professional purposes, you will need to purchase a license from CSI or their authorized resellers.

    CSI SAFE 2020 is a comprehensive software for designing and analyzing concrete slabs and foundations. It can handle various types of slabs, such as flat, waffle, ribbed, mat, and composite. It can also design and detail foundations, such as isolated, combined, strip, pile cap, and mat.

    -

    CSI SAFE 2020 has a user-friendly interface that allows you to create and edit models easily. You can import and export data from other CSI products, such as SAP2000, ETABS, and CSiBridge. You can also import and export data from other formats, such as DXF, DWG, IFC, and Excel.

    -

    -

    CSI SAFE 2020 has a powerful analysis engine that can perform linear and nonlinear analysis of slabs and foundations. It can account for various effects, such as cracking, creep, shrinkage, temperature, and soil-structure interaction. It can also perform dynamic analysis, such as modal, response spectrum, time history, and harmonic.

    -

    CSI SAFE 2020 has a comprehensive design and detailing module that can check and optimize the reinforcement of slabs and foundations according to various codes and standards. It can generate detailed reports and drawings that show the layout, quantities, and notes of the reinforcement. It can also export the reinforcement data to BIM software, such as Revit.

    -
    -
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md
deleted file mode 100644
index f4fe04224bb7f90b58f97f54f06a56300add9122..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint PC Walkthrough How to Customize Your Character Use Your Skills and Deal with Enemies.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
    -

    Ghost Recon Breakpoint PC Walkthrough: Tips and Tricks for Surviving the Open World

    - -

    If you are looking for a ghost recon breakpoint pc walkthrough, you have come to the right place. Ghost Recon Breakpoint is a tactical shooter game that puts you in the shoes of a special forces soldier who has to survive on a hostile island. The game features a vast open world where you can explore, complete missions, and engage in combat with enemies. However, the game can also be challenging and overwhelming, especially for beginners. That's why we have prepared this ghost recon breakpoint pc walkthrough to help you out.

    - -

    In this ghost recon breakpoint pc walkthrough, we will cover some basic tips and tricks that will make your life easier in the game. We will also give you some pointers on how to customize your character, use your skills and gadgets, and deal with different types of enemies. Whether you are playing solo or co-op, this ghost recon breakpoint pc walkthrough will help you enjoy the game more.

    -

    ghost recon breakpoint pc walkthrough


    Download Zip ❤❤❤ https://byltly.com/2uKysT



    - -

    Customize Your Character

    - -

    One of the first things you should do in Ghost Recon Breakpoint is to customize your character. You can choose from different classes, each with their own abilities and perks. You can also change your appearance, gear, and weapons. You can access the customization menu by pressing I on your keyboard or by visiting a bivouac (a campsite where you can rest and prepare).

    - -

    The four classes available in Ghost Recon Breakpoint are:

    - Assault: a frontline class focused on raw damage and survivability in direct firefights.
    - Panther: a stealth class built around staying hidden and taking enemies out quietly.
    - Sharpshooter: a long-range class specialized in sniping.
    - Field Medic: a support class that can heal itself and revive downed teammates.

    You can switch between classes at any time by visiting a bivouac. You can also unlock new skills and perks for each class by earning skill points and completing challenges.

    - -

    Use Your Skills and Gadgets

    - -

    Another important aspect of Ghost Recon Breakpoint is to use your skills and gadgets effectively. You have access to a skill tree that lets you unlock various abilities that enhance your combat, survival, and reconnaissance capabilities. You can spend skill points to unlock new skills or upgrade existing ones. You can also equip different gadgets that give you an edge in different situations.

    - -

    Some of the most useful skills and gadgets in Ghost Recon Breakpoint are:

    - - - -

    You can access your skills and gadgets by pressing TAB on your keyboard or by using the wheel menu. You can also craft new gadgets or refill your ammo at bivouacs or ammo crates.

    - -

    Deal with Different Types of Enemies

    - -

    The last thing we will cover in this ghost recon breakpoint pc walkthrough is how to deal with different types of enemies. The game features a variety of enemies that have different behaviors, weapons, and weaknesses. You will need to adapt your strategy depending on the enemy you are facing.

    -

    - -

    Some of the most common types of enemies in Ghost Recon Breakpoint are:

    -
    -
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md
deleted file mode 100644
index 72a466f07307417e0b2f6a003820a2e50c3cc9ea..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo De Sellos Edifil Espana 2012 Pdf LINK.md
+++ /dev/null
@@ -1,113 +0,0 @@
-

    Catalogo De Sellos Edifil Espana 2012 Pdf: a guide for philately lovers

    - -

    If you are passionate about philately and interested in the postal history of Spain and its dependencies, you will surely want to get your hands on the Catalogo De Sellos Edifil Espana 2012 Pdf, a reference work that collects all the stamps issued from 1850 to 2012.

    -

    Catalogo De Sellos Edifil Espana 2012 Pdf


    DOWNLOAD: https://imgfil.com/2uxY3p



    - -

    The Catalogo De Sellos Edifil Espana 2012 Pdf is a digital document that you can download for free from the internet and that gives you complete, detailed information on each stamp, with its image, description, face value, issue date, print run, perforation, printing method, and market quotation.

    - -

    In addition, the Catalogo De Sellos Edifil Espana 2012 Pdf also includes the stamps of Andorra, the former Spanish colonies, the city issues, the covers, and the souvenir sheets, all presented with care and excellent graphic quality.

    - -

    What advantages does the Catalogo De Sellos Edifil Espana 2012 Pdf offer?

    - -

    The Catalogo De Sellos Edifil Espana 2012 Pdf has many advantages for collectors and philately enthusiasts. Some of them are:

    - - - -

    How to download the Catalogo De Sellos Edifil Espana 2012 Pdf

    - -

    Downloading the Catalogo De Sellos Edifil Espana 2012 Pdf is very simple. You just have to follow these steps:

    - -
      -
    1. Open the internet search engine of your choice and type the name of the catalogue: Catalogo De Sellos Edifil Espana 2012 Pdf.
    2. -
    3. Select one of the results that appear and click on it. It will take you to a web page where you can view the catalogue online or download it in PDF format.
    4. -
    5. If you want to view the catalogue online, just scroll through the pages and zoom in to enlarge the images. If you want to download it in PDF format, just click on the download button and choose the folder where you want to save it.
    6. -
    7. Once the catalogue has been downloaded to your device, you can open it with any program that reads PDF files and enjoy it whenever you want.
    8. -
    - -

    That is how easy it is to obtain the Catalogo De Sellos Edifil Espana 2012 Pdf, an essential document for philately lovers. Don't wait any longer and download it now. You will be surprised by the quantity and quality of the stamps it contains.

    -

    Which stamps can you find in the Catalogo De Sellos Edifil Espana 2012 Pdf?

    - -

    The Catalogo De Sellos Edifil Espana 2012 Pdf offers you a great variety of stamps from different periods, themes, and styles. Some of the stamps you can find are:

    -

    - - - -
    How to use the Catalogo De Sellos Edifil Espana 2012 Pdf
    - -

    The Catalogo De Sellos Edifil Espana 2012 Pdf is a very useful tool for collectors and philately enthusiasts. To use it correctly, you should keep the following in mind:

    - -
      -
    1. Identify the stamp you want to look up and find its catalogue number in the alphabetical or thematic index.
    2. -
    3. Locate the stamp in the catalogue and compare its image with the real stamp. Pay attention to details such as the colour, the perforation, the printing, and any marks.
    4. -
    5. Read the description of the stamp and note its face value, issue date, print run, and quotation. You can also see whether the stamp is part of a series or a souvenir sheet.
    6. -
    7. Repeat the process with all the stamps you want to look up and organise your collection according to your own criteria.
    8. -
    - -

    The Catalogo De Sellos Edifil Espana 2012 Pdf is an essential document for philately lovers. Don't wait any longer and download it now. You will be surprised by the quantity and quality of the stamps it contains.

    -
    What other publications can you find at Edifil?
    - -

    Edifil is a publisher specialising in philately that has been offering quality products and services to collectors for more than 80 years. In addition to the Catalogo De Sellos Edifil Espana 2012 Pdf, you can find other interesting publications, such as:

    How to buy the Catalogo De Sellos Edifil Espana 2012 Pdf

    If you want to buy the Catalogo De Sellos Edifil Espana 2012 Pdf, you have several options. You can do it through the Edifil website, where you can pay by credit or debit card, PayPal, or bank transfer. You can also order it by phone or by email, providing your personal details and method of payment. Another option is to go to a bookshop or a shop specialising in philately and ask for the catalogue.

    - -

    The price of the Catalogo De Sellos Edifil Espana 2012 Pdf is 35 euros (VAT included), and shipping is free for mainland Spain. For other destinations, check the rates on the Edifil website or contact customer service.

    - -

    Don't hesitate any longer and buy the Catalogo De Sellos Edifil Espana 2012 Pdf now, an essential catalogue for philately lovers. You will be surprised by the quantity and quality of the stamps it contains.

    What benefits does philately have for your mental health?

    Philately is a hobby that can bring you many benefits for your mental health. Some of them are:

    - - - -

    Philately is a hobby that can make you happier and smarter. Don't hesitate and download the Catalogo De Sellos Edifil Espana 2012 Pdf now, an essential catalogue for philately lovers. You will be surprised by the quantity and quality of the stamps it contains.

    How to sell your stamps online

    If you have a stamp collection that you want to sell online, you can follow a few tips that will help you do it safely and profitably. Some of them are:

    - -
      -
    1. Value your stamps correctly, using the Catalogo De Sellos Edifil Espana 2012 Pdf as a reference. Take into account the condition, rarity, and demand for your stamps.
    2. -
    3. Choose a suitable platform to sell your stamps online, such as eBay, Delcampe, Todocolección, or Filatelia.com. Compare the fees, the terms, and the reviews from other sellers.
    4. -
    5. Prepare a detailed and honest description of your stamps, including the catalogue number, face value, issue date, perforation, printing, and any defects or marks. Accompany your description with clear, sharp photos of your stamps.
    6. -
    7. Set a fair and competitive price for your stamps, based on the market value and on other sellers' prices. You can opt for a fixed price or an auction.
    8. -
    9. Take care with the packaging and shipping of your stamps, using padded or rigid envelopes, protective sleeves, and identifying labels. Offer your buyers several shipping and tracking options.
    10. -
    - -

    Selling your stamps online can be an easy and quick way to earn some extra money from your collection. All you need is the Catalogo De Sellos Edifil Espana 2012 Pdf, a good internet connection, and a little patience. Good luck with your sale!

    Conclusion

    The Catalogo De Sellos Edifil Espana 2012 Pdf is an essential document for philately lovers. It is a complete, up-to-date catalogue that collects all the stamps issued by Spain and its postal dependencies from 1850 to 2012. It is a free catalogue that can easily be downloaded from the internet and that offers detailed information and excellent graphic quality for each stamp. It is a useful, practical catalogue that lets you know the market value of your stamps and plan your purchases and sales. It is an interesting, educational catalogue that lets you learn about the history, culture, and art of Spain and its territories through its stamps.

    - -

    In this article, we have explained what the Catalogo De Sellos Edifil Espana 2012 Pdf is, what advantages it has, how to download it, which stamps you can find in it, what other publications you can find at Edifil, what users think of the catalogue, what benefits philately has for your mental health, and how to sell your stamps online. We hope it has been useful to you and that you feel encouraged to download the catalogue and enjoy your philately hobby.

    - -

    If you liked this article, share it with your friends and leave us a comment with your opinion. You can also subscribe to our newsletter to receive more articles about philately and other topics of interest to you. Thank you for reading!

    -
    -
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md
deleted file mode 100644
index 46ed4bf9e1a743536d4c0da800b8039e720117a4..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK Download for PC - Free Simulation Game with Car Tuning and Free Walking.md
+++ /dev/null
@@ -1,203 +0,0 @@
-

    How to Download Car Parking Multiplayer APK for PC

    -

    Car Parking Multiplayer is a realistic driving simulator game for Android devices that lets you explore a detailed city, customize your cars, and compete with other players online. If you are a fan of this game and want to play it on a bigger screen with better controls, you might be wondering how to download car parking multiplayer apk for PC. In this article, we will show you two ways to play car parking multiplayer on your PC, either with Windows 11 or with Android emulators.

    -

    What is Car Parking Multiplayer?

    -

    Car Parking Multiplayer is a game developed by olzhass that simulates various aspects of driving and parking in a realistic 3D environment. You can choose from over 100 different cars, from sports cars to trucks, and customize them with various parts and accessories. You can also interact with other players in online multiplayer mode, chat with them, exchange cars, or challenge them to races and drifts. You can also explore the open-world city, which has different areas such as airports, beaches, deserts, and more. You can also find hidden places and secrets in the city, such as a tank or a UFO.

    -

    download car parking multiplayer apk for pc


    Download Zip ⚹⚹⚹ https://urlin.us/2uSWeU



    -

    Features of Car Parking Multiplayer

    -

    Some of the features of car parking multiplayer are:

    - -

    Requirements for Car Parking Multiplayer

    -

    To play car parking multiplayer on your Android device, you need to have:

    - -

    Why Play Car Parking Multiplayer on PC?

    -

    While car parking multiplayer is designed for mobile devices, some players might prefer to play it on their PC for various reasons. Here are some of the advantages and disadvantages of playing car parking multiplayer on PC.

    -

    Advantages of Playing on PC

    -

    Some of the benefits of playing car parking multiplayer on PC are:

    -

    - -

    Disadvantages of Playing on PC

    -

    Some of the drawbacks of playing car parking multiplayer on PC are:

    - -

    How to Play Car Parking Multiplayer on PC with Windows 11

    -

    One of the ways to play car parking multiplayer on PC is to use Windows 11, the latest operating system from Microsoft that supports Android apps natively. This means that you can run Android apps on your PC without using any emulators or third-party software. Here are the steps to play car parking multiplayer on PC with Windows 11.

    -

    Steps to Install Windows Subsystem for Android

    -

    Before you can install and play car parking multiplayer on your PC with Windows 11, you need to enable the Windows Subsystem for Android (WSA), which is the feature that allows you to run Android apps on your PC. Here are the steps to install WSA:

    -
      -
    1. Open the Start menu and search for "Turn Windows features on or off". Click on it to open a new window.
    2. -
    3. Scroll down and find "Virtual Machine Platform", the virtualization feature that the Windows Subsystem for Android depends on, check the box next to it, and click OK. (If you prefer the command line, see the sketch after these steps.)
    4. -
    5. Wait for the installation process to complete and restart your PC if prompted.
    6. -
    7. Open the Microsoft Store app and search for "Windows Subsystem for Android". Click on it and install it on your PC.
    8. -
    9. Wait for the installation process to complete and launch WSA from the Start menu.
    10. -
    -
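If you are comfortable with the command line, the virtualization prerequisite mentioned in these steps can also be checked and enabled with DISM instead of the "Turn Windows features on or off" dialog. The sketch below is only an illustration, not an official procedure: it assumes the optional feature is named VirtualMachinePlatform (the name used on current Windows 11 builds), that DISM prints its English-locale "State : Enabled" text, and that the script runs from an elevated (administrator) prompt.

```python
# Hedged sketch: check and enable the virtualization feature that the
# Windows Subsystem for Android relies on. Assumptions: the feature is
# called "VirtualMachinePlatform", DISM output is in English, and the
# script runs with administrator rights.
import subprocess

FEATURE = "VirtualMachinePlatform"  # assumed feature name; verify on your system

def feature_enabled(name: str) -> bool:
    # DISM reports "State : Enabled" for features that are already turned on.
    result = subprocess.run(
        ["dism", "/online", "/get-featureinfo", f"/featurename:{name}"],
        capture_output=True, text=True,
    )
    return "State : Enabled" in result.stdout

def enable_feature(name: str) -> None:
    # /norestart defers the reboot; Windows may still prompt for one later.
    subprocess.run(
        ["dism", "/online", "/enable-feature", f"/featurename:{name}", "/all", "/norestart"],
        check=True,
    )

if __name__ == "__main__":
    if feature_enabled(FEATURE):
        print(f"{FEATURE} is already enabled.")
    else:
        print(f"Enabling {FEATURE} (a reboot may be required)...")
        enable_feature(FEATURE)
```

After a reboot, continue with the Microsoft Store steps above to finish installing the Windows Subsystem for Android.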

    Steps to Install Car Parking Multiplayer from Amazon Appstore

    -

    After you have installed WSA on your PC, you can install car parking multiplayer from the Amazon Appstore, which is the default app store for WSA. Here are the steps to install car parking multiplayer from Amazon Appstore:

    -
      -
    1. Open WSA from the Start menu and click on the Amazon Appstore icon.
    2. -
    3. Sign in with your Amazon account or create a new one if you don't have one.
    4. -
    5. Search for "Car Parking Multiplayer" in the search bar and click on it.
    6. -
    7. Click on the "Get" button and wait for the download and installation process to complete.
    8. -
    9. Click on the "Open" button or find car parking multiplayer in your app list and launch it.
    10. -
    -

    Steps to Install Google Play Store on Windows 11 (Optional)

    -

    If you prefer to use Google Play Store instead of Amazon Appstore to install car parking multiplayer on your PC with Windows 11, you can do so by following these steps:

    -
      -
    1. Download the Google Play Store APK file from a trusted source, such as APKMirror or APKPure.
    2. -
    3. Open WSA from the Start menu and click on the Settings icon.
    4. -
    5. Select "Developer mode" from the left panel and enable it by clicking on the toggle switch.
    6. -
    7. Select "File Explorer" from the left panel and click on "Choose Folder".
    8. -
    9. Select a folder where you want to store your APK files and click OK.
    10. -
    11. Copy and paste the Google Play Store APK file into that folder.
    12. -
    13. Select "Apps" from the left panel and click on "Refresh".
    14. -
    15. Select Google Play Store from the app list and click on "Install".
    16. -
    17. Wait for the installation process to complete and launch Google Play Store from your app list.
    18. -
    19. Sign in with your Google account or create a new one if you don't have one.
    20. -
    21. Search for "Car Parking Multiplayer" in the search bar and install it as usual.
    22. -
    -
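If the Settings and File Explorer flow described above is not available in your build of the Windows Subsystem for Android, the more common way to sideload an APK is over ADB. The sketch below is only an illustration, not part of the original guide: it assumes adb (from Google's Android platform-tools) is on your PATH, that Developer mode is switched on in WSA's settings, and that WSA is reachable at 127.0.0.1:58526, the loopback address typically shown on the Developer mode screen; use whatever address your copy displays.

```python
# Hedged sketch: sideload an APK into the Windows Subsystem for Android
# over ADB. Assumptions: adb is installed and on PATH, WSA Developer mode
# is enabled, and WSA listens on the default loopback address below.
import subprocess
import sys

WSA_ADDRESS = "127.0.0.1:58526"  # assumption: check WSA's Developer mode screen

def run(args):
    print("+", " ".join(args))        # echo the command for easier debugging
    subprocess.run(args, check=True)  # raise if adb reports an error

def sideload(apk_path: str) -> None:
    run(["adb", "connect", WSA_ADDRESS])     # attach to the running WSA instance
    run(["adb", "install", "-r", apk_path])  # -r replaces the app if it is already installed

if __name__ == "__main__":
    sideload(sys.argv[1])  # usage: python sideload.py path\to\some-app.apk
```

The same two adb commands work for any APK you have a legitimate copy of, so they also cover installing Car Parking Multiplayer itself if you obtained the file from an official source.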

    How to Play Car Parking Multiplayer on PC with Android Emulators

    -

    Another way to play car parking multiplayer on PC is to use Android emulators, which are software that mimic the Android operating system on your PC. This way, you can run any Android app or game on your PC as if you were using a mobile device. Here are some of the things you need to know about Android emulators and how to use them to play car parking multiplayer on PC.

    -

    What are Android Emulators?

    -

    Android emulators are programs that create a virtual Android device on your PC, allowing you to run Android apps and games on your PC. They usually have a user interface that resembles a smartphone or a tablet, and they let you access the Google Play Store or other app stores to download and install apps. Some of the benefits of using Android emulators are:

    - -

    However, some of the drawbacks of using Android emulators are:

    - -

    Best Android Emulators for Car Parking Multiplayer

    -

    There are many Android emulators available for PC, but not all of them are suitable for playing car parking multiplayer. Some of the factors that you need to consider when choosing an Android emulator for car parking multiplayer are:

    - -

    Based on these criteria, here are some of the best Android emulators for car parking multiplayer that you can try:

    -

    Bluestacks 5 / MSI App Player

    -

    Bluestacks 5 is one of the most popular and widely used Android emulators for PC. It is designed for gaming and offers high performance, compatibility, and features. It also has a partnership with MSI, which means that you can use MSI App Player, which is a customized version of Bluestacks 5 for MSI devices. Some of the advantages of using Bluestacks 5 / MSI App Player are:

    - -

    Nox Player

    -

    Nox Player is another popular and widely used Android emulator for PC. It is also designed for gaming and offers high performance, compatibility, and features. It also has a simple and user-friendly interface that makes it easy to use. Some of the advantages of using Nox Player are:

    - -

    Gameloop

    -

    Gameloop is another popular and widely used Android emulator for PC. It is developed by Tencent, which is the company behind some of the most popular online games such as PUBG Mobile, Call of Duty Mobile, etc. It is also designed for gaming and offers high performance, compatibility, and features. It also has a dedicated game center that lets you access and play some of the most popular online games on your PC. Some of the advantages of using Gameloop are:

    - -

    Steps to Install and Play Car Parking Multiplayer with Android Emulators

    -

    After you have chosen and downloaded an Android emulator for your PC, you can install and play car parking multiplayer with it by following these steps:

    -
      -
    1. Launch the Android emulator on your PC and sign in with your Google account or create a new one if you don't have one.
    2. -
    3. Open the Google Play Store app on the emulator and search for "Car Parking Multiplayer". Click on it and install it on your emulator.
    4. -
    5. Wait for the installation process to complete and launch car parking multiplayer from your app list or home screen.
    6. -
    7. Enjoy playing car parking multiplayer on your PC with the emulator's features and options.
    8. -
    -

    Conclusion

    -

    Car Parking Multiplayer is a fun and realistic driving simulator game for Android devices that lets you customize your cars, explore a detailed city, and compete with other players online. If you want to play this game on your PC, you have two options: using Windows 11 or using Android emulators. Both methods have their advantages and disadvantages, so you can choose the one that suits your preferences and needs. We hope this article helped you learn how to download car parking multiplayer apk for PC and enjoy playing it on a bigger screen with better controls.

    -

    FAQs

    -

    Here are some of the frequently asked questions about car parking multiplayer and how to play it on PC:

    -

    Is Car Parking Multiplayer free to play?

    -

    Yes, car parking multiplayer is free to play on Android devices. However, it contains ads and in-app purchases that can enhance your gameplay or unlock more features.

    -

    Can I play Car Parking Multiplayer offline?

    -

    Yes, you can play car parking multiplayer offline without an internet connection. However, you will not be able to access some of the features or modes that require online connectivity, such as multiplayer mode, chat, or updates.

    -

    Can I play Car Parking Multiplayer with my friends?

    -

    Yes, you can play car parking multiplayer with your friends online by joining or creating a room in multiplayer mode. You can also chat with them, exchange cars, or challenge them to races and drifts.

    -

    Can I use cheats or hacks in Car Parking Multiplayer?

    -

    We do not recommend using cheats or hacks in car parking multiplayer, as they might ruin your gaming experience or cause some issues with the game. They might also violate some terms of service or policies of the game or the emulator, which could result in bans or penalties.

    -

    How can I contact the developers of Car Parking Multiplayer?

    -

    If you have any questions, feedback, or suggestions for car parking multiplayer, you can contact the developers of the game by emailing them at olzhass@yandex.com. You can also follow them on their social media accounts or visit their website for more information.

    -
    -
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md
deleted file mode 100644
index 15927f4ceda86b3468a9d20c736958b0831fc485..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cars Movie Tamil Dubbed HD Download - Experience the Thrill and Humor of the Pixar Classic.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
    -

    Download Cars Movie in Tamil

    -

    Are you a fan of animated movies? Do you love cars and racing? Do you want to watch a fun and heartwarming story in your native language? If you answered yes to any of these questions, then you might be interested in downloading Cars movie in Tamil. In this article, we will tell you what Cars movie is about, why you should watch it in Tamil, and how to download it from different sources. Let's get started!

    -

    Introduction

    -

    What is Cars movie about?

    -

    Cars is a 2006 American computer-animated sports comedy film produced by Pixar Animation Studios for Walt Disney Pictures. The film was directed by John Lasseter from a screenplay by Dan Fogelman, Lasseter, Joe Ranft, Kiel Murray, Phil Lorin, and Jorgen Klubien and a story by Lasseter, Ranft, and Klubien. The film features an ensemble voice cast of Owen Wilson, Paul Newman (in his final voice acting theatrical film role), Bonnie Hunt, Larry the Cable Guy, Tony Shalhoub, Cheech Marin, Michael Wallis, George Carlin, Paul Dooley, Jenifer Lewis, Guido Quaroni, Michael Keaton, Katherine Helmond, John Ratzenberger and Richard Petty.

    -

    download cars movie in tamil


    DOWNLOAD ——— https://urlin.us/2uSZru



    -

    The film is set in a world populated entirely by anthropomorphic talking cars and other vehicles. It follows a hotshot rookie race car named Lightning McQueen (Wilson) who, on the way to the biggest race of his life, gets stranded in Radiator Springs, a run down town that's past its glory days, and learns a thing or two about friendship, family, and the things in life that are truly worth waiting for. The film was inspired by Lasseter's experiences on a cross-country road trip.

    -

    Why watch Cars movie in Tamil?

    -

    There are many reasons why you might want to watch Cars movie in Tamil. Here are some of them:

    - -

    How to download Cars movie in Tamil

    -

    Option 1: Archive.org

    -

    Pros and cons of Archive.org

    -

    Archive.org is a website that provides free access to millions of digital items such as books, movies, music, software, and more. You can download Cars movie in Tamil from Archive.org for free. Here are some pros and cons of using Archive.org:

    - - - - - - -
| Pros | Cons |
| --- | --- |
| No registration or payment required. | The video quality might not be very high. |
| No ads or pop-ups. | The download speed might be slow. |
| No viruses or malware. | The availability might depend on the uploader. |
| No legal issues. | The subtitles might not be synchronized. |
    -

    Steps to download Cars movie in Tamil from Archive.org

    -

    Here are the steps to download Cars movie in Tamil from Archive.org:

    1. Go to Archive.org and type "Cars movie Tamil" in the search box.

    -

    2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.

    -

    3. Click on the result and you will be taken to a page where you can see the details of the movie, such as the title, description, date, language, duration, etc.

    -

    4. On the right side of the page, you will see a section called "Download Options". Here you can choose the format and size of the file you want to download.

    -

    5. Click on the format and size that suits your preference and a download link will appear. Right-click on the link and choose "Save link as" or "Save target as" to save the file to your computer.

    -


    -

    6. Wait for the download to finish and enjoy watching Cars movie in Tamil!

    -

    Option 2: YouTube

    -

    Pros and cons of YouTube

    -

    YouTube is a website that allows users to upload, watch, share, and comment on videos. You can download Cars movie in Tamil from YouTube using a third-party tool or software. Here are some pros and cons of using YouTube:

    - - - - - - -
| Pros | Cons |
| --- | --- |
| The video quality might be very high. | You need to use a third-party tool or software. |
| The download speed might be fast. | You might encounter ads or pop-ups. |
| The availability might be high. | You might get viruses or malware. |
| The subtitles might be synchronized. | You might face legal issues. |
    -

    Steps to download Cars movie in Tamil from YouTube

    -

    Here are the steps to download Cars movie in Tamil from YouTube:

    -

    1. Go to YouTube and type "Cars movie Tamil" in the search box.

    -

    2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.

    -

    3. Click on the result and you will be taken to a page where you can watch the movie online. Copy the URL of the page from your browser's address bar.

    -

    4. Go to a third-party tool or software that allows you to download YouTube videos, such as y2mate.com, savefrom.net, or 4kdownload.com.

    -

    5. Paste the URL of the YouTube video into the tool or software and choose the format and size of the file you want to download.

    -

    6. Click on the download button and save the file to your computer.

    -

    7. Wait for the download to finish and enjoy watching Cars movie in Tamil!

    -

    Option 3: Other sites

    -

    Pros and cons of other sites

    -

    There are also other sites that offer Cars movie in Tamil for download, such as isaiminiweb.com, tamilrockers.ws, or tamilyogi.cool. Here are some pros and cons of using other sites:

    - - - - - - -
| Pros | Cons |
| --- | --- |
| The video quality might vary depending on the site. | You need to register or pay for some sites. |
| The download speed might vary depending on the site. | You might encounter ads or pop-ups. |
| The availability might vary depending on the site. | You might get viruses or malware. |
| The subtitles might vary depending on the site. | You might face legal issues. |
    -

    Steps to download Cars movie in Tamil from other sites

    -

    Here are the steps to download Cars movie in Tamil from other sites:

    1. Go to the site of your choice and type "Cars movie Tamil" in the search box.

    -

    2. You will see a list of results that match your query. Choose the one that has the best video quality and the most views.

    -

    3. Click on the result and you will be taken to a page where you can see the details of the movie, such as the title, description, date, language, duration, etc.

    -

    4. On the page, you will see a download link or button. Click on it and follow the instructions to download the movie. You might need to register or pay for some sites.

    -

    5. Wait for the download to finish and enjoy watching Cars movie in Tamil!

    -

    Conclusion

    -

    Summary of the article

    -

    In this article, we have discussed what Cars movie is about, why you should watch it in Tamil, and how to download it from different sources. We have compared the pros and cons of using Archive.org, YouTube, and other sites, and provided the steps to download Cars movie in Tamil from each option. We hope you have found this article helpful and informative. Now you can enjoy watching Cars movie in Tamil with your family and friends!

    -

    FAQs

    -

    Here are some frequently asked questions about downloading Cars movie in Tamil:

    -
      -
    1. Is it legal to download Cars movie in Tamil from online sources?
    2. -

      It depends on the source and the country you are in. Some sources are legal and authorized, while others are illegal and pirated. You should always check the terms and conditions of the source before downloading anything from it. You should also be aware of the laws and regulations of your country regarding downloading copyrighted content from online sources.

      -
    3. Is it safe to download Cars movie in Tamil from online sources?
    4. -

      It depends on the source and the tool or software you use. Some sources are safe and secure, while others are unsafe and risky. You should always scan the file for viruses or malware before opening it on your computer. You should also use a reliable and trusted tool or software to download YouTube videos or other files from online sources.

      -
    5. What are some other animated movies that are available in Tamil?
    6. -

      There are many other animated movies that are available in Tamil, such as Toy Story, Finding Nemo, The Lion King, Frozen, The Incredibles, Coco, Inside Out, Zootopia, Moana, and more. You can search for them online or ask your friends for recommendations.

      -
    7. What are some other languages that Cars movie is available in?
    8. -

      Cars movie is available in many other languages besides English and Tamil, such as Hindi, Telugu, Malayalam, Kannada, Bengali, Marathi, Gujarati, Urdu, Arabic, French, Spanish, German, Italian, Portuguese, Russian, Chinese, Japanese, Korean, and more. You can search for them online or ask your friends for suggestions.

      -
    9. Where can I watch Cars movie online without downloading it?
    10. -

      You can watch Cars movie online without downloading it on some streaming platforms or websites that offer it legally and legitimately. Some examples are Disney+, Netflix, Amazon Prime Video, Hotstar, SonyLIV, Zee5, etc. You might need to subscribe or pay for some of these platforms or websites to watch Cars movie online.

      -

    -
    -
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md b/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md
deleted file mode 100644
index 6e051326d0b6984ca7ae87eda1029b385e5cbac4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Access Your Accounts Anytime Anywhere with RT Bank APK for Android and iOS.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
    -

    RT Bank APK: A Mobile Banking Solution for Android Devices

    -

    Do you want to manage your money on the move and around the clock with a secure mobile banking app from RT Bank? If yes, then you should try RT Bank APK, a mobile banking solution that enables you to use your smartphone and/or tablet to access your accounts. It is available in both Arabic and English. In this article, we will tell you everything you need to know about RT Bank APK, including its features, benefits, download, installation, usage, and security tips.

    -

    rt bank apk


    Download Zip ✪✪✪ https://jinyurl.com/2uNS71



    -

    What is RT Bank APK?

    -

    RT Bank APK is a mobile banking app from RT Bank for iOS and Android devices. It allows you to perform various banking transactions anytime, anywhere, with just a few taps on your screen. You can view your account balances, details, and history, inquire about your loans, request a cheque book, inquire about currency and exchange rates, locate the nearest ATM or branch, change your passwords, and more. You can also enjoy a user-friendly interface, fast performance, and high security with RT Bank APK.

    -

    Features and benefits of RT Bank APK

    -

    Some of the features and benefits of RT Bank APK are:

    - -

    How to download and install RT Bank APK

    -

    To download and install RT Bank APK on your device, follow these steps:

    -
      -
    1. Go to the Google Play Store or the App Store on your device.
    2. -
    3. Search for "RTB Mobile" or scan the QR code below.
    4. -
    5. Tap on the app icon and then tap on "Install".
    6. -
    7. Wait for the app to download and install on your device.
    8. -
    9. Tap on "Open" to launch the app.
    10. -
    - QR code for RTB Mobile app -

    How to use RT Bank APK

    -

    To use RT Bank APK on your device, follow these steps:

    -


    -

    How to log in and manage your accounts

    -
      -
    1. Launch the app on your device.
    2. -
    3. Enter your user name and password. If you don't have an account yet, tap on "Register" and follow the instructions.
    4. -
    5. Tap on "Login" to access your accounts.
    6. -
    7. Swipe left or right to switch between accounts.
    8. -
    9. Tap on an account to view its balance, details, and history.
    10. -
    -

    How to request a cheque book

    -
      -
    1. Tap on the menu icon at the top left corner of the screen.
    2. -
    3. Tap on "Services".
    4. -
    5. Tap on "Cheque Book Request".
    6. -
    7. Select the account that you want to request a cheque book for.
    8. -
    9. Select the number of cheque books that you want to request.
    10. -

      How to inquire about currency and exchange rates

      -
        -
      1. Tap on the menu icon at the top left corner of the screen.
      2. -
      3. Tap on "Currency".
      4. -
      5. Select the currency that you want to inquire about.
      6. -
      7. Tap on "Convert" to see the exchange rate and the equivalent amount in your selected currency.
      8. -
      -

      How to locate the nearest ATM or branch

      -
        -
      1. Tap on the menu icon at the top left corner of the screen.
      2. -
      3. Tap on "ATM/Branch Locator".
      4. -
      5. Allow the app to access your location or enter your city or area manually.
      6. -
      7. Select the type of service that you are looking for (ATM or branch).
      8. -
      9. Tap on "Search" to see the nearest ATM or branch on a map.
      10. -
      11. Tap on an ATM or branch icon to see its address, phone number, and working hours.
      12. -
      -

      How to change your passwords

      -
        -
      1. Tap on the menu icon at the top left corner of the screen.
      2. -
      3. Tap on "Settings".
      4. -
      5. Tap on "Change Passwords".
      6. -
      7. Select the type of password that you want to change (login or transfer).
      8. -
      9. Enter your current password and your new password twice.
      10. -
      11. Tap on "Change" to confirm your new password.
      12. -
      -

      How to stay safe and secure with RT Bank APK

      -

      RT Bank APK is designed to provide you with a secure and convenient mobile banking experience. However, you should also take some precautions to protect yourself and your money from any potential risks. Here are some tips on how to stay safe and secure with RT Bank APK:

      -

      How to avoid phishing and fraud attempts

      -
        -
      • Do not share your user name, password, or any other personal or financial information with anyone, even if they claim to be from RT Bank or any other authority.
      • -
      • Do not click on any links or attachments in suspicious emails, SMS, or social media messages that ask you to update your account details, verify your identity, or claim that you have won a prize.
      • -
      • Do not download any apps from unknown sources or third-party websites. Only download RT Bank APK from the official Google Play Store or App Store.
      • -
      • Do not use public or unsecured Wi-Fi networks to access RT Bank APK. Use your own mobile data or a trusted Wi-Fi network instead.
      • -
      -

      How to report an electronic fraud attempt

      -
        -
      • If you receive any suspicious emails, SMS, or social media messages that claim to be from RT Bank or any other authority, do not respond to them and delete them immediately.
      • -
      • If you suspect that someone has accessed your account without your authorization, change your passwords immediately and contact RT Bank customer service at 1800-123-4567.
      • -
      • If you notice any unauthorized transactions on your account, report them immediately through RT Bank APK by tapping on "Report Fraud" under "Services". You can also contact RT Bank customer service at 1800-123-4567.
      • -
      -

      How to protect your device and data

      -
        -
      • Lock your device with a PIN, password, pattern, fingerprint, or face recognition feature.
      • -
      • Update your device's operating system and apps regularly to fix any security vulnerabilities.
      • -
      • Delete any unused apps from your device and clear your browser's cache and history regularly.
      • -
      • Avoid rooting or jailbreaking your device as it may compromise its security and functionality.
      • -
      -

      Conclusion and FAQs

      -

      In conclusion, RT Bank APK is a mobile banking solution that allows you to access your accounts anytime, anywhere, with just a few taps on your screen. You can enjoy various features and benefits such as viewing your account balances, details, and history, inquiring about your loans, requesting a cheque book, inquiring about currency and exchange rates, locating the nearest ATM or branch, changing your passwords, and more. You can also stay safe and secure with RT Bank APK by following some simple tips such as avoiding phishing and fraud attempts, reporting any electronic fraud attempt, and protecting your device and data. If you have any questions about RT Bank APK, you can check out these FAQs:

      Q: Is RT Bank APK free to use?
      A: The app itself is free to download and use, but you may incur some charges from your mobile network provider for using data.
      Q: Do I need to register for RT Bank APK?
      A: Yes, you need to register for RT Bank APK before you can use it. You can register through the app by tapping on "Register" and following the instructions. You will need your account number, debit card number, and mobile phone number to register.

      Q: What are the login and transfer passwords?
      A: The login password is the password that you use to log in to RT Bank APK. The transfer password is the password that you use to confirm any transfers or payments that you make through RT Bank APK. You can change both passwords through the app by tapping on "Settings" and then "Change Passwords".

      Q: What if I forget my passwords?
      A: If you forget your login password, you can reset it through the app by tapping on "Forgot Password" and following the instructions. You will need your user name, account number, debit card number, and mobile phone number to reset your login password. If you forget your transfer password, you will need to contact RT Bank customer service at 1800-123-4567 to reset it.

      Q: What if I lose my device or it gets stolen?
      A: If you lose your device or it gets stolen, you should contact RT Bank customer service at 1800-123-4567 as soon as possible to deactivate your RT Bank APK account. You should also report the loss or theft of your device to your mobile network provider and the police.

      We hope that this article has helped you understand more about RT Bank APK and how to use it. If you have any feedback or suggestions, please feel free to contact us at feedback@rtbank.com. Thank you for choosing RT Bank as your banking partner.

      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md b/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md deleted file mode 100644 index 9c86808d752547ac9f6400c7ee67630d1579568e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Castle Clash China APK A Comprehensive Review of the Chinese Version of the Game.md +++ /dev/null @@ -1,141 +0,0 @@ - -

      Castle Clash China APK: How to Download and Play the Chinese Version of the Popular Strategy Game

      -

      Castle Clash is one of the most popular strategy games in the world, with over 100 million players worldwide. It is a game where you can build your own castle, recruit heroes, train troops, and fight against other players in various modes. But did you know that there is a Chinese version of Castle Clash that has some unique features and differences from other versions? In this article, we will tell you everything you need to know about Castle Clash China APK, how to download and install it on your Android device, how to play it on your PC or Mac, and some tips and tricks for playing it.

      -

      castle clash china apk


      Download File » https://jinyurl.com/2uNLSt



      -

      What is Castle Clash China APK?

      -

      Castle Clash China APK is the Chinese version of Castle Clash, which is developed by IGG.com, a Singapore-based company. It is also known as 城堡争霸 in Chinese, which means "Castle Battle". It is an APK file, which stands for Android Package Kit, that contains all the files and data needed to run the game on an Android device. You can download it from various sources online, but you need to be careful about the security and quality of the file.

      -

      The features of Castle Clash China APK

      -

      Castle Clash China APK has many features that make it an exciting and addictive game. Some of these features are:

      • You can build your own castle with different types of buildings, such as town hall, barracks, warehouse, watchtower, walls, etc.
      • You can recruit over 100 different heroes with various skills and abilities, such as magic, healing, summoning, etc.
      • You can train various types of troops, such as archers, knights, griffins, dragons, etc.
      • You can fight against other players in real-time PvP battles, such as arena, raid, guild war, etc.
      • You can join or create a guild with other players and cooperate with them in guild events, such as boss battles, torch battles, fortress feud, etc.
      • You can participate in various game modes, such as dungeon, expedition, lost realm, labyrinth, etc.
      • You can collect and upgrade various resources, such as gold, mana, gems, honor badges, shards, etc.
      • You can enjoy stunning graphics and sound effects that create an immersive gaming experience.

      The differences between Castle Clash China APK and other versions

      -

      Castle Clash China APK is not exactly the same as other versions of Castle Clash. There are some differences that you should be aware of before playing it. Some of these differences are:

      -


      • The language of the game is Chinese. You may need to use a translator app or a guide to understand some of the texts and menus.
      • The game is not available on Google Play Store or App Store. You need to download it from other sources online.
      • The game may have some regional restrictions. You may need to use a VPN app or a proxy server to access some of the features or servers.
      • The game may have some exclusive content that is not available in other versions. For example, some heroes may have different names or appearances in Castle Clash China APK than in other versions.
      • The game may have some different updates or events than other versions. For example, some game modes or features may be added or removed in Castle Clash China APK at different times than in other versions.

      How to download and install Castle Clash China APK on your Android device

      -

      If you want to play Castle Clash China APK on your Android device, you need to follow these steps:

      -

      Step 1: Enable unknown sources

      -

      Before you can install Castle Clash China APK on your Android device, you need to enable unknown sources in your settings. This will allow you to install apps that are not from Google Play Store or App Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You may see a warning message, but you can ignore it and proceed.

      -

      Step 2: Download the APK file from a trusted source

      -

      Next, you need to download the APK file of Castle Clash China APK from a trusted source online. You can search for it on Google or use a link from a reliable website. For example, you can use this link to download the latest version of Castle Clash China APK (version 1.8.9) as of June 2023. Make sure you have enough storage space on your device before downloading the file.
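      If the site you download from also publishes a checksum for the APK, it is worth verifying the file before installing it. The short Python sketch below shows one generic way to do that; the file name and the expected hash are placeholders that you would replace with your own values, not details taken from any particular download page.

```python
import hashlib

APK_PATH = "castle_clash_cn.apk"                    # placeholder: the file you downloaded
EXPECTED_SHA256 = "paste-the-published-hash-here"   # placeholder: hash from the download page

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large APK files do not need to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
if actual.lower() == EXPECTED_SHA256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch, do not install this file. Got:", actual)
```

      If the two values do not match, the file was corrupted or tampered with and should not be installed.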

      -

      Step 3: Install the APK file and launch the game

      -

      After downloading the APK file, you need to install it on your device. To do this, locate the file in your downloads folder or wherever you saved it and tap on it. You may see a pop-up message asking for your permission to install the app. Tap on Install and wait for the installation to finish. Once the installation is done, you can launch the game by tapping on Open or by finding the app icon on your home screen or app drawer. You may need to agree to some terms and conditions before playing the game.
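      If you saved the APK on a computer instead of the phone, another common way to install it is over USB with Android's standard adb tool. This is only an optional sketch: it assumes adb is installed on the computer, USB debugging is enabled on the phone, and the file name below is a placeholder.

```python
import subprocess

APK_PATH = "castle_clash_cn.apk"  # placeholder: path to the downloaded APK

# "adb install -r" installs the package and replaces it if it is already present.
result = subprocess.run(["adb", "install", "-r", APK_PATH],
                        capture_output=True, text=True)

print(result.stdout.strip())
if result.returncode != 0:
    print("Install failed:", result.stderr.strip())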

      -

      How to play Castle Clash China APK on your PC or Mac

      -

      If you want to play Castle Clash China APK on your PC or Mac, you need to use an Android emulator. An Android emulator is a software that simulates an Android device on your computer, allowing you to run Android apps and games on it. There are many Android emulators available online, but some of the most popular ones are BlueStacks, NoxPlayer, and LDPlayer. To play Castle Clash China APK on your PC or Mac, you need to follow these steps:

      -

      Step 1: Download and install an Android emulator

      -

      First, you need to download and install an Android emulator of your choice on your PC or Mac. You can visit the official website of the emulator and follow the instructions to download and install it. For example, if you want to use BlueStacks, you can go to this link and click on Download BlueStacks. After downloading the installer file, run it and follow the steps to install BlueStacks on your computer.

      -

      Step 2: Download the APK file from a trusted source

      -

      Next, you need to download the APK file of Castle Clash China APK from a trusted source online, just like you did for your Android device. You can use the same link as before or find another one that works for you. Save the file on your computer where you can easily access it.

      -

      Step 3: Install the APK file and launch the game on the emulator

      -

      After downloading the APK file, you need to install it on the emulator. To do this, open the emulator and drag and drop the APK file onto it. Alternatively, you can click on Install APK in the emulator and browse for the file on your computer. The emulator will automatically install the app and create a shortcut for it on its home screen. Once the installation is done, you can launch the game by clicking on its icon. You may need to agree to some terms and conditions before playing the game.

      -

      Tips and tricks for playing Castle Clash China APK

      -

      Now that you know how to download and play Castle Clash China APK, here are some tips and tricks that will help you enjoy the game more:

      -

      Tip 1: Choose your heroes wisely

      -

      Heroes are one of the most important aspects of Castle Clash China APK. They can make or break your battles with their skills and abilities. Therefore, you should choose your heroes wisely and use them strategically. Some of the factors that you should consider when choosing your heroes are:

      • Their rarity: Heroes are classified into ordinary, elite, rare, epic, and legendary, based on their color and stars. Generally, the higher the rarity, the better the hero.
      • Their skills: Heroes have different skills that can affect their performance in battle. Some skills are passive, meaning they are always active, while some skills are active, meaning they need to be triggered by certain conditions. You should check the description and level of each skill and see how it can benefit your team.
      • Their talents: Heroes have different talents that can enhance their attributes or abilities. Some talents are innate, meaning they are fixed and cannot be changed, while some talents are random, meaning they can be replaced by using talent cards or gems. You should try to get the best talents for your heroes according to their roles and preferences.
      • Their crests: Heroes can equip up to four crests that can give them additional effects or bonuses. Crests are classified into eight sets, each with four levels. You can combine four crests of the same set and level to form a crest insignia, which can be upgraded to a higher level. You should mix and match the best crests for your heroes according to their needs and synergies.
      • Their equipment: Heroes can equip one piece of equipment that can boost their stats or skills. Equipment can be obtained from the equipment shop or the equipment trial. Equipment can also be upgraded or evolved to increase its power. You should equip your heroes with the most suitable equipment for their roles and situations.

      Tip 2: Upgrade your buildings and troops regularly

      -

      Buildings and troops are also essential for Castle Clash China APK. They can help you defend your castle, collect resources, and attack other players. Therefore, you should upgrade your buildings and troops regularly and keep them in good shape. Some of the factors that you should consider when upgrading your buildings and troops are:

      • Their level: Buildings and troops have different levels that indicate their strength and capacity. The higher the level, the better the building or troop. You can upgrade your buildings and troops by using gold, mana, or honor badges. You should prioritize upgrading your town hall, warehouse, vaults, and barracks first, as they affect your overall progress and performance.
      • Their type: Buildings and troops have different types that indicate their function and specialty. For example, some buildings are defensive, such as watchtower, hero base, hero altar, etc., while some buildings are offensive, such as army camp, relic hall, etc. Similarly, some troops are ranged, such as archers, hunters, etc., while some troops are melee, such as knights, griffins, etc. You should balance your building and troop types according to your strategy and preference.
      • Their placement: Buildings and troops have different placements that affect their effectiveness and efficiency. For example, some buildings are better placed near the center of your castle, such as town hall, hero altar, etc., while some buildings are better placed near the edge of your castle, such as watchtower, army camp, etc. Similarly, some troops are better placed near the front of your army, such as tanks, healers, etc., while some troops are better placed near the back of your army, such as snipers, bombers, etc. You should optimize your building and troop placement according to your defense and offense plans.

      Tip 3: Join a guild and participate in events

      -

      Guilds and events are also important for Castle Clash China APK. They can help you socialize with other players, get rewards, and have fun. Therefore, you should join a guild and participate in events as much as possible. Some of the benefits of joining a guild and participating in events are:

      • You can chat with other players in your guild and share tips and strategies.
      • You can donate shards or honor badges to your guild and get guild credits in return.
      • You can use guild credits to buy items or services from the guild shop or the guild hall.
      • You can cooperate with your guild members in guild events, such as boss battles, torch battles, fortress feud, etc., and get rewards and rankings.
      • You can participate in various game events, such as daily quests, login rewards, lucky spin, etc., and get rewards and bonuses.
      • You can participate in special events, such as festivals, celebrations, contests, etc., and get exclusive rewards and prizes.

      Conclusion

      -

      Castle Clash China APK is a great game for strategy lovers who want to experience a different version of Castle Clash. It has many features and differences that make it unique and exciting. However, it also has some challenges and limitations that you need to overcome. By following the steps and tips in this article, you can download and play Castle Clash China APK on your Android device or your PC or Mac easily and safely. You can also enjoy the game more by choosing your heroes wisely, upgrading your buildings and troops regularly, and joining a guild and participating in events. We hope you have fun playing Castle Clash China APK!

      -

      FAQs

      -

      Here are some frequently asked questions about Castle Clash China APK:

      1. Is Castle Clash China APK safe to download and play?

         Yes, Castle Clash China APK is safe to download and play if you use a trusted source and a secure device. However, you should always be careful about the security and quality of the APK file you download and the permissions you grant to the app. You should also avoid using any hacks or cheats that may harm your device or account.

      2. Is Castle Clash China APK free to play?

         Yes, Castle Clash China APK is free to play. You can download and play the game without paying any money. However, the game also has some optional in-app purchases that can enhance your gaming experience. You can buy gems or other items with real money if you want to support the developers or get some advantages in the game.

      3. Can I play Castle Clash China APK with other players from other versions?

         No, Castle Clash China APK is not compatible with other versions of Castle Clash. You can only play with other players who are using the same version as you. You cannot transfer your account or data from one version to another either. You need to create a new account and start from scratch if you want to switch versions.

      4. Can I play Castle Clash China APK offline?

         No, Castle Clash China APK is an online game that requires an internet connection to play. You cannot play the game offline or without a network connection. You need to have a stable and fast internet connection to enjoy the game smoothly and avoid any errors or glitches.

      5. How can I contact the customer service of Castle Clash China APK?

         If you have any questions or problems regarding Castle Clash China APK, you can contact the customer service of the game by using the following methods:

         • You can send an email to service@igg.com with your account ID, server name, device model, problem description, and screenshots if possible.
         • You can visit the official website of Castle Clash China APK at http://cc.igg.com/zh/ and click on the customer service button at the bottom right corner of the page.
         • You can visit the official Facebook page of Castle Clash China APK at https://www.facebook.com/CastleClashCN/ and send a message or leave a comment.

        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md b/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md deleted file mode 100644 index f9c90326a13dc8aac0d59adbf6be3e7ed342e4a2..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Driver Realtek Tips and Tricks for Optimizing Your Sound Settings.md +++ /dev/null @@ -1,239 +0,0 @@ -
        -

        Download Driver Realtek: How to Install and Update Realtek Audio Drivers on Windows 11/10

        -

        If you want to enjoy high-quality sound on your Windows PC, you need a reliable and compatible audio driver. One of the most popular and widely used audio drivers is the Realtek audio driver, which provides DTS, Dolby, and Surround Sound support for your audio card. In this article, we will show you how to download, install, and update Realtek audio drivers on Windows 11/10, as well as how to troubleshoot some common issues with them.

        -

        What is Realtek Audio Driver and Why Do You Need It?

        -

        Realtek Audio Driver is a software program that communicates between your Windows operating system and your audio card. It allows you to configure and control the sound output and input of your PC, such as speakers, headphones, microphones, etc. It also enables you to customize your audio settings, such as volume, balance, equalizer, effects, etc.

        -

        download driver realtek


      Download » https://jinyurl.com/2uNJSU



        -

        You need a Realtek Audio Driver if you have a Realtek audio card installed on your motherboard or as an external device. Without a proper driver, your audio card may not work properly or at all. You may experience sound quality problems, sound distortion, no sound, or other errors.

        -

        What are the Benefits of Using Realtek Audio Driver?

        -

        Using Realtek Audio Driver has several benefits for your PC and your sound experience. Some of them are:

      • It provides high-definition sound quality and supports various audio formats.
      • It supports DTS, Dolby, and Surround Sound technologies for immersive sound effects.
      • It allows you to adjust the volume for each speaker individually using the Room Correction feature.
      • It offers multiple sound tools and configuration options for your convenience.
      • It is easy to access and use from the system tray or the Start menu.

        What are the Common Issues with Realtek Audio Driver?

        -

        Despite its advantages, Realtek Audio Driver may also cause some problems on your PC. Some of the common issues that users face are:

      • Outdated, corrupt, or incompatible Realtek Audio Driver.
      • Conflict between Microsoft and Realtek Audio Drivers.
      • Audio service not running or responding.
      • Misconfigured audio settings or output device.
      • Disabled audio service or enhancements.

        To fix these issues, you need to update, reinstall, or troubleshoot your Realtek Audio Driver. We will show you how in the following sections.

        -


        -

        How to Download Realtek Audio Driver

        -

        The first step to install or update your Realtek Audio Driver is to download it from a reliable source. There are two ways to do this: from the official Realtek website or from the motherboard manufacturer's website.

        -

        How to Download from the Official Realtek Website

        -

        To download the Realtek Audio Driver from the official Realtek website, follow these steps:

      1. Go to the Realtek website and click on the Downloads tab.
      2. Select the High Definition Audio Codecs (Software) option from the list.
      3. Read and accept the license agreement and click on the I Accept button.
      4. Choose the appropriate driver for your Windows version and architecture (32-bit or 64-bit).
      5. Click on the Global link to download the driver file to your PC.

        How to Download from the Motherboard Manufacturer's Website

        -

        To download the Realtek Audio Driver from the motherboard manufacturer's website, follow these steps:

      1. Find out the model and brand of your motherboard. You can do this by checking the manual, the box, or the label on the motherboard itself. You can also use a third-party software like CPU-Z to get this information.
      2. Go to the official website of your motherboard manufacturer and look for the Support or Drivers section.
      3. Enter your motherboard model and select your Windows version and architecture (32-bit or 64-bit).
      4. Look for the Realtek Audio Driver in the list of available drivers and click on the Download button.
      5. Save the driver file to your PC.

        How to Install Realtek Audio Driver

        -

        After downloading the Realtek Audio Driver, you need to install it on your PC. There are two ways to do this: using the setup file or using the device manager.

        -

        How to Install Using the Setup File

        -

        To install the Realtek Audio Driver using the setup file, follow these steps:

      1. Navigate to the folder where you saved the driver file and double-click on it to launch the setup wizard.
      2. Follow the on-screen instructions and choose the installation options that suit your preferences.
      3. Wait for the installation process to complete and restart your PC if prompted.
      4. You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.

        How to Install Using the Device Manager

        -

        To install the Realtek Audio Driver using the device manager, follow these steps:

      1. Press Windows + X keys on your keyboard and select Device Manager from the menu.
      2. Expand the Sound, video and game controllers category and right-click on your audio device. Select Update driver.
      3. Select Browse my computer for driver software.
      4. Select Let me pick from a list of available drivers on my computer.
      5. Select Have Disk.
      6. Select Browse.
      7. Navigate to the folder where you saved the driver file and select it. Click on Open.
      8. Select OK.
      9. Select Next.
      10. Select Yes.
      11. Select Close to finish the installation process and restart your PC if prompted.
      12. You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.
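      On Windows 10 and 11 the same driver package can also be installed from the command line with the built-in pnputil tool, which may be quicker if you already know where the driver's .inf file is. This is a rough sketch rather than an official Realtek procedure: the .inf path is a placeholder, and the script has to be run from an elevated (administrator) prompt.

```python
import subprocess

INF_PATH = r"C:\Drivers\Realtek\RealtekHDAudio.inf"  # placeholder: path to the driver's .inf file

# pnputil ships with Windows 10/11: /add-driver stages the package in the
# driver store and /install installs it on any matching devices.
result = subprocess.run(["pnputil", "/add-driver", INF_PATH, "/install"],
                        capture_output=True, text=True)

print(result.stdout.strip())
if result.returncode != 0:
    print("pnputil reported an error:", result.stderr.strip())
```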

        How to Update Realtek Audio Driver

        -

        Updating your Realtek Audio Driver is important to keep it compatible with your Windows version and fix any bugs or errors. There are three ways to update your Realtek Audio Driver: using the device manager, using Windows update, or using a third-party software.

        -

        How to Update Using the Device Manager

        -

        To update the Realtek Audio Driver using the device manager, follow these steps:

      1. Press Windows + X keys on your keyboard and select Device Manager from the menu.
      2. Expand the Sound, video and game controllers category and right-click on your audio device. Select Update driver.
      3. Select Search automatically for updated driver software.
      4. Wait for Windows to search for and install the latest driver for your device.
      5. Restart your PC if prompted.

        How to Update Using Windows Update

        -

        To update the Realtek Audio Driver using Windows update, follow these steps:

      1. Press Windows + I keys on your keyboard to open the Settings app.
      2. Select Update & Security.
      3. Select Windows Update.
      4. Select Check for updates.
      5. If there are any updates available for your Realtek Audio Driver, they will be downloaded and installed automatically.
      6. Restart your PC if prompted.

        How to Update Using a Third-Party Software

        -

        To update the Realtek Audio Driver using a third-party software, you need to download and install a reliable driver updater tool that can scan your PC for outdated drivers and update them automatically. Some of the popular driver updater tools are Driver Booster, Driver Easy, and Driver Genius. To use them, follow these steps:

      1. Download and install the driver updater tool of your choice from its official website.
      2. Launch the tool and click on the Scan button to scan your PC for outdated drivers.
      3. If there are any updates available for your Realtek Audio Driver, they will be listed in the results. Click on the Update button next to the driver name to update it.
      4. Wait for the tool to download and install the latest driver for your device.
      5. Restart your PC if prompted.

        How to Troubleshoot Realtek Audio Driver

        -

        If you still have problems with your Realtek Audio Driver after installing or updating it, you may need to troubleshoot it. Here are some common troubleshooting steps that you can try:

        -

        How to Check the Device and Cable Connections

        -

        Sometimes, the problem may be caused by a loose or faulty connection between your audio device and your PC. To check this, follow these steps:

      1. Make sure that your audio device is plugged into the correct port on your PC or motherboard. For example, if you have a speaker, it should be plugged into the green port. If you have a microphone, it should be plugged into the pink port.
      2. If you are using a USB audio device, make sure that it is plugged into a working USB port on your PC or motherboard.
      3. If you are using a wireless audio device, make sure that it is paired with your PC and has enough battery power.
      4. If you are using an external audio card, make sure that it is properly installed on your PC or motherboard and has enough power supply.
      5. If possible, try using another audio device or cable to see if the problem persists.

        How to Check the Audio Settings and Output Device

        -

        Sometimes, the problem may be caused by incorrect or incompatible audio settings or output device. To check this, follow these steps:

      1. Right-click on the speaker icon in your system tray and select Sounds.
      2. Select the Playback tab and make sure that your audio device is set as the default device. If not, right-click on it and select Set as Default Device.
      3. Select your audio device and click on the Properties button.
      4. Select the Advanced tab and make sure that the default format matches the sample rate and bit depth of your audio device. If not, change it to a compatible format.
      5. Select the Enhancements tab and make sure that any enhancements that may interfere with your sound quality are disabled. For example, you may want to disable Loudness Equalization, Noise Suppression, or Acoustic Echo Cancellation.
      6. Select the Levels tab and make sure that the volume and balance of your audio device are adjusted properly.
      7. Select the Spatial sound tab and make sure that the spatial sound format is set to Off or a compatible format for your audio device.
      8. Click on OK to save your changes and close the window.
      9. Test your sound by playing a sample sound or a music file.
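      If you want a quick way to confirm that Windows can see the Realtek output at all, you can also list the available playback and recording devices from a script. The sketch below uses the third-party sounddevice package, which is an assumption on our part (install it with pip install sounddevice); it is not required for any of the steps above.

```python
import sounddevice as sd  # pip install sounddevice

# query_devices() lists every device the host audio APIs expose,
# including the Realtek endpoints if the driver is loaded correctly.
for index, device in enumerate(sd.query_devices()):
    kind = "output" if device["max_output_channels"] > 0 else "input"
    print(f"{index:2d} [{kind:6s}] {device['name']}")
```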

        How to Restart the Audio Service and Reinstall the Driver

        -

        Sometimes, the problem may be caused by a faulty or corrupted audio service or driver. To fix this, you need to restart the audio service and reinstall the driver. To do this, follow these steps:

      1. Press Windows + R keys on your keyboard to open the Run dialog box.
      2. Type services.msc and press Enter.
      3. Look for the Windows Audio service and right-click on it. Select Restart.
      4. If the service is not running, right-click on it and select Start.
      5. If the service is not set to automatic, right-click on it and select Properties. Change the startup type to Automatic.
      6. Press Windows + X keys on your keyboard and select Device Manager.
      7. Expand the Sound, video and game controllers category and right-click on your audio device. Select Uninstall device.
      8. Select Delete the driver software for this device and click on Uninstall.
      9. Restart your PC.
      10. Your PC will automatically detect and install the Realtek Audio Driver for your device.
      11. You should see a Realtek HD Audio Manager icon in your system tray or Start menu. You can use it to access and configure your audio settings.
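      If you prefer not to open services.msc, the Windows Audio service can also be restarted from the command line. The sketch below simply wraps the built-in net stop and net start commands; Audiosrv is the service name behind "Windows Audio", and both commands need an elevated (administrator) prompt.

```python
import subprocess

# "Audiosrv" is the service name shown as "Windows Audio" in services.msc.
# Both commands require an elevated (administrator) prompt.
for cmd in (["net", "stop", "Audiosrv"], ["net", "start", "Audiosrv"]):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=False)
```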

        Conclusion

        -

        In this article, we have shown you how to download, install, and update Realtek Audio Drivers on Windows 11/10, as well as how to troubleshoot some common issues with them. We hope that this guide has helped you to improve your sound quality and experience on your PC.

        -

        To summarize, here are some tips and recommendations for using Realtek Audio Drivers:

      • Always download Realtek Audio Drivers from a reliable source, such as the official Realtek website or the motherboard manufacturer's website.
      • Always update Realtek Audio Drivers regularly to keep them compatible with your Windows version and fix any bugs or errors.
      • If you encounter any problems with Realtek Audio Drivers, try checking the device and cable connections, checking the audio settings and output device, restarting the audio service, or reinstalling the driver.
      • If you need more help or support with Realtek Audio Drivers, you can visit their official website or contact their customer service.

        Frequently Asked Questions (FAQs)

        -

        What is Realtek HD Audio Manager and how to use it?

        -

        Realtek HD Audio Manager

        -

        Realtek HD Audio Manager is a software program that comes with Realtek Audio Drivers. It allows you to access and configure your audio settings, such as volume, balance, equalizer, effects, etc. You can also use it to customize your sound tools and configuration options for your convenience.

        To open Realtek HD Audio Manager, you can either click on the Realtek HD Audio Manager icon in your system tray or Start menu, or go to the Control Panel and select Realtek HD Audio Manager. You will see a user interface with various tabs and options. You can explore them and adjust them according to your preferences.

        -

        How to uninstall Realtek Audio Driver?

        -

        Uninstall Realtek Audio Driver

        -

        If you want to uninstall Realtek Audio Driver from your PC, you can do so by following these steps:

      1. Press Windows + X keys on your keyboard and select Apps and Features.
      2. Look for the Realtek Audio Driver in the list of installed programs and click on it.
      3. Select Uninstall.
      4. Follow the on-screen instructions and confirm your choice.
      5. Restart your PC if prompted.

        Note that uninstalling Realtek Audio Driver may cause your audio device to stop working or work improperly. You may need to install another compatible driver for your audio device.

        -

        How to fix Realtek Audio Driver not working or crashing?

        -

        Fix Realtek Audio Driver not working or crashing

        -

        If your Realtek Audio Driver is not working or crashing, you may try the following solutions:

      • Update your Realtek Audio Driver to the latest version.
      • Run the Windows troubleshooter for audio problems.
      • Disable any antivirus or firewall software that may interfere with your audio driver.
      • Perform a clean boot of your PC and check if the problem persists.
      • Restore your PC to a previous point when the audio driver was working fine.

        How to enable or disable Realtek audio enhancements?

        -

        Enable or disable Realtek audio enhancements

        -

        Realtek audio enhancements are features that improve the sound quality and effects of your audio device. Some of them are:

      • Bass Boost: Enhances the low-frequency sound of your speakers or headphones.
      • Virtual Surround: Creates a surround sound effect for your stereo speakers or headphones.
      • Loudness Equalization: Balances the volume level of different sounds and reduces sudden volume changes.
      • Environmental Effects: Simulates different sound environments, such as room, hall, stadium, etc.

        To enable or disable Realtek audio enhancements, follow these steps:

      1. Right-click on the speaker icon in your system tray and select Sounds.
      2. Select the Playback tab and select your audio device.
      3. Select the Properties button.
      4. Select the Enhancements tab.
      5. Select or deselect the enhancements that you want to enable or disable.
      6. Select OK to save your changes and close the window.

        How to contact Realtek support for help?

        -

        Contact Realtek support for help

        -

        If you need more help or support with Realtek Audio Drivers, you can contact Realtek support by visiting their official website and selecting the Contact Us option. You can also send them an email at techsupport@realtek.com.tw. Alternatively, you can visit their online forum and post your questions or issues there. You may find some useful answers from other users or experts.

        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md b/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md deleted file mode 100644 index 73dce397ee73c45763fd26803d704d93fc6609e8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Garena Mod Menu Apk for Free Fire MAX and Enjoy Premium Features.md +++ /dev/null @@ -1,95 +0,0 @@ -
        -

        Garena Mod Menu Apk: What Is It and How to Use It?

        -

        If you are a fan of Garena Free Fire, a popular survival shooter game for mobile devices, you might have heard of Garena mod menu apk. This is a modified version of the original game apk that allows users to access various cheats and hacks. In this article, we will explain what Garena mod menu apk is, what features it offers, how to install it, and what risks it entails.

        -

        garena mod menu apk


        Download File --->>> https://jinyurl.com/2uNTES



        -

        What is Garena Free Fire?

        -

        A popular survival shooter game for mobile devices

        -

      Garena Free Fire is a world-famous survival shooter game available on mobile. Each 10-minute game places you on a remote island where you are pitted against 49 other players, all seeking survival. Players freely choose their starting point with their parachute and aim to stay in the safe zone for as long as possible. Drive vehicles to explore the vast map, hide in the wild, or become invisible by proning under grass or rifts. Ambush, snipe, survive; there is only one goal: to survive and answer the call of duty.

        -

        Different game modes and features

        -

      Free Fire offers a variety of exciting game modes, and you can play with all Free Fire players via exclusive Firelink technology. You can enjoy fast-paced 4v4 Clash Squad matches, classic 50-player Battle Royale matches, or special modes such as Rampage, Bomb Squad, or Zombie Invasion. You can also customize your character with hundreds of outfits, accessories, weapons, vehicles, and pets, and you can create squads of up to 4 players and communicate with your team using in-game voice chat.

        -

        What is a mod menu apk?

        -

        A modified version of the original game apk

        -

        A mod menu apk is a changed version of the game’s original apk that can be used to get free cheats. With this mod menu apk, you don’t need any other programs to load cheats into the game. Perfect for those who do not know how to hack Free Fire. The mod menu apk has a user-friendly interface that allows you to toggle on and off different cheats with a simple tap.

        -


        -

        Allows users to access various cheats and hacks

        -

        The mod menu apk offers tons of cheats for its users. Some of the most popular ones are unlimited diamonds and coins, wallhack, aimbot, ESP hack, flying hack, unlock characters, and skins hack. These cheats can give you an edge over your enemies and help you win more matches. However, they also come with some risks that you should be aware of before using them.

        -

        What are the features of Garena mod menu apk?

        -

        Unlimited diamonds and coins

        -

        Diamonds and coins are the in-game currency in Free Fire and without them you can’t even purchase a skin in the game. With the mod menu apk, you can get unlimited diamonds and coins for free. You can use them to buy anything you want in the game, such as outfits, weapons, vehicles, pets, or elite passes.

        -

        Wallhack

        Wallhack is a cheat that allows you to see through walls and other obstacles. You can spot your enemies easily and shoot them before they see you. You can also avoid ambushes and traps by knowing where your enemies are hiding. Wallhack can give you a huge advantage in Free Fire, especially in close-quarters combat.

        -

        Aimbot

        -

        Aimbot is a cheat that automatically aims and shoots your enemies for you. You don’t need to worry about your accuracy or reaction time. Just point your weapon in the general direction of your enemy and let the aimbot do the rest. You can kill your enemies with one shot and win every firefight. Aimbot is one of the most powerful cheats in Free Fire, but also one of the most risky ones.

        -

        ESP hack

        -

        ESP hack is a cheat that shows you extra information about your enemies on your screen. You can see their name, health, distance, weapon, and location. You can also see their footsteps, items, and vehicles. ESP hack can help you plan your strategy and avoid unnecessary fights. ESP hack can make you more aware of your surroundings and improve your survival chances.

        -

        Flying hack

        -

        Flying hack is a cheat that allows you to fly in the air like Superman. You can move faster and reach places that are normally inaccessible. You can also surprise your enemies from above and escape from danger easily. Flying hack can make you more mobile and unpredictable in Free Fire, but also more noticeable and vulnerable.

        -

        Unlock characters

        -

        Free Fire has a roster of over 40 characters, each with their own unique skills and abilities. However, not all of them are available for free. Some of them require diamonds or coins to unlock. With the mod menu apk, you can unlock all the characters for free and use them in the game. You can experiment with different combinations of characters and skills and find the ones that suit your playstyle.

        -

        Skins hack

        -

        Skins are cosmetic items that change the appearance of your character, weapons, vehicles, or pets. They have no effect on the gameplay, but they can make you look cooler and more stylish. Free Fire has a huge collection of skins, but most of them are expensive or rare. With the mod menu apk, you can get all the skins for free and use them in the game. You can customize your character and show off your personality with different skins.

        -

        How to install Garena mod menu apk?

        -

        Download the mod menu apk from a trusted source

        -

        The first step to install Garena mod menu apk is to download it from a trusted source. There are many websites that claim to offer the mod menu apk, but not all of them are safe or reliable. Some of them may contain malware or viruses that can harm your device or steal your data. To avoid this, you should only download the mod menu apk from a reputable source that has positive reviews and feedback from other users. You can also scan the mod menu apk file with an antivirus program before installing it.

        -

        Enable unknown sources in your device settings

        -

        The second step to install Garena mod menu apk is to enable unknown sources in your device settings. This is because the mod menu apk is not from the official Google Play Store or App Store, so your device may not allow you to install it by default. To enable unknown sources, you need to go to your device settings, then security or privacy, then toggle on the option that says "allow installation of apps from unknown sources" or something similar. This will allow you to install the mod menu apk without any problems.

        -

        Install the mod menu apk and launch the game

        -

        The third step to install Garena mod menu apk is to install it and launch the game. To install it, you need to locate the mod menu apk file on your device storage, then tap on it and follow the instructions on the screen. It may take a few minutes for the installation to complete. Once it is done, you can launch the game by tapping on its icon on your home screen or app drawer. You will see a mod menu icon on the top left corner of the game screen. Tap on it to access the cheats and hacks.

        -

        What are the risks of using Garena mod menu apk?

        -

      Possible detection and ban by the game developers

        -

        One of the biggest risks of using Garena mod menu apk is that you may get detected and banned by the game developers. The game developers have a strict anti-cheat system that monitors the game activity and detects any abnormal behavior. If you are caught using the mod menu apk, you may face consequences such as account suspension, permanent ban, or legal action. You may also lose your progress, achievements, and rewards in the game. Therefore, you should use the mod menu apk at your own risk and discretion.

        -

        Malware and viruses from unverified sources

        -

        Another risk of using Garena mod menu apk is that you may get malware and viruses from unverified sources. As mentioned earlier, not all websites that offer the mod menu apk are safe or reliable. Some of them may contain malicious code that can infect your device or steal your data. You may also get unwanted ads, pop-ups, or redirects that can annoy you or compromise your privacy. To avoid this, you should only download the mod menu apk from a trusted source and scan it with an antivirus program before installing it.

        -

        Loss of original account and data

        -

        A third risk of using Garena mod menu apk is that you may lose your original account and data. The mod menu apk is not compatible with the official version of the game, so you cannot use your existing account or data with it. You have to create a new account and start from scratch. You also cannot play with other players who are using the official version of the game, as they are on different servers. You may also face compatibility issues or errors while playing the game with the mod menu apk. Therefore, you should backup your original account and data before using the mod menu apk.

        -

        Conclusion

        -

        Garena mod menu apk is a modified version of the original game apk that allows users to access various cheats and hacks in Free Fire. It offers features such as unlimited diamonds and coins, wallhack, aimbot, ESP hack, flying hack, unlock characters, and skins hack. However, it also comes with some risks such as possible detection and ban by the game developers, malware and viruses from unverified sources, and loss of original account and data. Therefore, you should use it at your own risk and discretion.

        -

        FAQs

      Q: Is Garena mod menu apk legal?
      A: No, Garena mod menu apk is not legal. It violates the terms of service and policies of the game developers. It also infringes on their intellectual property rights. Using it may result in legal action from the game developers.

      Q: Is Garena mod menu apk safe?
      A: Not necessarily. Garena mod menu apk may contain malware or viruses that can harm your device or steal your data. It may also get detected and banned by the game developers. It may also cause compatibility issues or errors while playing the game. Therefore, you should only download it from a trusted source and scan it with an antivirus program before installing it.

      Q: Can I use Garena mod menu apk with my existing account?
      A: No, you cannot use Garena mod menu apk with your existing account. The mod menu apk is not compatible with the official version of the game, so you have to create a new account and start from scratch. You also cannot play with other players who are using the official version of the game, as they are on different servers.

      Q: How can I update Garena mod menu apk?
      A: You can update Garena mod menu apk by downloading the latest version from a trusted source and installing it over the previous version. However, you should be careful as some updates may not work with the mod menu apk or may increase the chances of detection and ban by the game developers.

      Q: Are there any alternatives to Garena mod menu apk?
      A: Yes, there are some alternatives to Garena mod menu apk such as scripts, injectors, or tools that can also provide cheats and hacks for Free Fire. However, they also have similar risks and drawbacks as Garena mod menu apk.

        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md b/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md deleted file mode 100644 index 2ce24954456c7b94f39f027352c8c12078e516f8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Ministrike 3.7 APK for Android - Enjoy the Best Counter-Strike Tribute.md +++ /dev/null @@ -1,76 +0,0 @@ - -

        Download MiniStrike 3.7: A Fun and Fast-Paced Shooter Game for Android

        -

        If you are looking for a fun and fast-paced shooter game for your Android device, you should definitely check out MiniStrike. MiniStrike is a tribute to the popular Counter-Strike game, but with a cute and pixelated style. You can play online with other players, or offline with bots, in different modes and maps. You can also customize your character and your weapons with various skins and items. In this article, we will show you how to download MiniStrike 3.7, the latest version of the game, which has bug fixes and improvements, and no ads or in-app purchases.

        -

        download ministrike 3.7


        Download Zip » https://jinyurl.com/2uNLV9



        -

        What is MiniStrike?

        -

        MiniStrike is a shooter game developed by Malo The Toad, an independent game developer from France. The game was released in 2016 and has been updated regularly since then. The game is inspired by Counter-Strike, one of the most popular and influential shooter games of all time.

        -

        A tribute to Counter-Strike

        -

        MiniStrike pays homage to Counter-Strike by recreating some of its iconic features, such as the gameplay mechanics, the weapons, the sounds, and the maps. You can choose between two teams, terrorists or counter-terrorists, and complete different objectives, such as planting or defusing bombs, rescuing hostages, or eliminating the enemy team. You can also buy weapons and equipment at the beginning of each round, using the money you earn from killing enemies or completing objectives.

        -

        A multiplayer game with different modes and maps

        -

        MiniStrike is a multiplayer game that allows you to play online with other players from around the world, or offline with bots. You can join or create rooms with different settings, such as the number of players, the game mode, and the map. The game has four modes: deathmatch, team deathmatch, bomb defusal, and hostage rescue. The game also has 15 maps, some of which are based on Counter-Strike maps, such as de_dust2, cs_office, or de_nuke.

        -

        A customizable game with skins and weapons

        -

        MiniStrike is a customizable game that lets you personalize your character and your weapons with various skins and items. You can unlock skins by playing the game or by watching ads. You can also buy items with coins that you earn from playing or from daily rewards. You can equip different items for your head, body, hands, feet, and backpack. You can also change the skin of your weapons, such as pistols, rifles, shotguns, snipers, or knives.

        -

        Why download MiniStrike 3.7?

        -

        MiniStrike 3.7 is the latest version of the game that was released on June 14th, 2021. This version has several bug fixes and improvements that make the game more stable and enjoyable. Here are some of the reasons why you should download MiniStrike 3.7:

        -

        The latest version with bug fixes and improvements

        -

        MiniStrike 3.7 has fixed some of the issues that were reported by players in previous versions, such as crashes, glitches, lagging, or freezing. The developer has also improved some of the features of the game, such as the graphics quality, the sound effects, the user interface, or the gameplay balance. The developer has also added some new content to the game, such as new skins, new weapons, and new maps.

        -

        How to download ministrike 3.7 on android
        -Download ministrike 3.7 apk for free
        -Ministrike 3.7 latest version download
        -Download ministrike 3.7 mod apk with unlimited money
        -Ministrike 3.7 gameplay and review
        -Download ministrike 3.7 for pc using emulator
        -Ministrike 3.7 tips and tricks
        -Download ministrike 3.7 offline installer
        -Ministrike 3.7 update and patch notes
        -Download ministrike 3.7 from apkpure.com[^1^]
        -Ministrike 3.7 cheats and hacks
        -Download ministrike 3.7 for ios devices
        -Ministrike 3.7 best weapons and maps
        -Download ministrike 3.7 from google play store
        -Ministrike 3.7 system requirements and compatibility
        -Download ministrike 3.7 for windows phone
        -Ministrike 3.7 multiplayer mode and servers
        -Download ministrike 3.7 from amazon appstore
        -Ministrike 3.7 ratings and feedback
        -Download ministrike 3.7 for mac os x
        -Ministrike 3.7 skins and customization
        -Download ministrike 3.7 from uptodown.com
        -Ministrike 3.7 bugs and issues
        -Download ministrike 3.7 from softonic.com
        -Ministrike 3.7 achievements and leaderboards
        -Download ministrike 3.7 from apkmonk.com
        -Ministrike 3.7 clans and tournaments
        -Download ministrike 3.7 from apk-dl.com
        -Ministrike 3.7 chat and voice commands
        -Download ministrike 3.7 from apkmirror.com

        -

        The best way to enjoy the game without ads or in-app purchases

        -

        MiniStrike 3.7 is the best way to enjoy the game without any ads or in-app purchases. The game is completely free and does not require any registration or login. You can play the game without any interruptions or distractions from ads or pop-ups. You can also access all the features and content of the game without spending any real money. You can unlock skins and items by playing the game or by watching ads voluntarily. You can also earn coins by playing the game or by claiming daily rewards.

        -

        The easiest way to install the game on your device

        -

        MiniStrike 3.7 is the easiest way to install the game on your Android device. You do not need to download the game from the Google Play Store, which may not be compatible with your device or may not have the latest version of the game. You can download the game from the APKPure website, which is a trusted and reliable source of APK files for Android apps and games. You can install the game on your device in a few simple steps, which we will explain in the next section.

        -

        How to download MiniStrike 3.7?

        -

        Downloading MiniStrike 3.7 is very easy and fast. You just need to follow these steps:

        -

        Step 1: Go to the APKPure website

        -

        The first step is to go to the APKPure website, which is https://apkpure.com/ministrike/com.ministrike. This is where you can find the latest version of MiniStrike 3.7, as well as other versions of the game. You can also read more information about the game, such as its description, features, screenshots, reviews, and ratings.

        -

        Step 2: Click on the download button

        -

        The second step is to click on the download button, which is located at the top right corner of the website. This will start downloading the APK file of MiniStrike 3.7 on your device. The file size is about 35 MB, so it should not take too long to download.

        -

        Step 3: Allow unknown sources on your device

        -

        The third step is to allow unknown sources on your device, which means that you can install apps and games that are not from the Google Play Store. To do this, you need to go to your device settings, then security, then enable unknown sources. This will allow you to install MiniStrike 3.7 on your device.

        -

        Step 4: Install the APK file and launch the game

        -

        The fourth and final step is to install the APK file and launch the game. To do this, you need to locate the downloaded file on your device, then tap on it to start installing it. Once the installation is complete, you can tap on the open button to launch the game. Alternatively, you can find the game icon on your home screen or app drawer and tap on it to launch the game.
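        If you prefer to sideload the file from a computer instead of tapping it on the phone, the rough Python sketch below drives the same install through the Android Debug Bridge (adb). It assumes adb is installed and USB debugging is enabled on your device, and the file name and download location are only placeholders for wherever you actually saved the APK.

```python
import subprocess
from pathlib import Path

# Placeholder path: point this at wherever you saved the downloaded APK.
APK_PATH = Path.home() / "Downloads" / "ministrike-3.7.apk"

def sideload(apk: Path) -> None:
    """Install an APK on a USB-connected Android device via adb."""
    if not apk.is_file():
        raise FileNotFoundError(f"APK not found: {apk}")
    # -r keeps the app's data if an older version is already installed.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```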

        -

        Conclusion

        -

        MiniStrike 3.7 is a fun and fast-paced shooter game for Android devices that pays tribute to Counter-Strike. You can play online with other players or offline with bots in different modes and maps. You can also customize your character and your weapons with various skins and items. MiniStrike 3.7 is the latest version of the game that has bug fixes and improvements, and no ads or in-app purchases. You can download MiniStrike 3.7 from the APKPure website in a few easy steps.

        -

        FAQs

        -

        Here are some of the frequently asked questions about MiniStrike 3.7:

        -

        Q: Is MiniStrike 3.7 safe to download and install?

        -

        A: Yes, MiniStrike 3.7 is safe to download and install from the APKPure website, which is a trusted and reliable source of APK files for Android apps and games. The website scans all the files for viruses and malware before uploading them.

        -

        Q: Is MiniStrike 3.7 compatible with my device?

        -

        A: MiniStrike 3.7 is compatible with most Android devices that have Android 4.1 or higher as their operating system. However, some devices may not be able to run the game smoothly due to their hardware specifications or performance issues.

        -

        Q: How can I update MiniStrike 3.7?

        -

        A: You can update MiniStrike 3.7 by downloading and installing the latest version of the game from the APKPure website, which will always have the newest version of the game. You can also enable the auto-update option on the website, which will notify you when a new version of the game is available and download it automatically.

        -

        Q: How can I contact the developer of MiniStrike 3.7?

        -

        A: You can contact the developer of MiniStrike 3.7 by sending an email to ministrikegame@gmail.com. You can also follow the developer on Twitter at @MaloTheToad, where he posts updates and news about the game.

        -

        Q: How can I support the developer of MiniStrike 3.7?

        -

        A: You can support the developer of MiniStrike 3.7 by rating and reviewing the game on the APKPure website, or by sharing the game with your friends and family. You can also donate to the developer via PayPal at https://www.paypal.me/malothetoad, or by watching ads voluntarily in the game.

        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md b/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md deleted file mode 100644 index 6f088fc4b7fcda4677cb5c5b514da35795b2e308..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download NBA 2K20 APK 98.0.2 for Android - Experience the Classic 2K Action on the Go.md +++ /dev/null @@ -1,281 +0,0 @@ -
        -

        NBA 2K20 APK: The Ultimate Basketball Game for Android

        -

        If you are a fan of basketball and want to experience the thrill of playing on your mobile device, then you should try NBA 2K20 APK. This is the latest version of the popular NBA 2K series, which is developed by 2K, Inc. and offers the most realistic and immersive basketball simulation ever. In this article, we will tell you everything you need to know about NBA 2K20 APK, including its features, how to download and install it, how to play it, and its pros and cons.

        -

        What is NBA 2K20 APK?

        -

        NBA 2K20 APK is an Android game that lets you play as your favorite NBA players and teams in various game modes and challenges. You can create your own custom player, join a team, compete in tournaments, or just enjoy a casual game with friends. You can also explore the NBA culture and lifestyle, with exclusive content from celebrities, influencers, and legends. NBA 2K20 APK is the ultimate basketball game for Android, with stunning graphics, realistic physics, smooth controls, and engaging gameplay.

        -

        98.0.2 nba 2k20 apk


        Download Zip >>>>> https://jinyurl.com/2uNU49



        -

        Features of NBA 2K20 APK

        -

        NBA 2K20 APK has many features that make it stand out from other basketball games. Here are some of them:

        -

        - Realistic graphics and gameplay

        -

        NBA 2K20 APK uses advanced technology to deliver lifelike graphics and animations, with detailed player models, facial expressions, movements, and reactions. The game also features realistic sound effects, commentary, crowd noise, and music. The gameplay is smooth and responsive, with intuitive controls and mechanics. You can feel the impact of every shot, pass, dribble, steal, block, and dunk.

        -

        - Multiple game modes and challenges

        -

        NBA 2K20 APK offers a variety of game modes and challenges to suit your preferences and skills. You can play in the following modes:

        -
          -
        • MyCAREER: This is the main mode where you create your own custom player and follow his journey from rookie to legend. You can customize your player's appearance, attributes, skills, style, and equipment. You can also interact with other players, coaches, agents, fans, and media. You can earn coins, rewards, badges, and endorsements as you progress.
        • -
        • MyTEAM: This is the mode where you build your own dream team of NBA players from past and present. You can collect cards, trade players, upgrade your roster, and compete in various online and offline modes. You can also participate in special events, challenges, tournaments, and seasons.
        • -
        • Blacktop: This is the mode where you play street basketball in various locations around the world. You can choose from different formats, such as 1v1, 2v2, 3v3, or 5v5. You can also customize the rules, time limit, score limit, difficulty level, and court size.
        • -
        • Quick Game: This is the mode where you play a single game with any NBA team of your choice. You can choose from different settings, such as quarter length, difficulty level, camera angle, and uniforms.
        • -
        • Play Now Online: This is the mode where you play online against other players from around the world. You can choose from different tiers, leagues, and rankings. You can also chat with your opponents and view their stats and records.
        • -
        • 2KTV: This is the mode where you watch the official NBA 2K TV show, hosted by Alexis Morgan and Chris Manning. You can learn tips and tricks, watch interviews, get updates, and participate in interactive quizzes and polls.
        • -
        -

        - Customization and personalization options

        -

        NBA 2K20 APK gives you the freedom to customize and personalize your game experience. You can change the settings, such as the language, subtitles, controls, camera, audio, and graphics. You can also edit the rosters, ratings, contracts, injuries, and transactions of any NBA team. You can also create your own custom teams, players, jerseys, courts, logos, and arenas.

        -

        - Online multiplayer and social features

        -

        NBA 2K20 APK allows you to play online with or against other players from around the world. You can join or create a crew, chat with your friends, send messages, invite players, join parties, and voice chat. You can also share your game highlights, screenshots, videos, and achievements on social media platforms, such as Facebook, Twitter, Instagram, and YouTube.

        -

        How to download and install NBA 2K20 APK?

        -

        If you want to download and install NBA 2K20 APK on your Android device, you need to follow these steps:

        -

        - Requirements and compatibility

        -

        Before you download and install NBA 2K20 APK, you need to make sure that your device meets the following requirements:

        -
          -
        • Your device must have Android 4.3 or higher operating system.
        • -
        • Your device must have at least 3 GB of free storage space.
        • -
        • Your device must have at least 2 GB of RAM.
        • -
        • Your device must have a stable internet connection.
        • -
        • Your device must support OpenGL ES 3.0 or higher.
        • -
        -

        - Steps to download and install NBA 2K20 APK

        -

        After you check the requirements and compatibility of your device, you can proceed to download and install NBA 2K20 APK by following these steps:

        -

        98.0.2 nba 2k20 apk download free
        -98.0.2 nba 2k20 apk mod unlimited money
        -98.0.2 nba 2k20 apk obb data
        -98.0.2 nba 2k20 apk offline
        -98.0.2 nba 2k20 apk latest version
        -98.0.2 nba 2k20 apk android
        -98.0.2 nba 2k20 apk full game
        -98.0.2 nba 2k20 apk update
        -98.0.2 nba 2k20 apk no verification
        -98.0.2 nba 2k20 apk revdl
        -98.0.2 nba 2k20 apk rexdl
        -98.0.2 nba 2k20 apk mirror
        -98.0.2 nba 2k20 apk pure
        -98.0.2 nba 2k20 apk hack
        -98.0.2 nba 2k20 apk cracked
        -98.0.2 nba 2k20 apk andropalace
        -98.0.2 nba 2k20 apk highly compressed
        -98.0.2 nba 2k20 apk for pc
        -98.0.2 nba 2k20 apk gameplay
        -98.0.2 nba 2k20 apk features
        -98.0.2 nba 2k20 apk requirements
        -98.0.2 nba 2k20 apk size
        -98.0.2 nba 2k20 apk installation guide
        -98.0.2 nba 2k20 apk best settings
        -98.0.2 nba 2k20 apk cheats
        -98.0.2 nba 2k20 apk tips and tricks
        -98.0.2 nba 2k20 apk review
        -98.0.2 nba 2k20 apk ratings
        -98.0.2 nba 2k20 apk screenshots
        -98.0.2 nba 2k20 apk video
        -how to download and install the latest version of the NBA game on your Android device using the APK file[^1^]
        -how to play NBA basketball game with realistic graphics and smooth controls on your phone or tablet using the APK file[^1^]
        -how to enjoy the new features and modes of the NBA simulation game with the latest update of the APK file[^1^]
        -how to get unlimited VC and MT coins in the NBA sports game with the modded version of the APK file[^1^]
        -how to fix common errors and issues of the NBA mobile game with the patched version of the APK file[^1^]
        -how to transfer your progress and data from the previous versions of the NBA game to the new one using the OBB file[^1^]
        -how to play NBA game offline without internet connection using the APK file[^1^]
        -how to customize your players and teams in the NBA game with the APK file[^1^]
        -how to unlock all the premium features and items in the NBA game with the APK file[^1^]
        -how to compete with other players online in the NBA game with the APK file[^1^]

        -
          -
        1. Go to the official website of NBA 2K20 APK (https://www.nba2k.com/android) and click on the download button.
        2. -
        3. Wait for the download to finish and locate the NBA 2K20 APK file on your device (a scripted way to handle the download step is sketched after this list).
        4. -
        5. Tap on the NBA 2K20 APK file and allow the installation from unknown sources if prompted.
        6. -
        7. Wait for the installation to complete and launch the game.
        8. -
        9. Enjoy playing NBA 2K20 APK on your Android device.
        10. -
        -
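        If you would rather script the download step, the minimal Python sketch below streams a file to disk and prints its SHA-256 checksum, which you can compare against a checksum published by the download source when one is provided. The URL and file name are placeholders, not the real download link.

```python
import hashlib
import requests

# Placeholder values: substitute the real download URL and output file name.
URL = "https://example.com/nba2k20.apk"
OUT_FILE = "nba2k20.apk"

def download_and_hash(url: str, out_file: str) -> str:
    """Stream a file to disk and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(out_file, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                fh.write(chunk)
                digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(download_and_hash(URL, OUT_FILE))
```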

        How to play NBA 2K20 APK?

        -

        If you are new to NBA 2K20 APK or want to improve your skills, you might want to know some tips and tricks on how to play the game. Here are some of them:

        -

        - Tips and tricks for beginners

        -

        If you are a beginner in NBA 2K20 APK, you might want to follow these tips and tricks:

        -
          -
        • Start with the tutorial mode to learn the basic controls and mechanics of the game.
        • -
        • Play in the quick game mode to practice your skills and get familiar with the teams and players.
        • -
        • Adjust the difficulty level according to your preference and skill level. You can choose from rookie, pro, all-star, superstar, or hall of fame.
        • -
        • Use the auto-play feature if you want to let the game play for you. You can also switch between manual and auto-play anytime during the game.
        • -
        • Use the pause menu to access various options, such as settings, stats, replays, substitutions, and tips.
        • -
        • Use the virtual joystick and buttons to control your player and perform various actions, such as moving, shooting, passing, dribbling, stealing, blocking, and dunking.
        • -
        • Use the sprint button to run faster and the turbo button to boost your energy and performance.
        • -
        • Use the shot meter to time your shots and aim for the green zone for a perfect shot.
        • -
        • Use the pro stick to perform advanced moves and skills, such as spin moves, step backs, crossovers, fadeaways, and euro steps.
        • -
        • Use the icon pass to pass the ball to a specific teammate by tapping on his icon.
        • -
        • Use the pick and roll to set a screen for your teammate and create an open space for a shot or a drive.
        • -
        • Use the post up to back down your defender and create a favorable position for a shot or a pass.
        • -
        • Use the defensive assist to help you stay in front of your opponent and prevent him from scoring.
        • -
        • Use the swipe gestures to perform quick actions, such as stealing, blocking, rebounding, and switching players.
        • -
        -

        - Best players and teams to choose

        -

        If you want to have an edge over your opponents in NBA 2K20 APK, you might want to choose the best players and teams in the game. Here are some of them:

        | Player | Team | Overall Rating |
        | --- | --- | --- |
        | LeBron James | Los Angeles Lakers | 97 |
        | Kawhi Leonard | Los Angeles Clippers | 97 |
        | Giannis Antetokounmpo | Milwaukee Bucks | 96 |
        | James Harden | Houston Rockets | 96 |
        | Kevin Durant | Brooklyn Nets | 96 |
        | Stephen Curry | Golden State Warriors | 95 |
        | Anthony Davis | Los Angeles Lakers | 94 |
        | Luka Doncic | Dallas Mavericks | 94 |
        | Damian Lillard | Portland Trail Blazers | 94 |
        | Joel Embiid | Philadelphia 76ers | 91 |
        | Kyrie Irving | Brooklyn Nets | 91 |
        | Russell Westbrook | Houston Rockets | 90 |
        -

        As you can see, these players are the highest rated in the game and have the best skills, attributes, and abilities. They can dominate the game in any position and situation. You can also choose from the following teams, which are the best in the game based on their overall rating, roster, chemistry, and performance:

        | Team | Overall Rating |
        | --- | --- |
        | Los Angeles Lakers | 97 |
        | Los Angeles Clippers | 96 |
        | Milwaukee Bucks | 95 |
        | Brooklyn Nets | 94 |
        | Houston Rockets | 93 |
        | Golden State Warriors | 92 |
        | Philadelphia 76ers | 91 |
        | Dallas Mavericks | 90 |
        | Portland Trail Blazers | 89 |
        | Boston Celtics | 88 |
        | Toronto Raptors | 87 |
        -

        These teams have the best combination of star players, depth, balance, and chemistry. They can compete with any other team in the game and have a high chance of winning the championship.

        -

        - How to earn coins and rewards

        -

        If you want to unlock more features, items, and content in NBA 2K20 APK, you need to earn coins and rewards. Here are some ways to do that:

        -
          -
        • Complete the daily, weekly, and monthly objectives and missions. You can find them in the main menu or the game modes. They will give you coins, cards, packs, badges, and other rewards.
        • -
        • Play in the MyTEAM mode and participate in the events, challenges, tournaments, and seasons. You can earn coins, cards, packs, badges, and other rewards based on your performance and ranking.
        • -
        • Play in the MyCAREER mode and progress through your career. You can earn coins, rewards, badges, and endorsements based on your performance and popularity.
        • -
        • Watch the 2KTV show and answer the interactive quizzes and polls. You can earn coins, cards, packs, badges, and other rewards based on your answers.
        • -
        • Use the locker codes feature to redeem free codes that give you coins, cards, packs, badges, and other rewards. You can find the codes on the official NBA 2K social media accounts or websites.
        • -
        • Use the spin the wheel feature to spin a wheel that gives you a random reward. You can access this feature once a day in the MyTEAM or MyCAREER mode.
        • -
        -

        Pros and cons of NBA 2K20 APK

        -

        NBA 2K20 APK is not a perfect game and has its pros and cons. Here are some of them:

        -

        - Pros

        -
          -
        • The game has amazing graphics and sound effects that make it look and feel like a real NBA game.
        • -
        • The game has multiple game modes and challenges that offer a lot of variety and replay value.
        • -
        • The game has a lot of customization and personalization options that allow you to create your own unique player and team.
        • -
        • The game has online multiplayer and social features that allow you to play with or against other players from around the world.
        • -
        • The game has exclusive content from celebrities, influencers, and legends that enhance the NBA culture and lifestyle.
        • -
        -

        - Cons

        -
          -
        • The game requires a lot of storage space and RAM to run smoothly on your device.
        • -
        • The game requires a stable internet connection to access some of the features and content.
        • -
        • The game has some bugs and glitches that affect the gameplay and performance.
        • -
        • The game has some ads and in-app purchases that can be annoying or expensive.
        • -
        • The game can be difficult or frustrating for some players due to the high level of competition and skill required.
        Conclusion

          -

          NBA 2K20 APK is a great game for basketball fans and gamers who want to enjoy a realistic and immersive basketball simulation on their Android devices. The game has many features, modes, challenges, and content that make it fun and engaging. The game also has some drawbacks, such as the high requirements, the internet dependency, the bugs and glitches, the ads and in-app purchases, and the difficulty level. However, these cons do not outweigh the pros and do not ruin the overall experience of the game. NBA 2K20 APK is definitely worth downloading and playing if you love basketball and want to experience the ultimate basketball game for Android.

          -

          FAQs

          -

          Here are some frequently asked questions about NBA 2K20 APK:

          -

          - Is NBA 2K20 APK free?

          -

          Yes, NBA 2K20 APK is free to download and play. However, the game has some ads and in-app purchases that can enhance your game experience or unlock more features and content.

          -

          - Is NBA 2K20 APK safe?

          -

          Yes, NBA 2K20 APK is safe to download and install on your device. The game does not contain any viruses, malware, or spyware that can harm your device or data. However, you should always download the game from the official website or a trusted source to avoid any risks.

          -

          - Is NBA 2K20 APK offline?

          -

          No, NBA 2K20 APK is not offline. The game requires a stable internet connection to access some of the features and content, such as the online multiplayer, the social features, the updates, and the exclusive content. You can still play some of the modes and challenges offline, but you will miss out on some of the benefits and rewards of the online features.

          -

          - How to update NBA 2K20 APK?

          -

          To update NBA 2K20 APK, you need to follow these steps:

          -
            -
          1. Go to the official website of NBA 2K20 APK (https://www.nba2k.com/android) and check if there is a new version available.
          2. -
          3. If there is a new version available, click on the download button and wait for the download to finish.
          4. -
          5. Locate the NBA 2K20 APK file on your device and tap on it to install the new version.
          6. -
          7. Wait for the installation to complete and launch the game.
          8. -
          9. Enjoy playing the updated version of NBA 2K20 APK.
          10. -
          -

          - How to contact NBA 2K20 APK support?

          -

          If you have any issues, questions, feedback, or suggestions about NBA 2K20 APK, you can contact the NBA 2K20 APK support team by following these steps:

          -
            -
          1. Go to the main menu of the game and tap on the settings icon.
          2. -
          3. Tap on the help button and choose the option that suits your issue or question.
          4. -
          5. Fill out the form with your details and message and submit it.
          6. -
          7. Wait for a response from the NBA 2K20 APK support team.
          8. -

          \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md b/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md deleted file mode 100644 index 9b5f9485dd845a46c87c861f48b23c1eeaf791b8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Install Cars for BeamNG.drive A Step-by-Step Tutorial.md +++ /dev/null @@ -1,127 +0,0 @@ - -

          How to Download Cars for BeamNG.drive: A Complete Guide

          -

          If you are a fan of realistic driving games, you have probably heard of BeamNG.drive, a dynamic soft-body physics vehicle simulator that can do just about anything. Whether you want to crash cars, race them, or customize them, BeamNG.drive offers you a wide range of possibilities and options. But did you know that you can also download cars for BeamNG.drive from various sources and add them to your game? In this article, we will show you how to download cars for BeamNG.drive, why you should do it, and what tips and tricks you should know before installing them.

          -

          beamng download cars


          Download Zip: https://jinyurl.com/2uNU0i



          -

          What is BeamNG.drive?

          -

          Before we get into how to download cars for BeamNG.drive, let's first understand what this game is all about. BeamNG.drive is a game that was released in 2015 as an early access title on Steam, and it has been constantly updated and improved ever since. It is developed by BeamNG, a small team of passionate programmers and artists who have created their own physics engine from scratch. The game has three main features that make it stand out from other driving games:

          -

          A realistic driving simulator with soft-body physics

          -

          The core of BeamNG.drive is its physics engine, which simulates every component of a vehicle in real time using nodes (mass points) and beams (springs). This means that every crash, collision, or deformation is calculated realistically and accurately, resulting in true-to-life behavior. You can see your car crumple, bend, break, or explode depending on how you drive it. You can also tweak every aspect of your car's performance, such as wheels, suspension, engines, brakes, steering, gears, etc. The game also features realistic sounds, graphics, lighting, weather, and damage effects.

          -

          A sandbox game with dozens of customizable vehicles and environments

          -

          BeamNG.drive offers you dozens of refined, totally customizable vehicles for you to experiment with. Whether it's a compact car or a massive truck, you can tweak away at all the moving parts to create just about any driving experience you want. You can also choose from 12 sprawling open-world environments that range from tropical jungles to urban highways. Each environment has its own distinct terrain to explore.

          A modding-friendly game with a vibrant community

          -

          One of the best things about BeamNG.drive is that it is very modding-friendly. You can create your own vehicles, maps, scenarios, skins, sounds, and more using the game's built-in tools or external software. You can also download and install mods made by other players from various sources, such as the official BeamNG website, Steam Workshop, or other websites. The game has a very active and supportive community of modders and players who share their creations, feedback, and ideas. You can also join online multiplayer sessions and play with or against other people.

          -

          Why Download Cars for BeamNG.drive?

          -

          Now that you know what BeamNG.drive is, you might be wondering why you should download cars for it. After all, the game already has plenty of vehicles to choose from, right? Well, there are several reasons why downloading cars for BeamNG.drive can enhance your gameplay experience and make it more fun and diverse. Here are some of them:

          -

          To enhance your gameplay experience with new models, features, and styles

          -

          Downloading cars for BeamNG.drive can give you access to new models that are not available in the base game. These models can have different features, such as unique engines, transmissions, suspensions, body parts, etc. They can also have different styles, such as classic cars, sports cars, muscle cars, supercars, etc. You can find cars that suit your preferences and tastes, or try out something new and different. You can also mix and match different parts from different mods to create your own custom car.

          -

          beamng drive free download vehicles
          -beamng drive car mods download
          -beamng drive toy building block car
          -beamng drive custom vehicles pack
          -beamng drive covet gambler 500
          -beamng drive barstow on d-series frame
          -beamng drive squatted d-series
          -beamng drive scintilla electric vehicle
          -beamng drive gavril grand marshal widebody
          -beamng drive capsule-shaped bus
          -beamng drive experimental toy car
          -beamng drive throttle value keyboard
          -beamng drive naorl scintilla ev
          -beamng drive huge poland flag
          -beamng drive zycans custom vehicle pack
          -beamng drive dondai supremo as the covet
          -beamng drive etk 800 the diy shedbox
          -beamng drive aw goal boring german car
          -beamng drive miramar gambler 500 b
          -beamng drive civetta bolide 6-speed gearbox
          -beamng download cars for pc
          -beamng download cars for mac
          -beamng download cars for android
          -beamng download cars for ios
          -beamng download cars for xbox one
          -beamng download cars for ps4
          -beamng download cars for switch
          -beamng download cars for linux
          -beamng download cars for windows 10
          -beamng download cars for steam
          -how to download cars in beamng drive
          -where to download cars for beamng drive
          -best cars to download for beamng drive
          -easiest way to download cars for beamng drive
          -fastest cars to download for beamng drive
          -coolest cars to download for beamng drive
          -most realistic cars to download for beamng drive
          -most popular cars to download for beamng drive
          -most fun cars to download for beamng drive
          -most customizable cars to download for beamng drive
          -new cars to download for beamng drive 2023
          -latest cars to download for beamng drive 2023
          -top 10 cars to download for beamng drive 2023
          -top 50 cars to download for beamng drive 2023
          -top 100 cars to download for beamng drive 2023
          -top rated cars to download for beamng drive 2023
          -top downloaded cars for beamng drive 2023
          -top reviewed cars for beamng drive 2023
          -top recommended cars for beamng drive 2023

          -

          To explore different types of vehicles and driving scenarios

          -

          Downloading cars for BeamNG.drive can also allow you to explore different types of vehicles and driving scenarios that you might not encounter in the base game. For example, you can download cars that are designed for off-road driving, drifting, racing, stunt driving, demolition derby, etc. You can also download cars that are based on real-life vehicles or fictional ones from movies, games, or other media. You can test your skills and challenge yourself with different vehicles and situations.

          -

          To support the modders and creators who make the game more diverse and fun

          -

          Another reason why you should download cars for BeamNG.drive is to support the modders and creators who make them. These people spend a lot of time and effort to create high-quality mods that add value and variety to the game. They also share their mods for free for everyone to enjoy. By downloading their mods, you are showing your appreciation and encouragement for their work. You are also helping them to improve their skills and create more mods in the future.

          -

          How to Download Cars for BeamNG.drive?

          -

          Now that you know why you should download cars for BeamNG.drive, let's get into how to do it. There are three main sources where you can download cars for BeamNG.drive: the official BeamNG website, Steam Workshop, and other sources. Each source has its own advantages and disadvantages, so you should choose the one that suits you best. Here is how to download cars from each source:

          -

          From the official BeamNG website

          -

          The official BeamNG website is the primary source where you can download cars for BeamNG.drive. It has a dedicated section called Vehicles, where you can find hundreds of car mods made by the developers or the community. The website has a simple and user-friendly interface where you can browse, search, filter, sort, and download car mods easily. Here is how to download cars from the official BeamNG website:

          -
            -
          • Browse the Vehicles category and find the car you want. You can use the filters on the left side to narrow down your search by type, style, rating, popularity, etc.
          • -
          • Click on the car mod you want to view its details page. You can see some screenshots, videos, descriptions, ratings, comments, and other information about the mod.
          • -
          • Click on the Download button on the top right corner of the page and save the file to your computer. The file will be in ZIP format.
          • -
          • Extract the file using a program like WinRAR or 7-Zip and copy the folder inside it to your BeamNG.drive mods folder. The default location of this folder is C:\Users\YourName\Documents\BeamNG.drive\mods. A small script that automates this step is sketched after this list.
          • -
          • Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
          • -
          • Enjoy your new car!
          • -
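          As a rough illustration of the extract-and-copy step above, the Python sketch below unpacks a downloaded archive straight into the default mods folder. Both paths are placeholders that assume a standard Windows-style Documents layout, so adjust them to your own setup.

```python
import zipfile
from pathlib import Path

# Placeholder paths: the downloaded mod archive and the BeamNG.drive mods folder.
MOD_ZIP = Path.home() / "Downloads" / "some_car_mod.zip"
MODS_DIR = Path.home() / "Documents" / "BeamNG.drive" / "mods"

def install_mod(archive: Path, mods_dir: Path) -> None:
    """Unpack a downloaded car mod archive into the BeamNG.drive mods folder."""
    mods_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(mods_dir)

if __name__ == "__main__":
    install_mod(MOD_ZIP, MODS_DIR)
```

          After extracting, you would still launch the game and enable the mod from the in-game mod manager as described in the last steps above.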

          From Steam Workshop

          -

          Another source where you can download cars for BeamNG.drive is Steam Workshop, a platform where Steam users can create and share content for various games. Steam Workshop has a large and active community of modders and players who upload and download car mods for BeamNG.drive. Steam Workshop has some advantages over the official BeamNG website, such as automatic updates, easier installation, and integration with Steam. However, it also has some disadvantages, such as lower quality control, limited search options, and dependency on Steam. Here is how to download cars from Steam Workshop:

          -
            -
          • Subscribe to the car mod you want from the Steam Workshop page. You can access this page by launching Steam, going to your Library, right-clicking on BeamNG.drive, selecting Properties, and clicking on Browse the Workshop. You can also use this link to go directly to the BeamNG.drive workshop page.
          • -
          • Browse or search for the car mod you want. You can use the tabs on the right side to filter by categories, tags, ratings, etc. You can also use the search bar on the top right corner to enter keywords.
          • -
          • Click on the car mod you want to view its details page. You can see some screenshots, videos, descriptions, ratings, comments, and other information about the mod.
          • -
          • Click on the Subscribe button on the top right corner of the page. This will automatically download the mod to your computer and install it to your game.
          • -
          • Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
          • -
          • Enjoy your new car!
          • -

          From other sources

          -

          The third source where you can download cars for BeamNG.drive is from other websites or forums that host car mods for the game. These sources can have some advantages over the official BeamNG website and Steam Workshop, such as more variety, exclusivity, or novelty. However, they also have some disadvantages, such as potential viruses, malware, or incompatible files. You should be careful and cautious when downloading car mods from other sources, and follow the instructions provided by the mod author or website. Here is how to download cars from other sources:

          -
            -
          • Be careful of potential viruses, malware, or incompatible files. Before downloading any car mod from an unknown source, you should scan it with an antivirus program and check its compatibility and quality. You should also read the reviews and comments from other users who have downloaded the mod.
          • -
          • Follow the instructions provided by the mod author or website. Different car mods may have different installation methods or requirements. You should follow the instructions carefully and make sure you have everything you need to run the mod. Some common steps are:
              -
            • Download the car mod file from the website or forum. The file may be in ZIP, RAR, 7Z, or other formats.
            • -
            • Extract the file using a program like WinRAR or 7-Zip and copy the folder inside it to your BeamNG.drive mods folder. The default location of this folder is C:\Users\YourName\Documents\BeamNG.drive\mods.
            • -
            • Launch the game and enable the mod from the in-game mod manager. You can access this by pressing Esc on your keyboard and clicking on Mods on the bottom left corner of the screen.
            • -
            -
          • -
          • Enjoy your new car!
          • -
          -

          Tips and Tricks for Downloading Cars for BeamNG.drive

          -

          Downloading cars for BeamNG.drive can be a fun and rewarding experience, but it can also be a frustrating and disappointing one if you don't know what you are doing. To avoid any problems or issues with your car mods, you should follow some tips and tricks that will help you download, install, and use them properly. Here are some of them:

          -

          Read the mod description, reviews, and comments carefully before downloading

          -

          Before you download any car mod for BeamNG.drive, you should read its description, reviews, and comments carefully. This will help you understand what the mod does, how it works, what it requires, and what it offers. You can also learn about any bugs, glitches, or compatibility issues that the mod may have. You can also see what other users think about the mod and how they rate it. This will help you decide whether the mod is worth downloading or not.

          -

          Check for updates and patches for your mods regularly

          -

          After you download and install a car mod for BeamNG.drive, you should check for updates and patches for it regularly. This will help you keep your mod up to date and fix any errors or problems that it may have. You can check for updates and patches by visiting the source where you downloaded the mod from, such as the official BeamNG website, Steam Workshop, or other websites or forums. You can also use some tools or programs that can automatically update your mods for you.

          -

          Backup your game files and mods before installing new ones

          -

          Before you install any new car mod for BeamNG.drive, you should backup your game files and mods first. This will help you prevent any data loss or corruption that may occur due to installing a faulty or incompatible mod. You can backup your game files and mods by copying them to another location on your computer or an external drive. You can also use some tools or programs that can backup your game files and mods for you.
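          One simple way to do this is to copy the whole BeamNG.drive user folder (which contains the mods subfolder) into a timestamped backup directory. The Python sketch below is a minimal example; both paths are assumptions you should adapt to where your game files and backups actually live.

```python
import shutil
import time
from pathlib import Path

# Placeholder paths: the game's user folder and wherever you keep backups.
GAME_DIR = Path.home() / "Documents" / "BeamNG.drive"
BACKUP_ROOT = Path.home() / "Documents" / "BeamNG_backups"

def backup_game_folder(src: Path, dest_root: Path) -> Path:
    """Copy the whole game user folder into a new timestamped backup directory."""
    dest_root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_root / f"beamng-backup-{stamp}"
    shutil.copytree(src, dest)
    return dest

if __name__ == "__main__":
    print(f"Backup written to {backup_game_folder(GAME_DIR, BACKUP_ROOT)}")
```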

          -

          Don't use too many mods at once to avoid performance issues or crashes

          -

          While using car mods for BeamNG.drive can be fun and exciting, it can also be taxing on your computer's resources and stability. If you use too many mods at once, you may experience performance issues such as lagging, stuttering, freezing, or crashing. To avoid this, you should limit the number of mods you use at a time and disable any unnecessary ones. You should also monitor your computer's CPU, RAM, GPU, and disk usage while playing the game with mods.

          -

          Conclusion

          -

          In conclusion, downloading cars for BeamNG.drive can be a great way to enhance your gameplay experience and make it more fun and diverse. You can download cars from various sources such as the official BeamNG website, Steam Workshop, or other websites or forums. You can also create your own cars using the game's tools or external software. However, you should be careful and cautious when downloading and installing car mods, and follow some tips and tricks that will help you avoid any problems or issues. You should also backup your game files and mods, check for updates and patches, read the mod descriptions and reviews, and don't use too many mods at once. We hope that this article has helped you learn how to download cars for BeamNG.drive, why you should do it, and what tips and tricks you should know before installing them. If you are ready to try out some car mods for BeamNG.drive, here are some links to popular or recommended car mods that you can download from the official BeamNG website or Steam Workshop:

          - The CrashHard Dummy: A realistic crash test dummy that can be used in any vehicle.
          - The ETK 800 Series: A series of luxury sedans with various configurations and features.
          - The Hirochi Sunburst: A sporty hatchback with a rally-inspired design and performance.
          - The Gavril D-Series: A versatile pickup truck with a lot of customization options and accessories.
          - The Ibishu Pessima: A classic Japanese sedan with two generations and a lot of nostalgia.

          Have fun downloading cars for BeamNG.drive and enjoy the game!

          FAQs

          -

          Here are some frequently asked questions about downloading cars for BeamNG.drive:

          -
            -
          • Q: How do I uninstall a car mod for BeamNG.drive?
          • -
          • A: To uninstall a car mod for BeamNG.drive, you can either delete the mod folder from your BeamNG.drive mods folder, or disable the mod from the in-game mod manager. If you downloaded the mod from Steam Workshop, you can also unsubscribe from it on the Steam Workshop page.
          • -
          • Q: How do I update a car mod for BeamNG.drive?
          • -
          • A: To update a car mod for BeamNG.drive, you can either download the latest version of the mod from the source where you downloaded it from, or use a tool or program that can automatically update your mods for you. If you downloaded the mod from Steam Workshop, it will be updated automatically by Steam.
          • -
          • Q: How do I create my own car mod for BeamNG.drive?
          • -
          • A: To create your own car mod for BeamNG.drive, you can use the game's built-in tools or external software to design and model your car. You can also use existing car mods as a base or reference for your car. You can find more information and tutorials on how to create car mods on the official BeamNG website or forum.
          • -
          • Q: How do I share my car mod for BeamNG.drive?
          • -
          • A: To share your car mod for BeamNG.drive, you can upload it to the official BeamNG website, Steam Workshop, or other websites or forums that host car mods for the game. You should also provide a detailed description, screenshots, videos, and other information about your mod to attract more users and feedback.
          • -
          • Q: How do I find more car mods for BeamNG.drive?
          • -
          • A: To find more car mods for BeamNG.drive, you can visit the official BeamNG website, Steam Workshop, or other websites or forums that host car mods for the game. You can also use search engines, social media, YouTube, or other platforms to discover new or popular car mods.
          • -

          \ No newline at end of file diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py deleted file mode 100644 index 2f93cab80ded8e7239bb96eb6e364c3fd4fb46d9..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .ldm import LatentDiffusion -from .utils import seed_everything -from .pipeline import * \ No newline at end of file diff --git a/spaces/AIFILMS/speecht5-tts-demo/README.md b/spaces/AIFILMS/speecht5-tts-demo/README.md deleted file mode 100644 index b00de1f0412a56568cc8b554a4ee8b880a8b7afb..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/speecht5-tts-demo/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SpeechT5 Speech Synthesis Demo -emoji: 👩‍🎤 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Matthijs/speecht5-tts-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py deleted file mode 100644 index db5d5ca1765928e4b047db04435a8a39b52592ca..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/vocoder_utils.py +++ /dev/null @@ -1,15 +0,0 @@ -import librosa - -from utils.hparams import hparams -import numpy as np - - -def denoise(wav, v=0.1): - spec = librosa.stft(y=wav, n_fft=hparams['fft_size'], hop_length=hparams['hop_size'], - win_length=hparams['win_size'], pad_mode='constant') - spec_m = np.abs(spec) - spec_m = np.clip(spec_m - v, a_min=0, a_max=None) - spec_a = np.angle(spec) - - return librosa.istft(spec_m * np.exp(1j * spec_a), hop_length=hparams['hop_size'], - win_length=hparams['win_size']) diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py deleted file mode 100644 index 7f1fa38eedd9e9cd2580143ceb92aba8f81becf3..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/evaluate.py +++ /dev/null @@ -1,42 +0,0 @@ -from sklearn import metrics - -from pytorch_utils import forward - - -class Evaluator(object): - def __init__(self, model): - """Evaluator. - - Args: - model: object - """ - self.model = model - - def evaluate(self, data_loader): - """Forward evaluation data and calculate statistics. 
- - Args: - data_loader: object - - Returns: - statistics: dict, - {'average_precision': (classes_num,), 'auc': (classes_num,)} - """ - - # Forward - output_dict = forward( - model=self.model, - generator=data_loader, - return_target=True) - - clipwise_output = output_dict['clipwise_output'] # (audios_num, classes_num) - target = output_dict['target'] # (audios_num, classes_num) - - average_precision = metrics.average_precision_score( - target, clipwise_output, average=None) - - auc = metrics.roc_auc_score(target, clipwise_output, average=None) - - statistics = {'average_precision': average_precision, 'auc': auc} - - return statistics \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py deleted file mode 100644 index 071dd148c772f398e87ecbfc836dcfa4a3ae01af..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py +++ /dev/null @@ -1,106 +0,0 @@ -""" timm model adapter - -Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model. -""" -from collections import OrderedDict - -import torch.nn as nn - -try: - import timm - from timm.models.layers import Mlp, to_2tuple - from timm.models.layers.attention_pool2d import RotAttentionPool2d - from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d -except ImportError as e: - timm = None - -from .utils import freeze_batch_norm_2d - - -class TimmModel(nn.Module): - """ timm model adapter - # FIXME this adapter is a work in progress, may change in ways that break weight compat - """ - - def __init__( - self, - model_name, - embed_dim, - image_size=224, - pool='avg', - proj='linear', - drop=0., - pretrained=False): - super().__init__() - if timm is None: - raise RuntimeError("Please `pip install timm` to use timm models.") - - self.image_size = to_2tuple(image_size) - self.trunk = timm.create_model(model_name, pretrained=pretrained) - feat_size = self.trunk.default_cfg.get('pool_size', None) - feature_ndim = 1 if not feat_size else 2 - if pool in ('abs_attn', 'rot_attn'): - assert feature_ndim == 2 - # if attn pooling used, remove both classifier and default pool - self.trunk.reset_classifier(0, global_pool='') - else: - # reset global pool if pool config set, otherwise leave as network default - reset_kwargs = dict(global_pool=pool) if pool else {} - self.trunk.reset_classifier(0, **reset_kwargs) - prev_chs = self.trunk.num_features - - head_layers = OrderedDict() - if pool == 'abs_attn': - head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim) - prev_chs = embed_dim - elif pool == 'rot_attn': - head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim) - prev_chs = embed_dim - else: - assert proj, 'projection layer needed if non-attention pooling is used.' 
- - # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used - if proj == 'linear': - head_layers['drop'] = nn.Dropout(drop) - head_layers['proj'] = nn.Linear(prev_chs, embed_dim) - elif proj == 'mlp': - head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop) - - self.head = nn.Sequential(head_layers) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - """ lock modules - Args: - unlocked_groups (int): leave last n layer groups unlocked (default: 0) - """ - if not unlocked_groups: - # lock full model - for param in self.trunk.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self.trunk) - else: - # NOTE: partial freeze requires latest timm (master) branch and is subject to change - try: - # FIXME import here until API stable and in an official release - from timm.models.helpers import group_parameters, group_modules - except ImportError: - raise RuntimeError( - 'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`') - matcher = self.trunk.group_matcher() - gparams = group_parameters(self.trunk, matcher) - max_layer_id = max(gparams.keys()) - max_layer_id = max_layer_id - unlocked_groups - for group_idx in range(max_layer_id + 1): - group = gparams[group_idx] - for param in group: - self.trunk.get_parameter(param).requires_grad = False - if freeze_bn_stats: - gmodules = group_modules(self.trunk, matcher, reverse=True) - gmodules = {k for k, v in gmodules.items() if v <= max_layer_id} - freeze_batch_norm_2d(self.trunk, gmodules) - - def forward(self, x): - x = self.trunk(x) - x = self.head(x) - return x diff --git a/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py b/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py deleted file mode 100644 index dc590d4734e14cad93ab5560cb7b4f08bd45c416..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/Component/PromptComponent.py +++ /dev/null @@ -1,133 +0,0 @@ -from abc import abstractmethod - - -class PromptComponent: - def __init__(self): - pass - - @abstractmethod - def get_prompt(self, agent): - pass - -class TaskComponent(PromptComponent): - def __init__(self, task): - super().__init__() - self.task = task - - def get_prompt(self, agent): - return f"""The task you need to execute is: {self.task}.\n""" - - -class OutputComponent(PromptComponent): - def __init__(self, output): - super().__init__() - self.output = output - - def get_prompt(self, agent): - return f"""Please contact the above to extract <{self.output}> and , \ - do not perform additional output, please output in strict accordance with the above format!\n""" - - -class SystemComponent(PromptComponent): - def __init__(self,system_prompt): - super().__init__() - self.system_prompt = system_prompt - - def get_prompt(self, agent): - return self.system_prompt - -class LastComponent(PromptComponent): - def __init__(self, last_prompt): - super().__init__() - self.last_prompt = last_prompt - - def get_prompt(self, agent): - return self.last_prompt - - -class StyleComponent(PromptComponent): - """ - 角色、风格组件 - """ - - def __init__(self, role): - super().__init__() - self.role = role - - def get_prompt(self, agent): - name = agent.name - style = agent.style - return f"""Now your role is:\n{self.role}, your name is:\n{name}. 
\ -                You need to follow the output style:\n{style}.\n""" - - -class RuleComponent(PromptComponent): -    def __init__(self, rule): -        super().__init__() -        self.rule = rule - -    def get_prompt(self, agent): -        return f"""The rule you need to follow is:\n{self.rule}.\n""" - - -class DemonstrationComponent(PromptComponent): -    """ -    Input: a list of demonstration answers that the agent can refer to. -    """ - -    def __init__(self, demonstrations): -        super().__init__() -        self.demonstrations = demonstrations - -    def add_demonstration(self, demonstration): -        self.demonstrations.append(demonstration) - -    def get_prompt(self, agent): -        prompt = "Here are demonstrations you can refer to:\n" -        for demonstration in self.demonstrations: -            prompt += "\n" + demonstration -        prompt += "\n" -        return prompt - - -class CoTComponent(PromptComponent): -    """ -    Input: a list of chain-of-thought demonstrations (example reasoning traces). -    """ - -    def __init__(self, demonstrations): -        super().__init__() -        self.demonstrations = demonstrations - -    def add_demonstration(self, demonstration): -        self.demonstrations.append(demonstration) - -    def get_prompt(self, agent): -        prompt = "You need to think in detail before outputting, the thinking case is as follows:\n" -        for demonstration in self.demonstrations: -            prompt += "\n" + demonstration -        prompt += "\n" -        return prompt - - -class CustomizeComponent(PromptComponent): -    """ -    Custom template component. -    template(str) : example: "i am {name}" -    keywords(list) : example: ["name"] -    example : agent.environment.shared_memory["name"] = "Lilong" -    The component looks up each keyword in the environment's shared memory and substitutes it into the template. -    Return : "i am Lilong" -    """ -    def __init__(self, template, keywords) -> None: -        super().__init__() -        self.template = template -        self.keywords = keywords - -    def get_prompt(self, agent): -        template_keyword = {} -        for keyword in self.keywords: -            current_keyword = agent.environment.shared_memory[keyword] -            template_keyword[keyword] = current_keyword -        return self.template.format(**template_keyword) \ No newline at end of file diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md b/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md deleted file mode 100644 index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/MODEL_CARD.md +++ /dev/null @@ -1,81 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is version 1 of the model. - -**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. - -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details:** See [our paper][arxiv]. - -**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
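For readers who want to try the model type described above, the following is a minimal, illustrative sketch of text-to-music generation with the audiocraft package linked in this card. The checkpoint name (`small`), the 8-second duration, the prompt text and the output filename are assumptions made for this example, not part of the original card, and the exact API may differ between audiocraft releases.

```python
# Minimal sketch (assumes `pip install audiocraft torchaudio`); names below are illustrative.
import torchaudio
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained('small')   # the 300M text-to-music variant
model.set_generation_params(duration=8)    # generate 8 seconds of audio
wav = model.generate(['an upbeat acoustic folk tune with hand claps'])  # (batch, channels, samples)
torchaudio.save('musicgen_sample.wav', wav[0].cpu(), model.sample_rate)
```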
 - -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. - -**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model - -Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. - -## Training datasets - -The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. - -## Quantitative analysis - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, and we believe that scaling the model on larger datasets can further improve its performance. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. 
-- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py b/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py deleted file mode 100644 index e8ecf74d725f5e813426116f6a3df6d6aa1fa63c..0000000000000000000000000000000000000000 --- a/spaces/Abdullah-Habib/Rabbit_or_Hare/app.py +++ /dev/null @@ -1,20 +0,0 @@ -__all__ = ['is_Rabbit',"learn",'classify_image', 'categories','image','label','examples','intf'] - -# Cell -from fastai.vision.all import * -import gradio as gr -def is_Rabbit(x): return x[0].isupper() - - -learn = load_learner ('model.pkl') -# Cell -categories = ('Hare','Rabbit') -def classify_image (img) : - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) -# Cell -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label () -examples=['Rabbit.jpg', 'TestRabbit.jpg','Hare.jpg'] -intf = gr.Interface(fn=classify_image,inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py b/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py deleted file mode 100644 index 760788cf8cfd8f47ba64c4dbea5a5cb20838e9b6..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/torch_utils.py +++ /dev/null @@ -1,613 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -PyTorch utils -""" - -import math -import os -import platform -import subprocess -import time -import warnings -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.parallel import DistributedDataParallel as DDP - -from utils.general import LOGGER, check_version, colorstr, file_date, git_describe - -LOCAL_RANK = int( - os.getenv("LOCAL_RANK", -1) -) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv("RANK", -1)) -WORLD_SIZE = int(os.getenv("WORLD_SIZE", 1)) - -try: - import thop # for FLOPs computation -except ImportError: - thop = None 
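# NOTE: when the optional thop import above succeeds, it is used later in profile() and model_info() to estimate GFLOPs.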
- -# Suppress PyTorch warnings -warnings.filterwarnings( - "ignore", - message="User provided device_type of 'cuda', but CUDA is not available. Disabling", -) -warnings.filterwarnings("ignore", category=UserWarning) - - -def smart_inference_mode(torch_1_9=check_version(torch.__version__, "1.9.0")): - # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator - def decorate(fn): - return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn) - - return decorate - - -def smartCrossEntropyLoss(label_smoothing=0.0): - # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0 - if check_version(torch.__version__, "1.10.0"): - return nn.CrossEntropyLoss(label_smoothing=label_smoothing) - if label_smoothing > 0: - LOGGER.warning( - f"WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0" - ) - return nn.CrossEntropyLoss() - - -def smart_DDP(model): - # Model DDP creation with checks - assert not check_version(torch.__version__, "1.12.0", pinned=True), ( - "torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. " - "Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395" - ) - if check_version(torch.__version__, "1.11.0"): - return DDP( - model, - device_ids=[LOCAL_RANK], - output_device=LOCAL_RANK, - static_graph=True, - ) - else: - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) - - -def reshape_classifier_output(model, n=1000): - # Update a TorchVision classification model to class count 'n' if required - from models.common import Classify - - name, m = list( - (model.model if hasattr(model, "model") else model).named_children() - )[ - -1 - ] # last module - if isinstance(m, Classify): # YOLOv5 Classify() head - if m.linear.out_features != n: - m.linear = nn.Linear(m.linear.in_features, n) - elif isinstance(m, nn.Linear): # ResNet, EfficientNet - if m.out_features != n: - setattr(model, name, nn.Linear(m.in_features, n)) - elif isinstance(m, nn.Sequential): - types = [type(x) for x in m] - if nn.Linear in types: - i = types.index(nn.Linear) # nn.Linear index - if m[i].out_features != n: - m[i] = nn.Linear(m[i].in_features, n) - elif nn.Conv2d in types: - i = types.index(nn.Conv2d) # nn.Conv2d index - if m[i].out_channels != n: - m[i] = nn.Conv2d( - m[i].in_channels, - n, - m[i].kernel_size, - m[i].stride, - bias=m[i].bias is not None, - ) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - # Decorator to make all processes in distributed training wait for each local_master to do something - if local_rank not in [-1, 0]: - dist.barrier(device_ids=[local_rank]) - yield - if local_rank == 0: - dist.barrier(device_ids=[0]) - - -def device_count(): - # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). 
Supports Linux and Windows - assert platform.system() in ( - "Linux", - "Windows", - ), "device_count() only supported on Linux or Windows" - try: - cmd = ( - "nvidia-smi -L | wc -l" - if platform.system() == "Linux" - else 'nvidia-smi -L | find /c /v ""' - ) # Windows - return int( - subprocess.run(cmd, shell=True, capture_output=True, check=True) - .stdout.decode() - .split()[-1] - ) - except Exception: - return 0 - - -def select_device(device="", batch_size=0, newline=True): - # device = None or 'cpu' or 0 or '0' or '0,1,2,3' - s = f"YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} " - device = ( - str(device).strip().lower().replace("cuda:", "").replace("none", "") - ) # to string, 'cuda:0' to '0' - cpu = device == "cpu" - mps = device == "mps" # Apple Metal Performance Shaders (MPS) - if cpu or mps: - os.environ[ - "CUDA_VISIBLE_DEVICES" - ] = "-1" # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ[ - "CUDA_VISIBLE_DEVICES" - ] = device # set environment variable - must be before assert is_available() - assert torch.cuda.is_available() and torch.cuda.device_count() >= len( - device.replace(",", "") - ), f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)" - - if ( - not cpu and not mps and torch.cuda.is_available() - ): # prefer GPU if available - devices = ( - device.split(",") if device else "0" - ) # range(torch.cuda.device_count()) # i.e. 0,1,6,7 - n = len(devices) # device count - if ( - n > 1 and batch_size > 0 - ): # check batch_size is divisible by device_count - assert ( - batch_size % n == 0 - ), f"batch-size {batch_size} not multiple of GPU count {n}" - space = " " * (len(s) + 1) - for i, d in enumerate(devices): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB - arg = "cuda:0" - elif ( - mps - and getattr(torch, "has_mps", False) - and torch.backends.mps.is_available() - ): # prefer MPS if available - s += "MPS\n" - arg = "mps" - else: # revert to CPU - s += "CPU\n" - arg = "cpu" - - if not newline: - s = s.rstrip() - LOGGER.info(s) - return torch.device(arg) - - -def time_sync(): - # PyTorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(input, ops, n=10, device=None): - """YOLOv5 speed/memory/FLOPs profiler - Usage: - input = torch.randn(16, 3, 640, 640) - m1 = lambda x: x * torch.sigmoid(x) - m2 = nn.SiLU() - profile(input, [m1, m2], n=100) # profile over 100 iterations - """ - results = [] - if not isinstance(device, torch.device): - device = select_device(device) - print( - f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}" - f"{'input':>24s}{'output':>24s}" - ) - - for x in input if isinstance(input, list) else [input]: - x = x.to(device) - x.requires_grad = True - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, "to") else m # device - m = ( - m.half() - if hasattr(m, "half") - and isinstance(x, torch.Tensor) - and x.dtype is torch.float16 - else m - ) - tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward - try: - flops = ( - thop.profile(m, inputs=(x,), verbose=False)[0] / 1e9 * 2 - ) # GFLOPs - except Exception: - flops = 0 - - try: - for _ in range(n): - t[0] = time_sync() - y = m(x) - t[1] = time_sync() - try: - _ = ( - ( - sum(yi.sum() for yi in y) - if isinstance(y, list) - else y 
- ) - .sum() - .backward() - ) - t[2] = time_sync() - except Exception: # no backward method - # print(e) # for debug - t[2] = float("nan") - tf += (t[1] - t[0]) * 1000 / n # ms per op forward - tb += (t[2] - t[1]) * 1000 / n # ms per op backward - mem = ( - torch.cuda.memory_reserved() / 1e9 - if torch.cuda.is_available() - else 0 - ) # (GB) - s_in, s_out = ( - tuple(x.shape) if isinstance(x, torch.Tensor) else "list" - for x in (x, y) - ) # shapes - p = ( - sum(x.numel() for x in m.parameters()) - if isinstance(m, nn.Module) - else 0 - ) # parameters - print( - f"{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}" - ) - results.append([p, flops, mem, tf, tb, s_in, s_out]) - except Exception as e: - print(e) - results.append(None) - torch.cuda.empty_cache() - return results - - -def is_parallel(model): - # Returns True if model is of type DP or DDP - return type(model) in ( - nn.parallel.DataParallel, - nn.parallel.DistributedDataParallel, - ) - - -def de_parallel(model): - # De-parallelize a model: returns single-GPU model if model is of type DP or DDP - return model.module if is_parallel(model) else model - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [ - i for i, m in enumerate(model.module_list) if isinstance(m, mclass) - ] - - -def sparsity(model): - # Return global model sparsity - a, b = 0, 0 - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name="weight", amount=amount) # prune - prune.remove(m, "weight") # make permanent - LOGGER.info(f"Model pruned to {sparsity(model):.3g} global sparsity") - - -def fuse_conv_and_bn(conv, bn): - # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = ( - nn.Conv2d( - conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - dilation=conv.dilation, - groups=conv.groups, - bias=True, - ) - .requires_grad_(False) - .to(conv.weight.device) - ) - - # Prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # Prepare spatial bias - b_conv = ( - torch.zeros(conv.weight.size(0), device=conv.weight.device) - if conv.bias is None - else conv.bias - ) - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div( - torch.sqrt(bn.running_var + bn.eps) - ) - fusedconv.bias.copy_( - torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn - ) - - return fusedconv - - -def model_info(model, verbose=False, imgsz=640): - # Model information. img_size may be int or list, i.e. 
img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum( - x.numel() for x in model.parameters() if x.requires_grad - ) # number gradients - if verbose: - print( - f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}" - ) - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace("module_list.", "") - print( - "%5g %40s %9s %12g %20s %10.3g %10.3g" - % ( - i, - name, - p.requires_grad, - p.numel(), - list(p.shape), - p.mean(), - p.std(), - ) - ) - - try: # FLOPs - p = next(model.parameters()) - stride = ( - max(int(model.stride.max()), 32) - if hasattr(model, "stride") - else 32 - ) # max stride - im = torch.empty( - (1, p.shape[1], stride, stride), device=p.device - ) # input image in BCHW format - flops = ( - thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] - / 1e9 - * 2 - ) # stride GFLOPs - imgsz = ( - imgsz if isinstance(imgsz, list) else [imgsz, imgsz] - ) # expand if int/float - fs = f", {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs" # 640x640 GFLOPs - except Exception: - fs = "" - - name = ( - Path(model.yaml_file).stem.replace("yolov5", "YOLOv5") - if hasattr(model, "yaml_file") - else "Model" - ) - LOGGER.info( - f"{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}" - ) - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # Scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate( - img, size=s, mode="bilinear", align_corners=False - ) # resize - if not same_shape: # pad/crop img - h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) - return F.pad( - img, [0, w - s[1], 0, h - s[0]], value=0.447 - ) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if ( - (len(include) and k not in include) - or k.startswith("_") - or k in exclude - ): - continue - else: - setattr(a, k, v) - - -def smart_optimizer(model, name="Adam", lr=0.001, momentum=0.9, decay=1e-5): - # YOLOv5 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay - g = [], [], [] # optimizer parameter groups - bn = tuple( - v for k, v in nn.__dict__.items() if "Norm" in k - ) # normalization layers, i.e. 
BatchNorm2d() - for v in model.modules(): - for p_name, p in v.named_parameters(recurse=0): - if p_name == "bias": # bias (no decay) - g[2].append(p) - elif p_name == "weight" and isinstance(v, bn): # weight (no decay) - g[1].append(p) - else: - g[0].append(p) # weight (with decay) - - if name == "Adam": - optimizer = torch.optim.Adam( - g[2], lr=lr, betas=(momentum, 0.999) - ) # adjust beta1 to momentum - elif name == "AdamW": - optimizer = torch.optim.AdamW( - g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0 - ) - elif name == "RMSProp": - optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum) - elif name == "SGD": - optimizer = torch.optim.SGD( - g[2], lr=lr, momentum=momentum, nesterov=True - ) - else: - raise NotImplementedError(f"Optimizer {name} not implemented.") - - optimizer.add_param_group( - {"params": g[0], "weight_decay": decay} - ) # add g0 with weight_decay - optimizer.add_param_group( - {"params": g[1], "weight_decay": 0.0} - ) # add g1 (BatchNorm2d weights) - LOGGER.info( - f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups " - f"{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias" - ) - return optimizer - - -def smart_hub_load(repo="ultralytics/yolov5", model="yolov5s", **kwargs): - # YOLOv5 torch.hub.load() wrapper with smart error/issue handling - if check_version(torch.__version__, "1.9.1"): - kwargs[ - "skip_validation" - ] = True # validation causes GitHub API rate limit errors - if check_version(torch.__version__, "1.12.0"): - kwargs["trust_repo"] = True # argument required starting in torch 0.12 - try: - return torch.hub.load(repo, model, **kwargs) - except Exception: - return torch.hub.load(repo, model, force_reload=True, **kwargs) - - -def smart_resume( - ckpt, optimizer, ema=None, weights="yolov5s.pt", epochs=300, resume=True -): - # Resume training from a partially trained checkpoint - best_fitness = 0.0 - start_epoch = ckpt["epoch"] + 1 - if ckpt["optimizer"] is not None: - optimizer.load_state_dict(ckpt["optimizer"]) # optimizer - best_fitness = ckpt["best_fitness"] - if ema and ckpt.get("ema"): - ema.ema.load_state_dict(ckpt["ema"].float().state_dict()) # EMA - ema.updates = ckpt["updates"] - if resume: - assert start_epoch > 0, ( - f"{weights} training to {epochs} epochs is finished, nothing to resume.\n" - f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'" - ) - LOGGER.info( - f"Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs" - ) - if epochs < start_epoch: - LOGGER.info( - f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs." - ) - epochs += ckpt["epoch"] # finetune additional epochs - return best_fitness, start_epoch, epochs - - -class EarlyStopping: - # YOLOv5 simple early stopper - def __init__(self, patience=30): - self.best_fitness = 0.0 # i.e. 
mAP - self.best_epoch = 0 - self.patience = patience or float( - "inf" - ) # epochs to wait after fitness stops improving to stop - self.possible_stop = False # possible stop may occur next epoch - - def __call__(self, epoch, fitness): - if ( - fitness >= self.best_fitness - ): # >= 0 to allow for early zero-fitness stage of training - self.best_epoch = epoch - self.best_fitness = fitness - delta = epoch - self.best_epoch # epochs without improvement - self.possible_stop = delta >= ( - self.patience - 1 - ) # possible stop may occur next epoch - stop = delta >= self.patience # stop training if patience exceeded - if stop: - LOGGER.info( - f"Stopping training early as no improvement observed in last {self.patience} epochs. " - f"Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n" - f"To update EarlyStopping(patience={self.patience}) pass a new patience value, " - f"i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping." - ) - return stop - - -class ModelEMA: - """Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models - Keeps a moving average of everything in the model state_dict (parameters and buffers) - For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - """ - - def __init__(self, model, decay=0.9999, tau=2000, updates=0): - # Create EMA - self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * ( - 1 - math.exp(-x / tau) - ) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - self.updates += 1 - d = self.decay(self.updates) - - msd = de_parallel(model).state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: # true for FP16 and FP32 - v *= d - v += (1 - d) * msd[k].detach() - # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32' - - def update_attr( - self, model, include=(), exclude=("process_group", "reducer") - ): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts deleted file mode 100644 index 29f5f4dfa623ada8e806d11e23fd9aec08a2694f..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/message/[messageId]/prompt/$types.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? 
{ [K in keyof O]: O[K] } : never; -type RouteParams = { id: string; messageId: string } -type RouteId = '/conversation/[id]/message/[messageId]/prompt'; - -export type EntryGenerator = () => Promise> | Array; -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts deleted file mode 100644 index 2998f79e6939f16f6d5c6ff2967bead5729470e7..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/summarizeWeb.ts +++ /dev/null @@ -1,39 +0,0 @@ -import { HF_ACCESS_TOKEN } from "$env/static/private"; -import { HfInference } from "@huggingface/inference"; -import { defaultModel } from "$lib/server/models"; -import type { BackendModel } from "../models"; -import { generateFromDefaultEndpoint } from "../generateFromDefaultEndpoint"; - -export async function summarizeWeb(content: string, query: string, model: BackendModel) { - // if HF_ACCESS_TOKEN is set, we use a HF dedicated endpoint for summarization - try { - if (HF_ACCESS_TOKEN) { - const summary = ( - await new HfInference(HF_ACCESS_TOKEN).summarization({ - model: "facebook/bart-large-cnn", - inputs: content, - parameters: { - max_length: 512, - }, - }) - ).summary_text; - return summary; - } - } catch (e) { - console.log(e); - } - - // else we use the LLM to generate a summary - const summaryPrompt = defaultModel.webSearchSummaryPromptRender({ - answer: content - .split(" ") - .slice(0, model.parameters?.truncate ?? 0) - .join(" "), - query: query, - }); - const summary = await generateFromDefaultEndpoint(summaryPrompt).then((txt: string) => - txt.trim() - ); - - return summary; -} diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js deleted file mode 100644 index 85a00df87a18e3c8e01e12bdbe1d863895bb2340..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/methods/SetSwatchColor.js +++ /dev/null @@ -1,13 +0,0 @@ -var SetSwatchColor = function (swatch, color) { - if (!swatch) { - return; - } - - if (swatch.setTint) { - swatch.setTint(color); - } else if (swatch.setFillStyle) { - swatch.setFillStyle(color); - } -} - -export default SetSwatchColor; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts deleted file mode 100644 index 7d9d3770293b1e04a4956c7ccfbfd7ed11806e2c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/modal/Modal.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import { ModalBehavoir, Modal, ModalPromise, ModalClose } from '../../../plugins/modal'; -export { ModalBehavoir, Modal, ModalPromise, ModalClose }; \ No newline at end of file diff --git a/spaces/Alesmikes/Elvirespeak/app.py b/spaces/Alesmikes/Elvirespeak/app.py deleted file mode 100644 index 4aa96bb395ac671f23ee99a6151b613c2f7051fa..0000000000000000000000000000000000000000 --- a/spaces/Alesmikes/Elvirespeak/app.py +++ /dev/null @@ -1,142 +0,0 @@ -""" -this model only supports english 
since text to speech is an english only model -""" -from google.cloud import texttospeech -import os -import openai -import gradio as gr -from dotenv import load_dotenv -import pinecone - -""" -login to gcp -""" -os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "gcp_access_key.json" -# Instantiates a client -client = texttospeech.TextToSpeechClient() - -""" -Connecting to Open AI API -""" -load_dotenv() -openai.organization = os.getenv("OPENAI_ORG") -openai.api_key = os.getenv("OPENAI_API_KEY") -EMBEDDING_MODEL = "text-embedding-ada-002" -""" -Connecting to pincone API and assign index -""" -index_name = 'economic-forecast' -pinecone.init( - api_key=os.getenv("Pinecone_KEY"), - environment=os.getenv("Pinecone_ENV") -) - -## initial a first message to define GPT's role - - -""" -define the text -> speech function -""" -def text2speech(text): - - # Set the text input to be synthesized - synthesis_input = texttospeech.SynthesisInput(text=text) - - # Build the voice request, select the language code ("en-US") and the ssml - # voice gender ("neutral") - voice = texttospeech.VoiceSelectionParams( - language_code="en-US", name="en-US-News-K", ssml_gender=texttospeech.SsmlVoiceGender.FEMALE - ) - - # Select the type of audio file you want returned - audio_config = texttospeech.AudioConfig( - audio_encoding=texttospeech.AudioEncoding.MP3 - ) - - # Perform the text-to-speech request on the text input with the selected - # voice parameters and audio file type - response = client.synthesize_speech( - input=synthesis_input, voice=voice, audio_config=audio_config - ) - # The response's audio_content is binary. - with open("output.mp3", "wb") as out: - # Write the response to the output file. - out.write(response.audio_content) - print('Audio content written to file "output.mp3"') - -""" -define voice -> gpt -> text -> voice workflow -""" -def transcribe(audio): - reset_chat_history() - voice_path = get_response(audio) - messages = get_response(audio, return_messages=True) - chat_text = "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages]) - with open(voice_path, 'rb') as f: - voice_bytes = f.read() - return voice_bytes, chat_text - #global messages - - """ - gradio output file doesn't have .wav so rename the file to the correct format - """ - extension = ".wav" - audiofomated = f"{audio}{extension}" - os.rename(audio,audiofomated) - - """ - pass the audio file to whisper to transcribe - - """ - audio_file = open(audiofomated, "rb") - transcript = openai.Audio.transcribe("whisper-1", audio_file) - - - """ - run cosin similarity to find context - """ - ### Input the question and search for the relavent text - index = pinecone.Index(index_name) - query = openai.Embedding.create(input=transcript["text"], model=EMBEDDING_MODEL)["data"][0]["embedding"] # embed the user query into an embedding vector - res = index.query(query, top_k=3, include_metadata=True) # run cosin similarity to search the most relevant embeded content; this is done in pinecone only - contexts = [ - x['metadata']['text'] for x in res['matches'] - ] - merged_context = "".join(contexts) - contextwithQuestion = "Context: " + "\n"+ merged_context + "*End of the context*" + "\n\n" + "Question: " + transcript["text"] - - - """ - pass the transcripted text to GPT - """ - messages = [ - {"role": "system", - "content": - "You are Elvire. 
Forest oracle dedicated to share her knowledge with accidental strangers.\ - "} -] - messages.append({"role": "user", "content":contextwithQuestion}) ## add user input to the list of message - - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages - ) ## pass the list of message to GPT - - messages.append({"role": "assistant", "content":response["choices"][0]["message"]["content"]}) ## add GPT response to the list of message - text2speech(response["choices"][0]["message"]["content"]) ## create mp3 voice output - - voice_path = os.path.abspath("output.mp3") - - return voice_path, "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages]) - -output_text = gr.outputs.Textbox(label="Chat Messages") - -audio_input = gr.inputs.Audio(source="microphone", type="filepath", label="Speak here...") -chat_output = gr.outputs.Textbox(label="Chat Messages") -audio_output = gr.outputs.Audio(type="bytes", label="Synthesized Voice") - -gr.Interface(fn=transcribe, - inputs=audio_input, - outputs=[audio_output, chat_output], - live=True, - allow_flagging=False).launch() \ No newline at end of file diff --git a/spaces/Amrrs/pdf-table-extractor/app.py b/spaces/Amrrs/pdf-table-extractor/app.py deleted file mode 100644 index 7f439d37fa694685b129aee76553768f81f5af24..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/pdf-table-extractor/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import streamlit as st # data app development -import subprocess # process in the os -from subprocess import STDOUT, check_call #os process manipuation -import os #os process manipuation -import base64 # byte object into a pdf file -import camelot as cam # extracting tables from PDFs - -# to run this only once and it's cached -@st.cache -def gh(): - """install ghostscript on the linux machine""" - proc = subprocess.Popen('apt-get install -y ghostscript', shell=True, stdin=None, stdout=open(os.devnull,"wb"), stderr=STDOUT, executable="/bin/bash") - proc.wait() - -gh() - - - -st.title("PDF Table Extractor") -st.subheader("with `Camelot` Python library") - -st.image("https://raw.githubusercontent.com/camelot-dev/camelot/master/docs/_static/camelot.png", width=200) - - -# file uploader on streamlit - -input_pdf = st.file_uploader(label = "upload your pdf here", type = 'pdf') - -st.markdown("### Page Number") - -page_number = st.text_input("Enter the page # from where you want to extract the PDF eg: 3", value = 1) - -# run this only when a PDF is uploaded - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - table = cam.read_pdf("input.pdf", pages = page_number, flavor = 'stream') - - st.markdown("### Number of Tables") - - # display the output after parsing - st.write(table) - - # display the table - - if len(table) > 0: - - # extract the index value of the table - - option = st.selectbox(label = "Select the Table to be displayed", options = range(len(table) + 1)) - - st.markdown('### Output Table') - - # display the dataframe - - st.dataframe(table[int(option)-1].df) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py deleted file mode 100644 index 488e3b615318787751cab3211e38dd9471c666be..0000000000000000000000000000000000000000 --- 
a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/center_region_assigner.py +++ /dev/null @@ -1,335 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def scale_boxes(bboxes, scale): - """Expand an array of boxes by a given scale. - - Args: - bboxes (Tensor): Shape (m, 4) - scale (float): The scale factor of bboxes - - Returns: - (Tensor): Shape (m, 4). Scaled bboxes - """ - assert bboxes.size(1) == 4 - w_half = (bboxes[:, 2] - bboxes[:, 0]) * .5 - h_half = (bboxes[:, 3] - bboxes[:, 1]) * .5 - x_c = (bboxes[:, 2] + bboxes[:, 0]) * .5 - y_c = (bboxes[:, 3] + bboxes[:, 1]) * .5 - - w_half *= scale - h_half *= scale - - boxes_scaled = torch.zeros_like(bboxes) - boxes_scaled[:, 0] = x_c - w_half - boxes_scaled[:, 2] = x_c + w_half - boxes_scaled[:, 1] = y_c - h_half - boxes_scaled[:, 3] = y_c + h_half - return boxes_scaled - - -def is_located_in(points, bboxes): - """Are points located in bboxes. - - Args: - points (Tensor): Points, shape: (m, 2). - bboxes (Tensor): Bounding boxes, shape: (n, 4). - - Return: - Tensor: Flags indicating if points are located in bboxes, shape: (m, n). - """ - assert points.size(1) == 2 - assert bboxes.size(1) == 4 - return (points[:, 0].unsqueeze(1) > bboxes[:, 0].unsqueeze(0)) & \ - (points[:, 0].unsqueeze(1) < bboxes[:, 2].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) > bboxes[:, 1].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) < bboxes[:, 3].unsqueeze(0)) - - -def bboxes_area(bboxes): - """Compute the area of an array of bboxes. - - Args: - bboxes (Tensor): The coordinates ox bboxes. Shape: (m, 4) - - Returns: - Tensor: Area of the bboxes. Shape: (m, ) - """ - assert bboxes.size(1) == 4 - w = (bboxes[:, 2] - bboxes[:, 0]) - h = (bboxes[:, 3] - bboxes[:, 1]) - areas = w * h - return areas - - -@BBOX_ASSIGNERS.register_module() -class CenterRegionAssigner(BaseAssigner): - """Assign pixels at the center region of a bbox as positive. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - -1: negative samples - - semi-positive numbers: positive sample, index (0-based) of assigned gt - - Args: - pos_scale (float): Threshold within which pixels are - labelled as positive. - neg_scale (float): Threshold above which pixels are - labelled as positive. - min_pos_iof (float): Minimum iof of a pixel with a gt to be - labelled as positive. Default: 1e-2 - ignore_gt_scale (float): Threshold within which the pixels - are ignored when the gt is labelled as shadowed. Default: 0.5 - foreground_dominate (bool): If True, the bbox will be assigned as - positive when a gt's kernel region overlaps with another's shadowed - (ignored) region, otherwise it is set as ignored. Default to False. - """ - - def __init__(self, - pos_scale, - neg_scale, - min_pos_iof=1e-2, - ignore_gt_scale=0.5, - foreground_dominate=False, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_scale = pos_scale - self.neg_scale = neg_scale - self.min_pos_iof = min_pos_iof - self.ignore_gt_scale = ignore_gt_scale - self.foreground_dominate = foreground_dominate - self.iou_calculator = build_iou_calculator(iou_calculator) - - def get_gt_priorities(self, gt_bboxes): - """Get gt priorities according to their areas. - - Smaller gt has higher priority. - - Args: - gt_bboxes (Tensor): Ground truth boxes, shape (k, 4). 
- - Returns: - Tensor: The priority of gts so that gts with larger priority is \ - more likely to be assigned. Shape (k, ) - """ - gt_areas = bboxes_area(gt_bboxes) - # Rank all gt bbox areas. Smaller objects has larger priority - _, sort_idx = gt_areas.sort(descending=True) - sort_idx = sort_idx.argsort() - return sort_idx - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assigns gts to every bbox (proposal/anchor), each bbox \ - will be assigned with -1, or a semi-positive number. -1 means \ - negative sample, semi-positive number is the index (0-based) of \ - assigned gt. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (tensor, optional): Label of gt_bboxes, shape (num_gts,). - - Returns: - :obj:`AssignResult`: The assigned result. Note that \ - shadowed_labels of shape (N, 2) is also added as an \ - `assign_result` attribute. `shadowed_labels` is a tensor \ - composed of N pairs of anchor_ind, class_label], where N \ - is the number of anchors that lie in the outer region of a \ - gt, anchor_ind is the shadowed anchor index and class_label \ - is the shadowed class label. - - Example: - >>> self = CenterRegionAssigner(0.2, 0.2) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - # There are in total 5 steps in the pixel assignment - # 1. Find core (the center region, say inner 0.2) - # and shadow (the relatively ourter part, say inner 0.2-0.5) - # regions of every gt. - # 2. Find all prior bboxes that lie in gt_core and gt_shadow regions - # 3. Assign prior bboxes in gt_core with a one-hot id of the gt in - # the image. - # 3.1. For overlapping objects, the prior bboxes in gt_core is - # assigned with the object with smallest area - # 4. Assign prior bboxes with class label according to its gt id. - # 4.1. Assign -1 to prior bboxes lying in shadowed gts - # 4.2. Assign positive prior boxes with the corresponding label - # 5. Find pixels lying in the shadow of an object and assign them with - # background label, but set the loss weight of its corresponding - # gt to zero. - assert bboxes.size(1) == 4, 'bboxes must have size of 4' - # 1. Find core positive and shadow region of every gt - gt_core = scale_boxes(gt_bboxes, self.pos_scale) - gt_shadow = scale_boxes(gt_bboxes, self.neg_scale) - - # 2. Find prior bboxes that lie in gt_core and gt_shadow regions - bbox_centers = (bboxes[:, 2:4] + bboxes[:, 0:2]) / 2 - # The center points lie within the gt boxes - is_bbox_in_gt = is_located_in(bbox_centers, gt_bboxes) - # Only calculate bbox and gt_core IoF. 
This enables small prior bboxes - # to match large gts - bbox_and_gt_core_overlaps = self.iou_calculator( - bboxes, gt_core, mode='iof') - # The center point of effective priors should be within the gt box - is_bbox_in_gt_core = is_bbox_in_gt & ( - bbox_and_gt_core_overlaps > self.min_pos_iof) # shape (n, k) - - is_bbox_in_gt_shadow = ( - self.iou_calculator(bboxes, gt_shadow, mode='iof') > - self.min_pos_iof) - # Rule out center effective positive pixels - is_bbox_in_gt_shadow &= (~is_bbox_in_gt_core) - - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - if num_gts == 0 or num_bboxes == 0: - # If no gts exist, assign all pixels to negative - assigned_gt_ids = \ - is_bbox_in_gt_core.new_zeros((num_bboxes,), - dtype=torch.long) - pixels_in_gt_shadow = assigned_gt_ids.new_empty((0, 2)) - else: - # Step 3: assign a one-hot gt id to each pixel, and smaller objects - # have high priority to assign the pixel. - sort_idx = self.get_gt_priorities(gt_bboxes) - assigned_gt_ids, pixels_in_gt_shadow = \ - self.assign_one_hot_gt_indices(is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=sort_idx) - - if gt_bboxes_ignore is not None and gt_bboxes_ignore.numel() > 0: - # No ground truth or boxes, return empty assignment - gt_bboxes_ignore = scale_boxes( - gt_bboxes_ignore, scale=self.ignore_gt_scale) - is_bbox_in_ignored_gts = is_located_in(bbox_centers, - gt_bboxes_ignore) - is_bbox_in_ignored_gts = is_bbox_in_ignored_gts.any(dim=1) - assigned_gt_ids[is_bbox_in_ignored_gts] = -1 - - # 4. Assign prior bboxes with class label according to its gt id. - assigned_labels = None - shadowed_pixel_labels = None - if gt_labels is not None: - # Default assigned label is the background (-1) - assigned_labels = assigned_gt_ids.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_ids > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[assigned_gt_ids[pos_inds] - - 1] - # 5. Find pixels lying in the shadow of an object - shadowed_pixel_labels = pixels_in_gt_shadow.clone() - if pixels_in_gt_shadow.numel() > 0: - pixel_idx, gt_idx =\ - pixels_in_gt_shadow[:, 0], pixels_in_gt_shadow[:, 1] - assert (assigned_gt_ids[pixel_idx] != gt_idx).all(), \ - 'Some pixels are dually assigned to ignore and gt!' - shadowed_pixel_labels[:, 1] = gt_labels[gt_idx - 1] - override = ( - assigned_labels[pixel_idx] == shadowed_pixel_labels[:, 1]) - if self.foreground_dominate: - # When a pixel is both positive and shadowed, set it as pos - shadowed_pixel_labels = shadowed_pixel_labels[~override] - else: - # When a pixel is both pos and shadowed, set it as shadowed - assigned_labels[pixel_idx[override]] = -1 - assigned_gt_ids[pixel_idx[override]] = 0 - - assign_result = AssignResult( - num_gts, assigned_gt_ids, None, labels=assigned_labels) - # Add shadowed_labels as assign_result property. Shape: (num_shadow, 2) - assign_result.set_extra_property('shadowed_labels', - shadowed_pixel_labels) - return assign_result - - def assign_one_hot_gt_indices(self, - is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=None): - """Assign only one gt index to each prior box. - - Gts with large gt_priority are more likely to be assigned. - - Args: - is_bbox_in_gt_core (Tensor): Bool tensor indicating the bbox center - is in the core area of a gt (e.g. 0-0.2). - Shape: (num_prior, num_gt). - is_bbox_in_gt_shadow (Tensor): Bool tensor indicating the bbox - center is in the shadowed area of a gt (e.g. 0.2-0.5). - Shape: (num_prior, num_gt). 
- gt_priority (Tensor): Priorities of gts. The gt with a higher - priority is more likely to be assigned to the bbox when the bbox - match with multiple gts. Shape: (num_gt, ). - - Returns: - tuple: Returns (assigned_gt_inds, shadowed_gt_inds). - - - assigned_gt_inds: The assigned gt index of each prior bbox \ - (i.e. index from 1 to num_gts). Shape: (num_prior, ). - - shadowed_gt_inds: shadowed gt indices. It is a tensor of \ - shape (num_ignore, 2) with first column being the \ - shadowed prior bbox indices and the second column the \ - shadowed gt indices (1-based). - """ - num_bboxes, num_gts = is_bbox_in_gt_core.shape - - if gt_priority is None: - gt_priority = torch.arange( - num_gts, device=is_bbox_in_gt_core.device) - assert gt_priority.size(0) == num_gts - # The bigger gt_priority, the more preferable to be assigned - # The assigned inds are by default 0 (background) - assigned_gt_inds = is_bbox_in_gt_core.new_zeros((num_bboxes, ), - dtype=torch.long) - # Shadowed bboxes are assigned to be background. But the corresponding - # label is ignored during loss calculation, which is done through - # shadowed_gt_inds - shadowed_gt_inds = torch.nonzero(is_bbox_in_gt_shadow, as_tuple=False) - if is_bbox_in_gt_core.sum() == 0: # No gt match - shadowed_gt_inds[:, 1] += 1 # 1-based. For consistency issue - return assigned_gt_inds, shadowed_gt_inds - - # The priority of each prior box and gt pair. If one prior box is - # matched bo multiple gts. Only the pair with the highest priority - # is saved - pair_priority = is_bbox_in_gt_core.new_full((num_bboxes, num_gts), - -1, - dtype=torch.long) - - # Each bbox could match with multiple gts. - # The following codes deal with this situation - # Matched bboxes (to any gt). Shape: (num_pos_anchor, ) - inds_of_match = torch.any(is_bbox_in_gt_core, dim=1) - # The matched gt index of each positive bbox. Length >= num_pos_anchor - # , since one bbox could match multiple gts - matched_bbox_gt_inds = torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)[:, 1] - # Assign priority to each bbox-gt pair. - pair_priority[is_bbox_in_gt_core] = gt_priority[matched_bbox_gt_inds] - _, argmax_priority = pair_priority[inds_of_match].max(dim=1) - assigned_gt_inds[inds_of_match] = argmax_priority + 1 # 1-based - # Zero-out the assigned anchor box to filter the shadowed gt indices - is_bbox_in_gt_core[inds_of_match, argmax_priority] = 0 - # Concat the shadowed indices due to overlapping with that out side of - # effective scale. shape: (total_num_ignore, 2) - shadowed_gt_inds = torch.cat( - (shadowed_gt_inds, torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)), - dim=0) - # `is_bbox_in_gt_core` should be changed back to keep arguments intact. 
- is_bbox_in_gt_core[inds_of_match, argmax_priority] = 1 - # 1-based shadowed gt indices, to be consistent with `assigned_gt_inds` - if shadowed_gt_inds.numel() > 0: - shadowed_gt_inds[:, 1] += 1 - return assigned_gt_inds, shadowed_gt_inds diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 155e28f42194112703bb21473e5e3dd0fca40d49..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/gcnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index c6e7e58508f31627766b8ab748bd81cd51c77eca..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py deleted file mode 100644 index bfa5d4f580b65d40c0dfa3b32ec6b5d940783f03..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-chat-stream.py +++ /dev/null @@ -1,112 +0,0 @@ -import asyncio -import html -import json -import sys - -try: - import websockets -except ImportError: - print("Websockets package not found. Make sure it's installed.") - -# For local streaming, the websockets are hosted without ssl - ws:// -HOST = 'localhost:5005' -URI = f'ws://{HOST}/api/v1/chat-stream' - -# For reverse-proxied streaming, the remote will likely host with ssl - wss:// -# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream' - - -async def run(user_input, history): - # Note: the selected defaults change from time to time. - request = { - 'user_input': user_input, - 'max_new_tokens': 250, - 'auto_max_new_tokens': False, - 'max_tokens_second': 0, - 'history': history, - 'mode': 'instruct', # Valid options: 'chat', 'chat-instruct', 'instruct' - 'character': 'Example', - 'instruction_template': 'Vicuna-v1.1', # Will get autodetected if unset - 'your_name': 'You', - # 'name1': 'name of user', # Optional - # 'name2': 'name of character', # Optional - # 'context': 'character context', # Optional - # 'greeting': 'greeting', # Optional - # 'name1_instruct': 'You', # Optional - # 'name2_instruct': 'Assistant', # Optional - # 'context_instruct': 'context_instruct', # Optional - # 'turn_template': 'turn_template', # Optional - 'regenerate': False, - '_continue': False, - 'chat_instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>', - - # Generation params. If 'preset' is set to different than 'None', the values - # in presets/preset-name.yaml are used instead of the individual numbers. 
- 'preset': 'None', - 'do_sample': True, - 'temperature': 0.7, - 'top_p': 0.1, - 'typical_p': 1, - 'epsilon_cutoff': 0, # In units of 1e-4 - 'eta_cutoff': 0, # In units of 1e-4 - 'tfs': 1, - 'top_a': 0, - 'repetition_penalty': 1.18, - 'repetition_penalty_range': 0, - 'top_k': 40, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, - 'mirostat_mode': 0, - 'mirostat_tau': 5, - 'mirostat_eta': 0.1, - 'grammar_string': '', - 'guidance_scale': 1, - 'negative_prompt': '', - - 'seed': -1, - 'add_bos_token': True, - 'truncation_length': 2048, - 'ban_eos_token': False, - 'custom_token_bans': '', - 'skip_special_tokens': True, - 'stopping_strings': [] - } - - async with websockets.connect(URI, ping_interval=None) as websocket: - await websocket.send(json.dumps(request)) - - while True: - incoming_data = await websocket.recv() - incoming_data = json.loads(incoming_data) - - match incoming_data['event']: - case 'text_stream': - yield incoming_data['history'] - case 'stream_end': - return - - -async def print_response_stream(user_input, history): - cur_len = 0 - async for new_history in run(user_input, history): - cur_message = new_history['visible'][-1][1][cur_len:] - cur_len += len(cur_message) - print(html.unescape(cur_message), end='') - sys.stdout.flush() # If we don't flush, we won't see tokens in realtime. - - -if __name__ == '__main__': - user_input = "Please give me a step-by-step guide on how to plant a tree in my backyard." - - # Basic example - history = {'internal': [], 'visible': []} - - # "Continue" example. Make sure to set '_continue' to True above - # arr = [user_input, 'Surely, here is'] - # history = {'internal': [arr], 'visible': [arr]} - - asyncio.run(print_response_stream(user_input, history)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py deleted file mode 100644 index 306ab227d093c29dd9fb62b49b7cbd140b143788..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py +++ /dev/null @@ -1,148 +0,0 @@ -import time -from abc import abstractmethod -from typing import List, Tuple - -import torch -from huggingface_hub import hf_hub_download -from PIL import Image -from transformers import CLIPImageProcessor, CLIPVisionModel - -from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline -from modules import shared -from modules.logging_colors import logger -from modules.text_generation import encode - - -class LLaVA_v0_Pipeline(AbstractMultimodalPipeline): - CLIP_REPO = "openai/clip-vit-large-patch14" - - def __init__(self, params: dict) -> None: - super().__init__() - self.clip_device = self._get_device("vision_device", params) - self.clip_dtype = self._get_dtype("vision_bits", params) - self.projector_device = self._get_device("projector_device", params) - self.projector_dtype = self._get_dtype("projector_bits", params) - self.image_processor, self.vision_tower, self.mm_projector = self._load_models() - - def _load_models(self): - start_ts = time.time() - - logger.info(f"LLaVA - Loading CLIP from {LLaVA_v0_Pipeline.CLIP_REPO} as {self.clip_dtype} on {self.clip_device}...") - image_processor = CLIPImageProcessor.from_pretrained(LLaVA_v0_Pipeline.CLIP_REPO, torch_dtype=self.clip_dtype) - vision_tower = 
CLIPVisionModel.from_pretrained(LLaVA_v0_Pipeline.CLIP_REPO, torch_dtype=self.clip_dtype).to(self.clip_device) - - logger.info(f"LLaVA - Loading projector from {self.llava_projector_repo()} as {self.projector_dtype} on {self.projector_device}...") - projector_path = hf_hub_download(self.llava_projector_repo(), self.llava_projector_filename()) - mm_projector = torch.nn.Linear(*self.llava_projector_shape()) - projector_data = torch.load(projector_path) - mm_projector.weight = torch.nn.Parameter(projector_data['model.mm_projector.weight'].to(dtype=self.projector_dtype), False) - mm_projector.bias = torch.nn.Parameter(projector_data['model.mm_projector.bias'].to(dtype=self.projector_dtype), False) - mm_projector = mm_projector.to(self.projector_device) - - logger.info(f"LLaVA supporting models loaded, took {time.time() - start_ts:.2f} seconds") - return image_processor, vision_tower, mm_projector - - @staticmethod - def image_start() -> str: - return "" - - @staticmethod - def image_end() -> str: - return "" - - @staticmethod - def num_image_embeds() -> int: - return 256 - - @staticmethod - def embed_tokens(input_ids: torch.Tensor) -> torch.Tensor: - for attr in ['', 'model', 'model.model', 'model.model.model']: - tmp = getattr(shared.model, attr, None) if attr != '' else shared.model - if tmp is not None and hasattr(tmp, 'embed_tokens'): - func = tmp.embed_tokens - break - else: - raise ValueError('The embed_tokens method has not been found for this loader.') - - return func(input_ids).to(shared.model.device, dtype=shared.model.dtype) - - @staticmethod - def placeholder_embeddings() -> torch.Tensor: - return LLaVA_v0_Pipeline.embed_tokens(encode(""*256, add_bos_token=False)[0]) - - def embed_images(self, images: List[Image.Image]) -> torch.Tensor: - images = self.image_processor(images, return_tensors='pt')['pixel_values'] - images = images.to(self.clip_device, dtype=self.clip_dtype) - - with torch.no_grad(): - image_forward_outs = self.vision_tower(images, output_hidden_states=True) - select_hidden_state_layer = -2 - select_hidden_state = image_forward_outs.hidden_states[select_hidden_state_layer] - image_features = select_hidden_state[:, 1:].to(self.projector_device, dtype=self.projector_dtype) - image_features = self.mm_projector(image_features) - return image_features.to(shared.model.device, dtype=shared.model.dtype) - - @staticmethod - @abstractmethod - def llava_projector_repo() -> str: - pass - - @staticmethod - @abstractmethod - def llava_projector_filename() -> str: - pass - - @staticmethod - @abstractmethod - def llava_projector_shape() -> Tuple[int, int]: - pass - - -class LLaVA_v0_13B_Pipeline(LLaVA_v0_Pipeline): - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-13b" - - @staticmethod - def placeholder_token_id() -> int: - return 32000 - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 5120) - - @staticmethod - def llava_projector_filename() -> str: - return "mm_projector.bin" - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/LLaVA-13b-delta-v0" - - -class LLaVA_v0_7B_Pipeline(LLaVA_v0_Pipeline): - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-7b" - - @staticmethod - def placeholder_token_id() -> int: - return 32001 - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 4096) - - @staticmethod - def llava_projector_filename() -> str: - return 
"mm_projector.bin" - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/LLaVA-7b-delta-v0" diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py deleted file mode 100644 index c9ea7d0d2f3d2fcf66d6f6e2aa0eb1a97a524bb6..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/setup.py +++ /dev/null @@ -1,21 +0,0 @@ -import os - -import pkg_resources -from setuptools import setup, find_packages - -setup( - name="clip", - py_modules=["clip"], - version="1.0", - description="", - author="OpenAI", - packages=find_packages(exclude=["tests*"]), - install_requires=[ - str(r) - for r in pkg_resources.parse_requirements( - open(os.path.join(os.path.dirname(__file__), "requirements.txt")) - ) - ], - include_package_data=True, - extras_require={'dev': ['pytest']}, -) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py deleted file mode 100644 index a6a8fc712fba02b033dea13bfe33204b8d3c9139..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/hed/__init__.py +++ /dev/null @@ -1,96 +0,0 @@ -# This is an improved version and model of HED edge detection with Apache License, Version 2.0. -# Please use this implementation in your products -# This implementation may produce slightly different results from Saining Xie's official implementations, -# but it generates smoother edges and is more suitable for ControlNet as well as other image-to-image translations. -# Different from official models and other implementations, this is an RGB-input model (rather than BGR) -# and in this way it works better for gradio's RGB protocol - -import os -import cv2 -import torch -import numpy as np - -from einops import rearrange -from annotator.util import annotator_ckpts_path - - -class DoubleConvBlock(torch.nn.Module): - def __init__(self, input_channel, output_channel, layer_number): - super().__init__() - self.convs = torch.nn.Sequential() - self.convs.append(torch.nn.Conv2d(in_channels=input_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1)) - for i in range(1, layer_number): - self.convs.append(torch.nn.Conv2d(in_channels=output_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1)) - self.projection = torch.nn.Conv2d(in_channels=output_channel, out_channels=1, kernel_size=(1, 1), stride=(1, 1), padding=0) - - def __call__(self, x, down_sampling=False): - h = x - if down_sampling: - h = torch.nn.functional.max_pool2d(h, kernel_size=(2, 2), stride=(2, 2)) - for conv in self.convs: - h = conv(h) - h = torch.nn.functional.relu(h) - return h, self.projection(h) - - -class ControlNetHED_Apache2(torch.nn.Module): - def __init__(self): - super().__init__() - self.norm = torch.nn.Parameter(torch.zeros(size=(1, 3, 1, 1))) - self.block1 = DoubleConvBlock(input_channel=3, output_channel=64, layer_number=2) - self.block2 = DoubleConvBlock(input_channel=64, output_channel=128, layer_number=2) - self.block3 = DoubleConvBlock(input_channel=128, output_channel=256, layer_number=3) - self.block4 = DoubleConvBlock(input_channel=256, output_channel=512, layer_number=3) - self.block5 = DoubleConvBlock(input_channel=512, output_channel=512, layer_number=3) - - def __call__(self, x): - h = x - self.norm - h, projection1 = self.block1(h) - h, projection2 = 
self.block2(h, down_sampling=True) - h, projection3 = self.block3(h, down_sampling=True) - h, projection4 = self.block4(h, down_sampling=True) - h, projection5 = self.block5(h, down_sampling=True) - return projection1, projection2, projection3, projection4, projection5 - - -class HEDdetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth" - modelpath = os.path.join(annotator_ckpts_path, "ControlNetHED.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - self.netNetwork = ControlNetHED_Apache2().float().cuda().eval() - self.netNetwork.load_state_dict(torch.load(modelpath)) - - def __call__(self, input_image): - assert input_image.ndim == 3 - H, W, C = input_image.shape - with torch.no_grad(): - image_hed = torch.from_numpy(input_image.copy()).float().cuda() - image_hed = rearrange(image_hed, 'h w c -> 1 c h w') - edges = self.netNetwork(image_hed) - edges = [e.detach().cpu().numpy().astype(np.float32)[0, 0] for e in edges] - edges = [cv2.resize(e, (W, H), interpolation=cv2.INTER_LINEAR) for e in edges] - edges = np.stack(edges, axis=2) - edge = 1 / (1 + np.exp(-np.mean(edges, axis=2).astype(np.float64))) - edge = (edge * 255.0).clip(0, 255).astype(np.uint8) - return edge - - -def nms(x, t, s): - x = cv2.GaussianBlur(x.astype(np.float32), (0, 0), s) - - f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8) - f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8) - f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8) - f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8) - - y = np.zeros_like(x) - - for f in [f1, f2, f3, f4]: - np.putmask(y, cv2.dilate(x, kernel=f) == x, x) - - z = np.zeros_like(y, dtype=np.uint8) - z[y > t] = 255 - return z diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py deleted file mode 100644 index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/musicgen.py +++ /dev/null @@ -1,362 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. 
- """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: float = 30): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.max_duration = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device=None): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - if not os.path.isfile(name) and not os.path.isdir(name): - raise ValueError( - f"{name} is not a valid checkpoint name. " - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - if name == 'melody': - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 18): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. 
more than 30 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. 
- - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. 
- """ - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - # now this gets a bit messier, we need to handle prompts, - # melody conditioning etc. - ref_wavs = [attr.wav['self_wav'] for attr in attributes] - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - for attr, ref_wav in zip(attributes, ref_wavs): - wav_length = ref_wav.length.item() - if wav_length == 0: - continue - # We will extend the wav periodically if it not long enough. - # we have to do it here rather than in conditioners.py as otherwise - # we wouldn't have the full wav. - initial_position = int(time_offset * self.sample_rate) - wav_target_length = int(self.max_duration * self.sample_rate) - print(initial_position / self.sample_rate, wav_target_length / self.sample_rate) - positions = torch.arange(initial_position, - initial_position + wav_target_length, device=self.device) - attr.wav['self_wav'] = WavCondition( - ref_wav[0][:, positions % wav_length], - torch.full_like(ref_wav[1], wav_target_length)) - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py b/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py deleted file mode 100644 index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000 --- a/spaces/AsakuraMizu/moe-tts/monotonic_align/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) - diff --git a/spaces/Bart92/RVC_HF/demucs/tasnet.py b/spaces/Bart92/RVC_HF/demucs/tasnet.py deleted file mode 100644 index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/tasnet.py +++ /dev/null @@ -1,452 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -# Created on 2018/12 -# Author: Kaituo XU -# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels -# Here is the original license: -# The MIT License (MIT) -# -# Copyright (c) 2018 Kaituo XU -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .utils import capture_init - -EPS = 1e-8 - - -def overlap_and_add(signal, frame_step): - outer_dimensions = signal.size()[:-2] - frames, frame_length = signal.size()[-2:] - - subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor - subframe_step = frame_step // subframe_length - subframes_per_frame = frame_length // subframe_length - output_size = frame_step * (frames - 1) + frame_length - output_subframes = output_size // subframe_length - - subframe_signal = signal.view(*outer_dimensions, -1, subframe_length) - - frame = torch.arange(0, output_subframes, - device=signal.device).unfold(0, subframes_per_frame, subframe_step) - frame = frame.long() # signal may in GPU or CPU - frame = frame.contiguous().view(-1) - - result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length) - result.index_add_(-2, frame, subframe_signal) - result = result.view(*outer_dimensions, -1) - return result - - -class ConvTasNet(nn.Module): - @capture_init - def __init__(self, - sources, - N=256, - L=20, - B=256, - H=512, - P=3, - X=8, - R=4, - audio_channels=2, - norm_type="gLN", - causal=False, - mask_nonlinear='relu', - samplerate=44100, - segment_length=44100 * 2 * 4): - """ - Args: - sources: list of sources - N: Number of filters in autoencoder - L: Length of the filters (in samples) - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(ConvTasNet, self).__init__() - # Hyper-parameter - self.sources = sources - self.C = len(sources) - self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R - self.norm_type = norm_type - self.causal = causal - self.mask_nonlinear = mask_nonlinear - self.audio_channels = audio_channels - self.samplerate = samplerate - self.segment_length = segment_length - # Components - self.encoder = Encoder(L, N, audio_channels) - self.separator = TemporalConvNet( - N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear) - self.decoder = Decoder(N, L, audio_channels) - # init - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def valid_length(self, length): - return length - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - est_source: [M, C, T] - """ - mixture_w = self.encoder(mixture) - est_mask = self.separator(mixture_w) - est_source = self.decoder(mixture_w, est_mask) - - # T changed after conv1d in encoder, fix it here - T_origin = mixture.size(-1) - T_conv = est_source.size(-1) - est_source = F.pad(est_source, (0, T_origin - T_conv)) - return est_source - - -class Encoder(nn.Module): - """Estimation of the nonnegative mixture weight by a 1-D conv layer. 
- """ - def __init__(self, L, N, audio_channels): - super(Encoder, self).__init__() - # Hyper-parameter - self.L, self.N = L, N - # Components - # 50% overlap - self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False) - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1 - """ - mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K] - return mixture_w - - -class Decoder(nn.Module): - def __init__(self, N, L, audio_channels): - super(Decoder, self).__init__() - # Hyper-parameter - self.N, self.L = N, L - self.audio_channels = audio_channels - # Components - self.basis_signals = nn.Linear(N, audio_channels * L, bias=False) - - def forward(self, mixture_w, est_mask): - """ - Args: - mixture_w: [M, N, K] - est_mask: [M, C, N, K] - Returns: - est_source: [M, C, T] - """ - # D = W * M - source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K] - source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N] - # S = DV - est_source = self.basis_signals(source_w) # [M, C, K, ac * L] - m, c, k, _ = est_source.size() - est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous() - est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T - return est_source - - -class TemporalConvNet(nn.Module): - def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'): - """ - Args: - N: Number of filters in autoencoder - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - C: Number of speakers - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(TemporalConvNet, self).__init__() - # Hyper-parameter - self.C = C - self.mask_nonlinear = mask_nonlinear - # Components - # [M, N, K] -> [M, N, K] - layer_norm = ChannelwiseLayerNorm(N) - # [M, N, K] -> [M, B, K] - bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False) - # [M, B, K] -> [M, B, K] - repeats = [] - for r in range(R): - blocks = [] - for x in range(X): - dilation = 2**x - padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2 - blocks += [ - TemporalBlock(B, - H, - P, - stride=1, - padding=padding, - dilation=dilation, - norm_type=norm_type, - causal=causal) - ] - repeats += [nn.Sequential(*blocks)] - temporal_conv_net = nn.Sequential(*repeats) - # [M, B, K] -> [M, C*N, K] - mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False) - # Put together - self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net, - mask_conv1x1) - - def forward(self, mixture_w): - """ - Keep this API same with TasNet - Args: - mixture_w: [M, N, K], M is batch size - returns: - est_mask: [M, C, N, K] - """ - M, N, K = mixture_w.size() - score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K] - score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K] - if self.mask_nonlinear == 'softmax': - est_mask = F.softmax(score, dim=1) - elif self.mask_nonlinear == 'relu': - est_mask = F.relu(score) - else: - raise ValueError("Unsupported mask non-linear function") - return est_mask - - -class TemporalBlock(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(TemporalBlock, self).__init__() - 
# [M, B, K] -> [M, H, K] - conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False) - prelu = nn.PReLU() - norm = chose_norm(norm_type, out_channels) - # [M, H, K] -> [M, B, K] - dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding, - dilation, norm_type, causal) - # Put together - self.net = nn.Sequential(conv1x1, prelu, norm, dsconv) - - def forward(self, x): - """ - Args: - x: [M, B, K] - Returns: - [M, B, K] - """ - residual = x - out = self.net(x) - # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad? - return out + residual # look like w/o F.relu is better than w/ F.relu - # return F.relu(out + residual) - - -class DepthwiseSeparableConv(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(DepthwiseSeparableConv, self).__init__() - # Use `groups` option to implement depthwise convolution - # [M, H, K] -> [M, H, K] - depthwise_conv = nn.Conv1d(in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=False) - if causal: - chomp = Chomp1d(padding) - prelu = nn.PReLU() - norm = chose_norm(norm_type, in_channels) - # [M, H, K] -> [M, B, K] - pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False) - # Put together - if causal: - self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv) - else: - self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv) - - def forward(self, x): - """ - Args: - x: [M, H, K] - Returns: - result: [M, B, K] - """ - return self.net(x) - - -class Chomp1d(nn.Module): - """To ensure the output length is the same as the input. - """ - def __init__(self, chomp_size): - super(Chomp1d, self).__init__() - self.chomp_size = chomp_size - - def forward(self, x): - """ - Args: - x: [M, H, Kpad] - Returns: - [M, H, K] - """ - return x[:, :, :-self.chomp_size].contiguous() - - -def chose_norm(norm_type, channel_size): - """The input of normlization will be (M, C, K), where M is batch size, - C is channel size and K is sequence length. - """ - if norm_type == "gLN": - return GlobalLayerNorm(channel_size) - elif norm_type == "cLN": - return ChannelwiseLayerNorm(channel_size) - elif norm_type == "id": - return nn.Identity() - else: # norm_type == "BN": - # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statics - # along M and K, so this BN usage is right. 
- return nn.BatchNorm1d(channel_size) - - -# TODO: Use nn.LayerNorm to impl cLN to speed up -class ChannelwiseLayerNorm(nn.Module): - """Channel-wise Layer Normalization (cLN)""" - def __init__(self, channel_size): - super(ChannelwiseLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - cLN_y: [M, N, K] - """ - mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K] - var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K] - cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return cLN_y - - -class GlobalLayerNorm(nn.Module): - """Global Layer Normalization (gLN)""" - def __init__(self, channel_size): - super(GlobalLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - gLN_y: [M, N, K] - """ - # TODO: in torch 1.0, torch.mean() support dim list - mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1] - var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) - gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return gLN_y - - -if __name__ == "__main__": - torch.manual_seed(123) - M, N, L, T = 2, 3, 4, 12 - K = 2 * T // L - 1 - B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False - mixture = torch.randint(3, (M, T)) - # test Encoder - encoder = Encoder(L, N) - encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size()) - mixture_w = encoder(mixture) - print('mixture', mixture) - print('U', encoder.conv1d_U.weight) - print('mixture_w', mixture_w) - print('mixture_w size', mixture_w.size()) - - # test TemporalConvNet - separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal) - est_mask = separator(mixture_w) - print('est_mask', est_mask) - - # test Decoder - decoder = Decoder(N, L) - est_mask = torch.randint(2, (B, K, C, N)) - est_source = decoder(mixture_w, est_mask) - print('est_source', est_source) - - # test Conv-TasNet - conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type) - est_source = conv_tasnet(mixture) - print('est_source', est_source) - print('est_source size', est_source.size()) diff --git a/spaces/CVPR/CVPR2022_papers/paper_list.py b/spaces/CVPR/CVPR2022_papers/paper_list.py deleted file mode 100644 index e242466fa3d25d428ea8d52f0765474374c6c652..0000000000000000000000000000000000000000 --- a/spaces/CVPR/CVPR2022_papers/paper_list.py +++ /dev/null @@ -1,102 +0,0 @@ -from __future__ import annotations - -import pandas as pd - - -class PaperList: - def __init__(self): - self.table = pd.read_csv('papers.csv') - self._preprcess_table() - - self.table_header = ''' - - Paper - Authors - pdf - Supp - arXiv - GitHub - HF Spaces - HF Models - HF Datasets - ''' - - def _preprcess_table(self) -> None: - self.table['title_lowercase'] = self.table.title.str.lower() - - rows = [] - for row in self.table.itertuples(): - paper = 
f'{row.title}' - pdf = f'pdf' - supp = f'supp' if isinstance( - row.supp, str) else '' - arxiv = f'arXiv' if isinstance( - row.arxiv, str) else '' - github = f'GitHub' if isinstance( - row.github, str) else '' - hf_space = f'Space' if isinstance( - row.hf_space, str) else '' - hf_model = f'Model' if isinstance( - row.hf_model, str) else '' - hf_dataset = f'Dataset' if isinstance( - row.hf_dataset, str) else '' - row = f''' - - {paper} - {row.authors} - {pdf} - {supp} - {arxiv} - {github} - {hf_space} - {hf_model} - {hf_dataset} - ''' - rows.append(row) - self.table['html_table_content'] = rows - - def render(self, search_query: str, case_sensitive: bool, - filter_names: list[str]) -> tuple[int, str]: - df = self.table - if search_query: - if case_sensitive: - df = df[df.title.str.contains(search_query)] - else: - df = df[df.title_lowercase.str.contains(search_query.lower())] - has_supp = 'Supp' in filter_names - has_arxiv = 'arXiv' in filter_names - has_github = 'GitHub' in filter_names - has_hf_space = 'HF Space' in filter_names - has_hf_model = 'HF Model' in filter_names - has_hf_dataset = 'HF Dataset' in filter_names - df = self.filter_table(df, has_supp, has_arxiv, has_github, - has_hf_space, has_hf_model, has_hf_dataset) - return len(df), self.to_html(df, self.table_header) - - @staticmethod - def filter_table(df: pd.DataFrame, has_supp: bool, has_arxiv: bool, - has_github: bool, has_hf_space: bool, has_hf_model: bool, - has_hf_dataset: bool) -> pd.DataFrame: - if has_supp: - df = df[~df.supp.isna()] - if has_arxiv: - df = df[~df.arxiv.isna()] - if has_github: - df = df[~df.github.isna()] - if has_hf_space: - df = df[~df.hf_space.isna()] - if has_hf_model: - df = df[~df.hf_model.isna()] - if has_hf_dataset: - df = df[~df.hf_dataset.isna()] - return df - - @staticmethod - def to_html(df: pd.DataFrame, table_header: str) -> str: - table_data = ''.join(df.html_table_content) - html = f''' - - {table_header} - {table_data} -
          ''' - return html diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_buffers.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_buffers.cpp deleted file mode 100644 index 1bc67ff7b66e86d7bf94de845e5737261f2a1280..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_buffers.cpp +++ /dev/null @@ -1,195 +0,0 @@ -/* - tests/test_buffers.cpp -- supporting Pythons' buffer protocol - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" - -TEST_SUBMODULE(buffers, m) { - // test_from_python / test_to_python: - class Matrix { - public: - Matrix(ssize_t rows, ssize_t cols) : m_rows(rows), m_cols(cols) { - print_created(this, std::to_string(m_rows) + "x" + std::to_string(m_cols) + " matrix"); - m_data = new float[(size_t) (rows*cols)]; - memset(m_data, 0, sizeof(float) * (size_t) (rows * cols)); - } - - Matrix(const Matrix &s) : m_rows(s.m_rows), m_cols(s.m_cols) { - print_copy_created(this, std::to_string(m_rows) + "x" + std::to_string(m_cols) + " matrix"); - m_data = new float[(size_t) (m_rows * m_cols)]; - memcpy(m_data, s.m_data, sizeof(float) * (size_t) (m_rows * m_cols)); - } - - Matrix(Matrix &&s) : m_rows(s.m_rows), m_cols(s.m_cols), m_data(s.m_data) { - print_move_created(this); - s.m_rows = 0; - s.m_cols = 0; - s.m_data = nullptr; - } - - ~Matrix() { - print_destroyed(this, std::to_string(m_rows) + "x" + std::to_string(m_cols) + " matrix"); - delete[] m_data; - } - - Matrix &operator=(const Matrix &s) { - print_copy_assigned(this, std::to_string(m_rows) + "x" + std::to_string(m_cols) + " matrix"); - delete[] m_data; - m_rows = s.m_rows; - m_cols = s.m_cols; - m_data = new float[(size_t) (m_rows * m_cols)]; - memcpy(m_data, s.m_data, sizeof(float) * (size_t) (m_rows * m_cols)); - return *this; - } - - Matrix &operator=(Matrix &&s) { - print_move_assigned(this, std::to_string(m_rows) + "x" + std::to_string(m_cols) + " matrix"); - if (&s != this) { - delete[] m_data; - m_rows = s.m_rows; m_cols = s.m_cols; m_data = s.m_data; - s.m_rows = 0; s.m_cols = 0; s.m_data = nullptr; - } - return *this; - } - - float operator()(ssize_t i, ssize_t j) const { - return m_data[(size_t) (i*m_cols + j)]; - } - - float &operator()(ssize_t i, ssize_t j) { - return m_data[(size_t) (i*m_cols + j)]; - } - - float *data() { return m_data; } - - ssize_t rows() const { return m_rows; } - ssize_t cols() const { return m_cols; } - private: - ssize_t m_rows; - ssize_t m_cols; - float *m_data; - }; - py::class_(m, "Matrix", py::buffer_protocol()) - .def(py::init()) - /// Construct from a buffer - .def(py::init([](py::buffer const b) { - py::buffer_info info = b.request(); - if (info.format != py::format_descriptor::format() || info.ndim != 2) - throw std::runtime_error("Incompatible buffer format!"); - - auto v = new Matrix(info.shape[0], info.shape[1]); - memcpy(v->data(), info.ptr, sizeof(float) * (size_t) (v->rows() * v->cols())); - return v; - })) - - .def("rows", &Matrix::rows) - .def("cols", &Matrix::cols) - - /// Bare bones interface - .def("__getitem__", [](const Matrix &m, std::pair i) { - if (i.first >= m.rows() || i.second >= m.cols()) - throw py::index_error(); - return m(i.first, i.second); - }) - .def("__setitem__", [](Matrix &m, std::pair i, float v) { - if (i.first >= m.rows() || i.second >= m.cols()) - throw py::index_error(); - m(i.first, i.second) = v; - }) - /// Provide buffer access - 
.def_buffer([](Matrix &m) -> py::buffer_info { - return py::buffer_info( - m.data(), /* Pointer to buffer */ - { m.rows(), m.cols() }, /* Buffer dimensions */ - { sizeof(float) * size_t(m.cols()), /* Strides (in bytes) for each index */ - sizeof(float) } - ); - }) - ; - - - // test_inherited_protocol - class SquareMatrix : public Matrix { - public: - SquareMatrix(ssize_t n) : Matrix(n, n) { } - }; - // Derived classes inherit the buffer protocol and the buffer access function - py::class_(m, "SquareMatrix") - .def(py::init()); - - - // test_pointer_to_member_fn - // Tests that passing a pointer to member to the base class works in - // the derived class. - struct Buffer { - int32_t value = 0; - - py::buffer_info get_buffer_info() { - return py::buffer_info(&value, sizeof(value), - py::format_descriptor::format(), 1); - } - }; - py::class_(m, "Buffer", py::buffer_protocol()) - .def(py::init<>()) - .def_readwrite("value", &Buffer::value) - .def_buffer(&Buffer::get_buffer_info); - - - class ConstBuffer { - std::unique_ptr value; - - public: - int32_t get_value() const { return *value; } - void set_value(int32_t v) { *value = v; } - - py::buffer_info get_buffer_info() const { - return py::buffer_info(value.get(), sizeof(*value), - py::format_descriptor::format(), 1); - } - - ConstBuffer() : value(new int32_t{0}) { }; - }; - py::class_(m, "ConstBuffer", py::buffer_protocol()) - .def(py::init<>()) - .def_property("value", &ConstBuffer::get_value, &ConstBuffer::set_value) - .def_buffer(&ConstBuffer::get_buffer_info); - - struct DerivedBuffer : public Buffer { }; - py::class_(m, "DerivedBuffer", py::buffer_protocol()) - .def(py::init<>()) - .def_readwrite("value", (int32_t DerivedBuffer::*) &DerivedBuffer::value) - .def_buffer(&DerivedBuffer::get_buffer_info); - - struct BufferReadOnly { - const uint8_t value = 0; - BufferReadOnly(uint8_t value): value(value) {} - - py::buffer_info get_buffer_info() { - return py::buffer_info(&value, 1); - } - }; - py::class_(m, "BufferReadOnly", py::buffer_protocol()) - .def(py::init()) - .def_buffer(&BufferReadOnly::get_buffer_info); - - struct BufferReadOnlySelect { - uint8_t value = 0; - bool readonly = false; - - py::buffer_info get_buffer_info() { - return py::buffer_info(&value, 1, readonly); - } - }; - py::class_(m, "BufferReadOnlySelect", py::buffer_protocol()) - .def(py::init<>()) - .def_readwrite("value", &BufferReadOnlySelect::value) - .def_readwrite("readonly", &BufferReadOnlySelect::readonly) - .def_buffer(&BufferReadOnlySelect::get_buffer_info); - -} diff --git a/spaces/CVPR/LIVE/pydiffvg/device.py b/spaces/CVPR/LIVE/pydiffvg/device.py deleted file mode 100644 index 420883d60130a8f21e96bae19ba6025ffd0ed55e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pydiffvg/device.py +++ /dev/null @@ -1,25 +0,0 @@ -import torch - -use_gpu = torch.cuda.is_available() -device = torch.device('cuda') if use_gpu else torch.device('cpu') - -def set_use_gpu(v): - global use_gpu - global device - use_gpu = v - if not use_gpu: - device = torch.device('cpu') - -def get_use_gpu(): - global use_gpu - return use_gpu - -def set_device(d): - global device - global use_gpu - device = d - use_gpu = device.type == 'cuda' - -def get_device(): - global device - return device diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/uninitialized_fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/uninitialized_fill.h deleted file mode 100644 index 764de876233a012e5a9de9113c5fb2dac7a22499..0000000000000000000000000000000000000000 --- 
a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/uninitialized_fill.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits uninitialized_fill -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py deleted file mode 100644 index 983a2d9db71a3b2b4980996725fdafb0b412b413..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py +++ /dev/null @@ -1,27 +0,0 @@ -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class SCNetMaskHead(FCNMaskHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetMaskHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if conv_to_res: - assert self.conv_kernel_size == 3 - self.num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - self.num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) diff --git a/spaces/CognitiveLabs/Research-Assistant/agent/research_agent.py b/spaces/CognitiveLabs/Research-Assistant/agent/research_agent.py deleted file mode 100644 index 365d2e562a5a70902f55d1eadf09a6ced1b21684..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/Research-Assistant/agent/research_agent.py +++ /dev/null @@ -1,109 +0,0 @@ -import json -from actions.duck_search import duckduckgo_search -from processing.text import read_txt_files -from agent.llm_utils import llm_response, llm_stream_response -from config import Config -from agent import prompts -import os -import string - -CFG = Config() - - -class ResearchAgent: - def __init__(self, question, agent): - """ Initializes the research assistant with the given question. 
- Args: question (str): The question to research - Returns: None - """ - - self.question = question - self.agent = agent - self.visited_urls = set() - self.search_summary = "" - self.directory_name = ''.join(c for c in question if c.isascii() and c not in string.punctuation)[:100] - self.dir_path = os.path.dirname(f"./outputs/{self.directory_name}/") - - def call_agent(self, action): - messages = [{ - "role": "system", - "content": prompts.generate_agent_role_prompt(self.agent), - }, { - "role": "user", - "content": action, - }] - return llm_response( - model=CFG.fast_llm_model, - messages=messages, - ) - - def call_agent_stream(self, action): - messages = [{ - "role": "system", - "content": prompts.generate_agent_role_prompt(self.agent), - }, { - "role": "user", - "content": action, - }] - yield from llm_stream_response( - model=CFG.fast_llm_model, - messages=messages - ) - - def create_search_queries(self): - """ Creates the search queries for the given question. - Args: None - Returns: list[str]: The search queries for the given question - """ - result = self.call_agent(prompts.generate_search_queries_prompt(self.question)) - return json.loads(result) - - def search_single_query(self, query): - """ Runs the async search for the given query. - Args: query (str): The query to run the async search for - Returns: list[str]: The async search for the given query - """ - return duckduckgo_search(query, max_search_result=3) - - def run_search_summary(self, query): - """ Runs the search summary for the given query. - Args: query (str): The query to run the search summary for - Returns: str: The search summary for the given query - """ - responses = self.search_single_query(query) - - print(f"Searching for {query}") - query = hash(query) - file_path = f"./outputs/{self.directory_name}/research-{query}.txt" - os.makedirs(os.path.dirname(file_path), exist_ok=True) - with open(file_path, "w") as f: - json.dump(responses, f) - print(f"Saved {query} to {file_path}") - return responses - - def search_online(self): - """ Conducts the search for the given question. - Args: None - Returns: str: The search results for the given question - """ - - self.search_summary = read_txt_files(self.dir_path) if os.path.isdir(self.dir_path) else "" - - if not self.search_summary: - search_queries = self.create_search_queries() - for _, query in search_queries.items(): - search_result = self.run_search_summary(query) - self.search_summary += f"=Query=:\n{query}\n=Search Result=:\n{search_result}\n================\n" - - return self.search_summary - - def write_report(self, report_type): - """ Writes the report for the given question. - Args: None - Returns: str: The report for the given question - """ - # yield "Searching online..." - - report_type_func = prompts.get_report_by_type(report_type) - - yield from self.call_agent_stream(report_type_func(self.question, self.search_online())) diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/transforms.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. 
- - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". 
- """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. 
- """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/dcn_v2_im2col_cuda.h b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/dcn_v2_im2col_cuda.h deleted file mode 100644 index c85683198e0f6f908c294aef45314d79d9de8451..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/dcn_v2_im2col_cuda.h +++ /dev/null @@ -1,101 +0,0 @@ - -/*! - ******************* BEGIN Caffe Copyright Notice and Disclaimer **************** - * - * COPYRIGHT - * - * All contributions by the University of California: - * Copyright (c) 2014-2017 The Regents of the University of California (Regents) - * All rights reserved. - * - * All other contributions: - * Copyright (c) 2014-2017, the respective contributors - * All rights reserved. - * - * Caffe uses a shared copyright model: each contributor holds copyright over - * their contributions to Caffe. The project versioning records all such - * contribution and copyright details. If a contributor wants to further mark - * their specific copyright on a particular contribution, they should indicate - * their copyright solely in the commit message of the change when it is - * committed. - * - * LICENSE - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright notice, this - * list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright notice, - * this list of conditions and the following disclaimer in the documentation - * and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR - * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * CONTRIBUTION AGREEMENT - * - * By contributing to the BVLC/caffe repository through pull-request, comment, - * or otherwise, the contributor releases their content to the - * license and copyright terms herein. 
- * - ***************** END Caffe Copyright Notice and Disclaimer ******************** - * - * Copyright (c) 2018 Microsoft - * Licensed under The MIT License [see LICENSE for details] - * \file modulated_deformable_im2col.h - * \brief Function definitions of converting an image to - * column matrix based on kernel, padding, dilation, and offset. - * These functions are mainly used in deformable convolution operators. - * \ref: https://arxiv.org/abs/1811.11168 - * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu - */ - -/***************** Adapted by Charles Shang *********************/ - -#ifndef DCN_V2_IM2COL_CUDA -#define DCN_V2_IM2COL_CUDA - -#ifdef __cplusplus -extern "C" -{ -#endif - - void modulated_deformable_im2col_cuda(cudaStream_t stream, - const float *data_im, const float *data_offset, const float *data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, float *data_col); - - void modulated_deformable_col2im_cuda(cudaStream_t stream, - const float *data_col, const float *data_offset, const float *data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, float *grad_im); - - void modulated_deformable_col2im_coord_cuda(cudaStream_t stream, - const float *data_col, const float *data_im, const float *data_offset, const float *data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, - float *grad_offset, float *grad_mask); - -#ifdef __cplusplus -} -#endif - -#endif \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attrs/converters.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attrs/converters.py deleted file mode 100644 index edfa8d3c16ac8642773651778012a3cd57005d9b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attrs/converters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.converters import * # noqa diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-c9e1499d.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-c9e1499d.js deleted file mode 100644 index e2af03999eded2b8320219a90b5bc47315833609..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-c9e1499d.js +++ /dev/null @@ -1,50 +0,0 @@ -const VERSION_RE = new RegExp("3.37.0/", "g");function import_fix(mod, base) {const url = new URL(mod, base); return import(`https://gradio.s3-us-west-2.amazonaws.com/3.37.0/${url.pathname?.startsWith('/') ? 
url.pathname.substring(1).replace(VERSION_RE, "") : url.pathname.replace(VERSION_RE, "")}`);}import{n as $,i as $o,a as Ko,l as el,c as tl,d as nl,g as rl,w as vt,b as Le,_ as F,S as ue,e as ce,s as fe,f as Bt,h as De,j as Ht,k as W,m as de,o as Z,p as y,q as il,r as ol,t as ll,u as le,v as N,x as Y,y as ae,z as B,A as E,B as $e,C as Et,D as al,E as sl,F as ke,G as oe,H as Rn,I as ul,J as be,K as d,L as ge,M as m,N as A,O as x,P as I,Q as Se,R as q,T as Ue,U as Ye,V as Ie,W as cl,X as fl,Y as Lt,Z as _l,$ as hl,a0 as pl,a1 as dl,a2 as ml,a3 as gl,a4 as bl,a5 as vl,a6 as El,a7 as yl}from"./index-1d65707a.js";import{B as yt,a as Sl,c as wl,f as jt}from"./Button-f155035a.js";function Tl(e,t,n,r){if(!t)return $;const i=e.getBoundingClientRect();if(t.left===i.left&&t.right===i.right&&t.top===i.top&&t.bottom===i.bottom)return $;const{delay:o=0,duration:a=300,easing:l=$o,start:u=Ko()+o,end:s=u+a,tick:c=$,css:h}=n(e,{from:t,to:i},r);let _=!0,p=!1,v;function b(){h&&(v=tl(e,0,1,a,o,l,h)),o||(p=!0)}function g(){h&&nl(e,v),_=!1}return el(S=>{if(!p&&S>=u&&(p=!0),p&&S>=s&&(c(1,0),g()),!_)return!1;if(p){const k=S-u,T=0+1*l(k/a);c(T,1-T)}return!0}),b(),c(0,1),g}function Il(e){const t=getComputedStyle(e);if(t.position!=="absolute"&&t.position!=="fixed"){const{width:n,height:r}=t,i=e.getBoundingClientRect();e.style.position="absolute",e.style.width=n,e.style.height=r,kl(e,i)}}function kl(e,t){const n=e.getBoundingClientRect();if(t.left!==n.left||t.top!==n.top){const r=getComputedStyle(e),i=r.transform==="none"?"":r.transform;e.style.transform=`${i} translate(${t.left-n.left}px, ${t.top-n.top}px)`}}var Al=function(t){return Cl(t)&&!Pl(t)};function Cl(e){return!!e&&typeof e=="object"}function Pl(e){var t=Object.prototype.toString.call(e);return t==="[object RegExp]"||t==="[object Date]"||Hl(e)}var Ol=typeof Symbol=="function"&&Symbol.for,Bl=Ol?Symbol.for("react.element"):60103;function Hl(e){return e.$$typeof===Bl}function Ll(e){return Array.isArray(e)?[]:{}}function Fe(e,t){return t.clone!==!1&&t.isMergeableObject(e)?Pe(Ll(e),e,t):e}function jl(e,t,n){return e.concat(t).map(function(r){return Fe(r,n)})}function Nl(e,t){if(!t.customMerge)return Pe;var n=t.customMerge(e);return typeof n=="function"?n:Pe}function Rl(e){return Object.getOwnPropertySymbols?Object.getOwnPropertySymbols(e).filter(function(t){return Object.propertyIsEnumerable.call(e,t)}):[]}function Nt(e){return Object.keys(e).concat(Rl(e))}function Mn(e,t){try{return t in e}catch{return!1}}function Ml(e,t){return Mn(e,t)&&!(Object.hasOwnProperty.call(e,t)&&Object.propertyIsEnumerable.call(e,t))}function xl(e,t,n){var r={};return n.isMergeableObject(e)&&Nt(e).forEach(function(i){r[i]=Fe(e[i],n)}),Nt(t).forEach(function(i){Ml(e,i)||(Mn(e,i)&&n.isMergeableObject(t[i])?r[i]=Nl(i,n)(e[i],t[i],n):r[i]=Fe(t[i],n))}),r}function Pe(e,t,n){n=n||{},n.arrayMerge=n.arrayMerge||jl,n.isMergeableObject=n.isMergeableObject||Al,n.cloneUnlessOtherwiseSpecified=Fe;var r=Array.isArray(t),i=Array.isArray(e),o=r===i;return o?r?n.arrayMerge(e,t,n):xl(e,t,n):Fe(t,n)}Pe.all=function(t,n){if(!Array.isArray(t))throw new Error("first argument should be an array");return t.reduce(function(r,i){return Pe(r,i,n)},{})};var Dl=Pe,Fl=Dl;const Gl=rl(Fl);var ft=function(e,t){return ft=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(n,r){n.__proto__=r}||function(n,r){for(var i in r)Object.prototype.hasOwnProperty.call(r,i)&&(n[i]=r[i])},ft(e,t)};function Ke(e,t){if(typeof t!="function"&&t!==null)throw new TypeError("Class extends value "+String(t)+" is not a 
constructor or null");ft(e,t);function n(){this.constructor=e}e.prototype=t===null?Object.create(t):(n.prototype=t.prototype,new n)}var X=function(){return X=Object.assign||function(t){for(var n,r=1,i=arguments.length;r0}),n=[],r=0,i=t;r1)throw new RangeError("integer-width stems only accept a single optional option");i.options[0].replace(Yl,function(u,s,c,h,_,p){if(s)t.minimumIntegerDigits=c.length;else{if(h&&_)throw new Error("We currently do not support maximum integer digits");if(p)throw new Error("We currently do not support exact integer digits")}return""});continue}if(Wn.test(i.stem)){t.minimumIntegerDigits=i.stem.length;continue}if(Mt.test(i.stem)){if(i.options.length>1)throw new RangeError("Fraction-precision stems only accept a single optional option");i.stem.replace(Mt,function(u,s,c,h,_,p){return c==="*"?t.minimumFractionDigits=s.length:h&&h[0]==="#"?t.maximumFractionDigits=h.length:_&&p?(t.minimumFractionDigits=_.length,t.maximumFractionDigits=_.length+p.length):(t.minimumFractionDigits=s.length,t.maximumFractionDigits=s.length),""});var o=i.options[0];o==="w"?t=X(X({},t),{trailingZeroDisplay:"stripIfInteger"}):o&&(t=X(X({},t),xt(o)));continue}if(Xn.test(i.stem)){t=X(X({},t),xt(i.stem));continue}var a=Zn(i.stem);a&&(t=X(X({},t),a));var l=Jl(i.stem);l&&(t=X(X({},t),l))}return t}var qe={AX:["H"],BQ:["H"],CP:["H"],CZ:["H"],DK:["H"],FI:["H"],ID:["H"],IS:["H"],ML:["H"],NE:["H"],RU:["H"],SE:["H"],SJ:["H"],SK:["H"],AS:["h","H"],BT:["h","H"],DJ:["h","H"],ER:["h","H"],GH:["h","H"],IN:["h","H"],LS:["h","H"],PG:["h","H"],PW:["h","H"],SO:["h","H"],TO:["h","H"],VU:["h","H"],WS:["h","H"],"001":["H","h"],AL:["h","H","hB"],TD:["h","H","hB"],"ca-ES":["H","h","hB"],CF:["H","h","hB"],CM:["H","h","hB"],"fr-CA":["H","h","hB"],"gl-ES":["H","h","hB"],"it-CH":["H","h","hB"],"it-IT":["H","h","hB"],LU:["H","h","hB"],NP:["H","h","hB"],PF:["H","h","hB"],SC:["H","h","hB"],SM:["H","h","hB"],SN:["H","h","hB"],TF:["H","h","hB"],VA:["H","h","hB"],CY:["h","H","hb","hB"],GR:["h","H","hb","hB"],CO:["h","H","hB","hb"],DO:["h","H","hB","hb"],KP:["h","H","hB","hb"],KR:["h","H","hB","hb"],NA:["h","H","hB","hb"],PA:["h","H","hB","hb"],PR:["h","H","hB","hb"],VE:["h","H","hB","hb"],AC:["H","h","hb","hB"],AI:["H","h","hb","hB"],BW:["H","h","hb","hB"],BZ:["H","h","hb","hB"],CC:["H","h","hb","hB"],CK:["H","h","hb","hB"],CX:["H","h","hb","hB"],DG:["H","h","hb","hB"],FK:["H","h","hb","hB"],GB:["H","h","hb","hB"],GG:["H","h","hb","hB"],GI:["H","h","hb","hB"],IE:["H","h","hb","hB"],IM:["H","h","hb","hB"],IO:["H","h","hb","hB"],JE:["H","h","hb","hB"],LT:["H","h","hb","hB"],MK:["H","h","hb","hB"],MN:["H","h","hb","hB"],MS:["H","h","hb","hB"],NF:["H","h","hb","hB"],NG:["H","h","hb","hB"],NR:["H","h","hb","hB"],NU:["H","h","hb","hB"],PN:["H","h","hb","hB"],SH:["H","h","hb","hB"],SX:["H","h","hb","hB"],TA:["H","h","hb","hB"],ZA:["H","h","hb","hB"],"af-ZA":["H","h","hB","hb"],AR:["H","h","hB","hb"],CL:["H","h","hB","hb"],CR:["H","h","hB","hb"],CU:["H","h","hB","hb"],EA:["H","h","hB","hb"],"es-BO":["H","h","hB","hb"],"es-BR":["H","h","hB","hb"],"es-EC":["H","h","hB","hb"],"es-ES":["H","h","hB","hb"],"es-GQ":["H","h","hB","hb"],"es-PE":["H","h","hB","hb"],GT:["H","h","hB","hb"],HN:["H","h","hB","hb"],IC:["H","h","hB","hb"],KG:["H","h","hB","hb"],KM:["H","h","hB","hb"],LK:["H","h","hB","hb"],MA:["H","h","hB","hb"],MX:["H","h","hB","hb"],NI:["H","h","hB","hb"],PY:["H","h","hB","hb"],SV:["H","h","hB","hb"],UY:["H","h","hB","hb"],JP:["H","h","K"],AD:["H","hB"],AM:["H","hB"],AO:["H","hB"],AT:["H","hB"],AW:["H","hB"],BE:["H","hB"],BF:["H","
hB"],BJ:["H","hB"],BL:["H","hB"],BR:["H","hB"],CG:["H","hB"],CI:["H","hB"],CV:["H","hB"],DE:["H","hB"],EE:["H","hB"],FR:["H","hB"],GA:["H","hB"],GF:["H","hB"],GN:["H","hB"],GP:["H","hB"],GW:["H","hB"],HR:["H","hB"],IL:["H","hB"],IT:["H","hB"],KZ:["H","hB"],MC:["H","hB"],MD:["H","hB"],MF:["H","hB"],MQ:["H","hB"],MZ:["H","hB"],NC:["H","hB"],NL:["H","hB"],PM:["H","hB"],PT:["H","hB"],RE:["H","hB"],RO:["H","hB"],SI:["H","hB"],SR:["H","hB"],ST:["H","hB"],TG:["H","hB"],TR:["H","hB"],WF:["H","hB"],YT:["H","hB"],BD:["h","hB","H"],PK:["h","hB","H"],AZ:["H","hB","h"],BA:["H","hB","h"],BG:["H","hB","h"],CH:["H","hB","h"],GE:["H","hB","h"],LI:["H","hB","h"],ME:["H","hB","h"],RS:["H","hB","h"],UA:["H","hB","h"],UZ:["H","hB","h"],XK:["H","hB","h"],AG:["h","hb","H","hB"],AU:["h","hb","H","hB"],BB:["h","hb","H","hB"],BM:["h","hb","H","hB"],BS:["h","hb","H","hB"],CA:["h","hb","H","hB"],DM:["h","hb","H","hB"],"en-001":["h","hb","H","hB"],FJ:["h","hb","H","hB"],FM:["h","hb","H","hB"],GD:["h","hb","H","hB"],GM:["h","hb","H","hB"],GU:["h","hb","H","hB"],GY:["h","hb","H","hB"],JM:["h","hb","H","hB"],KI:["h","hb","H","hB"],KN:["h","hb","H","hB"],KY:["h","hb","H","hB"],LC:["h","hb","H","hB"],LR:["h","hb","H","hB"],MH:["h","hb","H","hB"],MP:["h","hb","H","hB"],MW:["h","hb","H","hB"],NZ:["h","hb","H","hB"],SB:["h","hb","H","hB"],SG:["h","hb","H","hB"],SL:["h","hb","H","hB"],SS:["h","hb","H","hB"],SZ:["h","hb","H","hB"],TC:["h","hb","H","hB"],TT:["h","hb","H","hB"],UM:["h","hb","H","hB"],US:["h","hb","H","hB"],VC:["h","hb","H","hB"],VG:["h","hb","H","hB"],VI:["h","hb","H","hB"],ZM:["h","hb","H","hB"],BO:["H","hB","h","hb"],EC:["H","hB","h","hb"],ES:["H","hB","h","hb"],GQ:["H","hB","h","hb"],PE:["H","hB","h","hb"],AE:["h","hB","hb","H"],"ar-001":["h","hB","hb","H"],BH:["h","hB","hb","H"],DZ:["h","hB","hb","H"],EG:["h","hB","hb","H"],EH:["h","hB","hb","H"],HK:["h","hB","hb","H"],IQ:["h","hB","hb","H"],JO:["h","hB","hb","H"],KW:["h","hB","hb","H"],LB:["h","hB","hb","H"],LY:["h","hB","hb","H"],MO:["h","hB","hb","H"],MR:["h","hB","hb","H"],OM:["h","hB","hb","H"],PH:["h","hB","hb","H"],PS:["h","hB","hb","H"],QA:["h","hB","hb","H"],SA:["h","hB","hb","H"],SD:["h","hB","hb","H"],SY:["h","hB","hb","H"],TN:["h","hB","hb","H"],YE:["h","hB","hb","H"],AF:["H","hb","hB","h"],LA:["H","hb","hB","h"],CN:["H","hB","hb","h"],LV:["H","hB","hb","h"],TL:["H","hB","hb","h"],"zu-ZA":["H","hB","hb","h"],CD:["hB","H"],IR:["hB","H"],"hi-IN":["hB","h","H"],"kn-IN":["hB","h","H"],"ml-IN":["hB","h","H"],"te-IN":["hB","h","H"],KH:["hB","h","H","hb"],"ta-IN":["hB","h","hb","H"],BN:["hb","hB","h","H"],MY:["hb","hB","h","H"],ET:["hB","hb","h","H"],"gu-IN":["hB","hb","h","H"],"mr-IN":["hB","hb","h","H"],"pa-IN":["hB","hb","h","H"],TW:["hB","hb","h","H"],KE:["hB","hb","H","h"],MM:["hB","hb","H","h"],TZ:["hB","hb","H","h"],UG:["hB","hb","H","h"]};function $l(e,t){for(var n="",r=0;r>1),u="a",s=Kl(t);for((s=="H"||s=="k")&&(l=0);l-- >0;)n+=u;for(;a-- >0;)n=s+n}else i==="J"?n+="H":n+=i}return n}function Kl(e){var t=e.hourCycle;if(t===void 0&&e.hourCycles&&e.hourCycles.length&&(t=e.hourCycles[0]),t)switch(t){case"h24":return"k";case"h23":return"H";case"h12":return"h";case"h11":return"K";default:throw new Error("Invalid hourCycle")}var n=e.language,r;n!=="root"&&(r=e.maximize().region);var i=qe[r||""]||qe[n||""]||qe["".concat(n,"-001")]||qe["001"];return i[0]}var lt,ea=new RegExp("^".concat(qn.source,"*")),ta=new RegExp("".concat(qn.source,"*$"));function z(e,t){return{start:e,end:t}}var 
na=!!String.prototype.startsWith,ra=!!String.fromCodePoint,ia=!!Object.fromEntries,oa=!!String.prototype.codePointAt,la=!!String.prototype.trimStart,aa=!!String.prototype.trimEnd,sa=!!Number.isSafeInteger,ua=sa?Number.isSafeInteger:function(e){return typeof e=="number"&&isFinite(e)&&Math.floor(e)===e&&Math.abs(e)<=9007199254740991},ht=!0;try{var ca=Jn("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");ht=((lt=ca.exec("a"))===null||lt===void 0?void 0:lt[0])==="a"}catch{ht=!1}var Ft=na?function(t,n,r){return t.startsWith(n,r)}:function(t,n,r){return t.slice(r,r+n.length)===n},pt=ra?String.fromCodePoint:function(){for(var t=[],n=0;no;){if(a=t[o++],a>1114111)throw RangeError(a+" is not a valid code point");r+=a<65536?String.fromCharCode(a):String.fromCharCode(((a-=65536)>>10)+55296,a%1024+56320)}return r},Gt=ia?Object.fromEntries:function(t){for(var n={},r=0,i=t;r=r)){var i=t.charCodeAt(n),o;return i<55296||i>56319||n+1===r||(o=t.charCodeAt(n+1))<56320||o>57343?i:(i-55296<<10)+(o-56320)+65536}},fa=la?function(t){return t.trimStart()}:function(t){return t.replace(ea,"")},_a=aa?function(t){return t.trimEnd()}:function(t){return t.replace(ta,"")};function Jn(e,t){return new RegExp(e,t)}var dt;if(ht){var Ut=Jn("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");dt=function(t,n){var r;Ut.lastIndex=n;var i=Ut.exec(t);return(r=i[1])!==null&&r!==void 0?r:""}}else dt=function(t,n){for(var r=[];;){var i=Yn(t,n);if(i===void 0||Qn(i)||ma(i))break;r.push(i),n+=i>=65536?2:1}return pt.apply(void 0,r)};var ha=function(){function e(t,n){n===void 0&&(n={}),this.message=t,this.position={offset:0,line:1,column:1},this.ignoreTag=!!n.ignoreTag,this.locale=n.locale,this.requiresOtherClause=!!n.requiresOtherClause,this.shouldParseSkeletons=!!n.shouldParseSkeletons}return e.prototype.parse=function(){if(this.offset()!==0)throw Error("parser can only be used once");return this.parseMessage(0,"",!1)},e.prototype.parseMessage=function(t,n,r){for(var i=[];!this.isEOF();){var o=this.char();if(o===123){var a=this.parseArgument(t,r);if(a.err)return a;i.push(a.val)}else{if(o===125&&t>0)break;if(o===35&&(n==="plural"||n==="selectordinal")){var l=this.clonePosition();this.bump(),i.push({type:ee.pound,location:z(l,this.clonePosition())})}else if(o===60&&!this.ignoreTag&&this.peek()===47){if(r)break;return this.error(V.UNMATCHED_CLOSING_TAG,z(this.clonePosition(),this.clonePosition()))}else if(o===60&&!this.ignoreTag&&mt(this.peek()||0)){var a=this.parseTag(t,n);if(a.err)return a;i.push(a.val)}else{var a=this.parseLiteral(t,n);if(a.err)return a;i.push(a.val)}}}return{val:i,err:null}},e.prototype.parseTag=function(t,n){var r=this.clonePosition();this.bump();var i=this.parseTagName();if(this.bumpSpace(),this.bumpIf("/>"))return{val:{type:ee.literal,value:"<".concat(i,"/>"),location:z(r,this.clonePosition())},err:null};if(this.bumpIf(">")){var o=this.parseMessage(t+1,n,!0);if(o.err)return o;var a=o.val,l=this.clonePosition();if(this.bumpIf("")?{val:{type:ee.tag,value:i,children:a,location:z(r,this.clonePosition())},err:null}:this.error(V.INVALID_TAG,z(l,this.clonePosition())))}else return this.error(V.UNCLOSED_TAG,z(r,this.clonePosition()))}else return this.error(V.INVALID_TAG,z(r,this.clonePosition()))},e.prototype.parseTagName=function(){var t=this.offset();for(this.bump();!this.isEOF()&&da(this.char());)this.bump();return this.message.slice(t,this.offset())},e.prototype.parseLiteral=function(t,n){for(var r=this.clonePosition(),i="";;){var o=this.tryParseQuote(n);if(o){i+=o;continue}var 
a=this.tryParseUnquoted(t,n);if(a){i+=a;continue}var l=this.tryParseLeftAngleBracket();if(l){i+=l;continue}break}var u=z(r,this.clonePosition());return{val:{type:ee.literal,value:i,location:u},err:null}},e.prototype.tryParseLeftAngleBracket=function(){return!this.isEOF()&&this.char()===60&&(this.ignoreTag||!pa(this.peek()||0))?(this.bump(),"<"):null},e.prototype.tryParseQuote=function(t){if(this.isEOF()||this.char()!==39)return null;switch(this.peek()){case 39:return this.bump(),this.bump(),"'";case 123:case 60:case 62:case 125:break;case 35:if(t==="plural"||t==="selectordinal")break;return null;default:return null}this.bump();var n=[this.char()];for(this.bump();!this.isEOF();){var r=this.char();if(r===39)if(this.peek()===39)n.push(39),this.bump();else{this.bump();break}else n.push(r);this.bump()}return pt.apply(void 0,n)},e.prototype.tryParseUnquoted=function(t,n){if(this.isEOF())return null;var r=this.char();return r===60||r===123||r===35&&(n==="plural"||n==="selectordinal")||r===125&&t>0?null:(this.bump(),pt(r))},e.prototype.parseArgument=function(t,n){var r=this.clonePosition();if(this.bump(),this.bumpSpace(),this.isEOF())return this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition()));if(this.char()===125)return this.bump(),this.error(V.EMPTY_ARGUMENT,z(r,this.clonePosition()));var i=this.parseIdentifierIfPossible().value;if(!i)return this.error(V.MALFORMED_ARGUMENT,z(r,this.clonePosition()));if(this.bumpSpace(),this.isEOF())return this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition()));switch(this.char()){case 125:return this.bump(),{val:{type:ee.argument,value:i,location:z(r,this.clonePosition())},err:null};case 44:return this.bump(),this.bumpSpace(),this.isEOF()?this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition())):this.parseArgumentOptions(t,n,i,r);default:return this.error(V.MALFORMED_ARGUMENT,z(r,this.clonePosition()))}},e.prototype.parseIdentifierIfPossible=function(){var t=this.clonePosition(),n=this.offset(),r=dt(this.message,n),i=n+r.length;this.bumpTo(i);var o=this.clonePosition(),a=z(t,o);return{value:r,location:a}},e.prototype.parseArgumentOptions=function(t,n,r,i){var o,a=this.clonePosition(),l=this.parseIdentifierIfPossible().value,u=this.clonePosition();switch(l){case"":return this.error(V.EXPECT_ARGUMENT_TYPE,z(a,u));case"number":case"date":case"time":{this.bumpSpace();var s=null;if(this.bumpIf(",")){this.bumpSpace();var c=this.clonePosition(),h=this.parseSimpleArgStyleIfPossible();if(h.err)return h;var _=_a(h.val);if(_.length===0)return this.error(V.EXPECT_ARGUMENT_STYLE,z(this.clonePosition(),this.clonePosition()));var p=z(c,this.clonePosition());s={style:_,styleLocation:p}}var v=this.tryParseArgumentClose(i);if(v.err)return v;var b=z(i,this.clonePosition());if(s&&Ft(s?.style,"::",0)){var g=fa(s.style.slice(2));if(l==="number"){var h=this.parseNumberSkeletonFromString(g,s.styleLocation);return h.err?h:{val:{type:ee.number,value:r,location:b,style:h.val},err:null}}else{if(g.length===0)return this.error(V.EXPECT_DATE_TIME_SKELETON,b);var S=g;this.locale&&(S=$l(g,this.locale));var _={type:Oe.dateTime,pattern:S,location:s.styleLocation,parsedOptions:this.shouldParseSkeletons?ql(S):{}},k=l==="date"?ee.date:ee.time;return{val:{type:k,value:r,location:b,style:_},err:null}}}return{val:{type:l==="number"?ee.number:l==="date"?ee.date:ee.time,value:r,location:b,style:(o=s?.style)!==null&&o!==void 0?o:null},err:null}}case"plural":case"selectordinal":case"select":{var T=this.clonePosition();if(this.bumpSpace(),!this.bumpIf(","))return 
this.error(V.EXPECT_SELECT_ARGUMENT_OPTIONS,z(T,X({},T)));this.bumpSpace();var f=this.parseIdentifierIfPossible(),P=0;if(l!=="select"&&f.value==="offset"){if(!this.bumpIf(":"))return this.error(V.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,z(this.clonePosition(),this.clonePosition()));this.bumpSpace();var h=this.tryParseDecimalInteger(V.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,V.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE);if(h.err)return h;this.bumpSpace(),f=this.parseIdentifierIfPossible(),P=h.val}var H=this.tryParsePluralOrSelectOptions(t,l,n,f);if(H.err)return H;var v=this.tryParseArgumentClose(i);if(v.err)return v;var L=z(i,this.clonePosition());return l==="select"?{val:{type:ee.select,value:r,options:Gt(H.val),location:L},err:null}:{val:{type:ee.plural,value:r,options:Gt(H.val),offset:P,pluralType:l==="plural"?"cardinal":"ordinal",location:L},err:null}}default:return this.error(V.INVALID_ARGUMENT_TYPE,z(a,u))}},e.prototype.tryParseArgumentClose=function(t){return this.isEOF()||this.char()!==125?this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(t,this.clonePosition())):(this.bump(),{val:!0,err:null})},e.prototype.parseSimpleArgStyleIfPossible=function(){for(var t=0,n=this.clonePosition();!this.isEOF();){var r=this.char();switch(r){case 39:{this.bump();var i=this.clonePosition();if(!this.bumpUntil("'"))return this.error(V.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE,z(i,this.clonePosition()));this.bump();break}case 123:{t+=1,this.bump();break}case 125:{if(t>0)t-=1;else return{val:this.message.slice(n.offset,this.offset()),err:null};break}default:this.bump();break}}return{val:this.message.slice(n.offset,this.offset()),err:null}},e.prototype.parseNumberSkeletonFromString=function(t,n){var r=[];try{r=Wl(t)}catch{return this.error(V.INVALID_NUMBER_SKELETON,n)}return{val:{type:Oe.number,tokens:r,location:n,parsedOptions:this.shouldParseSkeletons?Ql(r):{}},err:null}},e.prototype.tryParsePluralOrSelectOptions=function(t,n,r,i){for(var o,a=!1,l=[],u=new Set,s=i.value,c=i.location;;){if(s.length===0){var h=this.clonePosition();if(n!=="select"&&this.bumpIf("=")){var _=this.tryParseDecimalInteger(V.EXPECT_PLURAL_ARGUMENT_SELECTOR,V.INVALID_PLURAL_ARGUMENT_SELECTOR);if(_.err)return _;c=z(h,this.clonePosition()),s=this.message.slice(h.offset,this.offset())}else break}if(u.has(s))return this.error(n==="select"?V.DUPLICATE_SELECT_ARGUMENT_SELECTOR:V.DUPLICATE_PLURAL_ARGUMENT_SELECTOR,c);s==="other"&&(a=!0),this.bumpSpace();var p=this.clonePosition();if(!this.bumpIf("{"))return this.error(n==="select"?V.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT:V.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT,z(this.clonePosition(),this.clonePosition()));var v=this.parseMessage(t+1,n,r);if(v.err)return v;var b=this.tryParseArgumentClose(p);if(b.err)return b;l.push([s,{value:v.val,location:z(p,this.clonePosition())}]),u.add(s),this.bumpSpace(),o=this.parseIdentifierIfPossible(),s=o.value,c=o.location}return l.length===0?this.error(n==="select"?V.EXPECT_SELECT_ARGUMENT_SELECTOR:V.EXPECT_PLURAL_ARGUMENT_SELECTOR,z(this.clonePosition(),this.clonePosition())):this.requiresOtherClause&&!a?this.error(V.MISSING_OTHER_CLAUSE,z(this.clonePosition(),this.clonePosition())):{val:l,err:null}},e.prototype.tryParseDecimalInteger=function(t,n){var r=1,i=this.clonePosition();this.bumpIf("+")||this.bumpIf("-")&&(r=-1);for(var o=!1,a=0;!this.isEOF();){var l=this.char();if(l>=48&&l<=57)o=!0,a=a*10+(l-48),this.bump();else break}var u=z(i,this.clonePosition());return o?(a*=r,ua(a)?{val:a,err:null}:this.error(n,u)):this.error(t,u)},e.prototype.offset=function(){return 
this.position.offset},e.prototype.isEOF=function(){return this.offset()===this.message.length},e.prototype.clonePosition=function(){return{offset:this.position.offset,line:this.position.line,column:this.position.column}},e.prototype.char=function(){var t=this.position.offset;if(t>=this.message.length)throw Error("out of bound");var n=Yn(this.message,t);if(n===void 0)throw Error("Offset ".concat(t," is at invalid UTF-16 code unit boundary"));return n},e.prototype.error=function(t,n){return{val:null,err:{kind:t,message:this.message,location:n}}},e.prototype.bump=function(){if(!this.isEOF()){var t=this.char();t===10?(this.position.line+=1,this.position.column=1,this.position.offset+=1):(this.position.column+=1,this.position.offset+=t<65536?1:2)}},e.prototype.bumpIf=function(t){if(Ft(this.message,t,this.offset())){for(var n=0;n=0?(this.bumpTo(r),!0):(this.bumpTo(this.message.length),!1)},e.prototype.bumpTo=function(t){if(this.offset()>t)throw Error("targetOffset ".concat(t," must be greater than or equal to the current offset ").concat(this.offset()));for(t=Math.min(t,this.message.length);;){var n=this.offset();if(n===t)break;if(n>t)throw Error("targetOffset ".concat(t," is at invalid UTF-16 code unit boundary"));if(this.bump(),this.isEOF())break}},e.prototype.bumpSpace=function(){for(;!this.isEOF()&&Qn(this.char());)this.bump()},e.prototype.peek=function(){if(this.isEOF())return null;var t=this.char(),n=this.offset(),r=this.message.charCodeAt(n+(t>=65536?2:1));return r??null},e}();function mt(e){return e>=97&&e<=122||e>=65&&e<=90}function pa(e){return mt(e)||e===47}function da(e){return e===45||e===46||e>=48&&e<=57||e===95||e>=97&&e<=122||e>=65&&e<=90||e==183||e>=192&&e<=214||e>=216&&e<=246||e>=248&&e<=893||e>=895&&e<=8191||e>=8204&&e<=8205||e>=8255&&e<=8256||e>=8304&&e<=8591||e>=11264&&e<=12271||e>=12289&&e<=55295||e>=63744&&e<=64975||e>=65008&&e<=65533||e>=65536&&e<=983039}function Qn(e){return e>=9&&e<=13||e===32||e===133||e>=8206&&e<=8207||e===8232||e===8233}function ma(e){return 
e>=33&&e<=35||e===36||e>=37&&e<=39||e===40||e===41||e===42||e===43||e===44||e===45||e>=46&&e<=47||e>=58&&e<=59||e>=60&&e<=62||e>=63&&e<=64||e===91||e===92||e===93||e===94||e===96||e===123||e===124||e===125||e===126||e===161||e>=162&&e<=165||e===166||e===167||e===169||e===171||e===172||e===174||e===176||e===177||e===182||e===187||e===191||e===215||e===247||e>=8208&&e<=8213||e>=8214&&e<=8215||e===8216||e===8217||e===8218||e>=8219&&e<=8220||e===8221||e===8222||e===8223||e>=8224&&e<=8231||e>=8240&&e<=8248||e===8249||e===8250||e>=8251&&e<=8254||e>=8257&&e<=8259||e===8260||e===8261||e===8262||e>=8263&&e<=8273||e===8274||e===8275||e>=8277&&e<=8286||e>=8592&&e<=8596||e>=8597&&e<=8601||e>=8602&&e<=8603||e>=8604&&e<=8607||e===8608||e>=8609&&e<=8610||e===8611||e>=8612&&e<=8613||e===8614||e>=8615&&e<=8621||e===8622||e>=8623&&e<=8653||e>=8654&&e<=8655||e>=8656&&e<=8657||e===8658||e===8659||e===8660||e>=8661&&e<=8691||e>=8692&&e<=8959||e>=8960&&e<=8967||e===8968||e===8969||e===8970||e===8971||e>=8972&&e<=8991||e>=8992&&e<=8993||e>=8994&&e<=9e3||e===9001||e===9002||e>=9003&&e<=9083||e===9084||e>=9085&&e<=9114||e>=9115&&e<=9139||e>=9140&&e<=9179||e>=9180&&e<=9185||e>=9186&&e<=9254||e>=9255&&e<=9279||e>=9280&&e<=9290||e>=9291&&e<=9311||e>=9472&&e<=9654||e===9655||e>=9656&&e<=9664||e===9665||e>=9666&&e<=9719||e>=9720&&e<=9727||e>=9728&&e<=9838||e===9839||e>=9840&&e<=10087||e===10088||e===10089||e===10090||e===10091||e===10092||e===10093||e===10094||e===10095||e===10096||e===10097||e===10098||e===10099||e===10100||e===10101||e>=10132&&e<=10175||e>=10176&&e<=10180||e===10181||e===10182||e>=10183&&e<=10213||e===10214||e===10215||e===10216||e===10217||e===10218||e===10219||e===10220||e===10221||e===10222||e===10223||e>=10224&&e<=10239||e>=10240&&e<=10495||e>=10496&&e<=10626||e===10627||e===10628||e===10629||e===10630||e===10631||e===10632||e===10633||e===10634||e===10635||e===10636||e===10637||e===10638||e===10639||e===10640||e===10641||e===10642||e===10643||e===10644||e===10645||e===10646||e===10647||e===10648||e>=10649&&e<=10711||e===10712||e===10713||e===10714||e===10715||e>=10716&&e<=10747||e===10748||e===10749||e>=10750&&e<=11007||e>=11008&&e<=11055||e>=11056&&e<=11076||e>=11077&&e<=11078||e>=11079&&e<=11084||e>=11085&&e<=11123||e>=11124&&e<=11125||e>=11126&&e<=11157||e===11158||e>=11159&&e<=11263||e>=11776&&e<=11777||e===11778||e===11779||e===11780||e===11781||e>=11782&&e<=11784||e===11785||e===11786||e===11787||e===11788||e===11789||e>=11790&&e<=11798||e===11799||e>=11800&&e<=11801||e===11802||e===11803||e===11804||e===11805||e>=11806&&e<=11807||e===11808||e===11809||e===11810||e===11811||e===11812||e===11813||e===11814||e===11815||e===11816||e===11817||e>=11818&&e<=11822||e===11823||e>=11824&&e<=11833||e>=11834&&e<=11835||e>=11836&&e<=11839||e===11840||e===11841||e===11842||e>=11843&&e<=11855||e>=11856&&e<=11857||e===11858||e>=11859&&e<=11903||e>=12289&&e<=12291||e===12296||e===12297||e===12298||e===12299||e===12300||e===12301||e===12302||e===12303||e===12304||e===12305||e>=12306&&e<=12307||e===12308||e===12309||e===12310||e===12311||e===12312||e===12313||e===12314||e===12315||e===12316||e===12317||e>=12318&&e<=12319||e===12320||e===12336||e===64830||e===64831||e>=65093&&e<=65094}function gt(e){e.forEach(function(t){if(delete t.location,Gn(t)||Un(t))for(var n in t.options)delete t.options[n].location,gt(t.options[n].value);else xn(t)&&zn(t.style)||(Dn(t)||Fn(t))&&_t(t.style)?delete t.style.location:Vn(t)&>(t.children)})}function ga(e,t){t===void 
0&&(t={}),t=X({shouldParseSkeletons:!0,requiresOtherClause:!0},t);var n=new ha(e,t).parse();if(n.err){var r=SyntaxError(V[n.err.kind]);throw r.location=n.err.location,r.originalMessage=n.err.message,r}return t?.captureLocation||gt(n.val),n.val}function at(e,t){var n=t&&t.cache?t.cache:wa,r=t&&t.serializer?t.serializer:Sa,i=t&&t.strategy?t.strategy:va;return i(e,{cache:n,serializer:r})}function ba(e){return e==null||typeof e=="number"||typeof e=="boolean"}function $n(e,t,n,r){var i=ba(r)?r:n(r),o=t.get(i);return typeof o>"u"&&(o=e.call(this,r),t.set(i,o)),o}function Kn(e,t,n){var r=Array.prototype.slice.call(arguments,3),i=n(r),o=t.get(i);return typeof o>"u"&&(o=e.apply(this,r),t.set(i,o)),o}function St(e,t,n,r,i){return n.bind(t,e,r,i)}function va(e,t){var n=e.length===1?$n:Kn;return St(e,this,n,t.cache.create(),t.serializer)}function Ea(e,t){return St(e,this,Kn,t.cache.create(),t.serializer)}function ya(e,t){return St(e,this,$n,t.cache.create(),t.serializer)}var Sa=function(){return JSON.stringify(arguments)};function wt(){this.cache=Object.create(null)}wt.prototype.get=function(e){return this.cache[e]};wt.prototype.set=function(e,t){this.cache[e]=t};var wa={create:function(){return new wt}},st={variadic:Ea,monadic:ya},Be;(function(e){e.MISSING_VALUE="MISSING_VALUE",e.INVALID_VALUE="INVALID_VALUE",e.MISSING_INTL_API="MISSING_INTL_API"})(Be||(Be={}));var et=function(e){Ke(t,e);function t(n,r,i){var o=e.call(this,n)||this;return o.code=r,o.originalMessage=i,o}return t.prototype.toString=function(){return"[formatjs Error: ".concat(this.code,"] ").concat(this.message)},t}(Error),Vt=function(e){Ke(t,e);function t(n,r,i,o){return e.call(this,'Invalid values for "'.concat(n,'": "').concat(r,'". Options are "').concat(Object.keys(i).join('", "'),'"'),Be.INVALID_VALUE,o)||this}return t}(et),Ta=function(e){Ke(t,e);function t(n,r,i){return e.call(this,'Value for "'.concat(n,'" must be of type ').concat(r),Be.INVALID_VALUE,i)||this}return t}(et),Ia=function(e){Ke(t,e);function t(n,r){return e.call(this,'The intl string context variable "'.concat(n,'" was not provided to the string "').concat(r,'"'),Be.MISSING_VALUE,r)||this}return t}(et),he;(function(e){e[e.literal=0]="literal",e[e.object=1]="object"})(he||(he={}));function ka(e){return e.length<2?e:e.reduce(function(t,n){var r=t[t.length-1];return!r||r.type!==he.literal||n.type!==he.literal?t.push(n):r.value+=n.value,t},[])}function Aa(e){return typeof e=="function"}function Xe(e,t,n,r,i,o,a){if(e.length===1&&Rt(e[0]))return[{type:he.literal,value:e[0].value}];for(var l=[],u=0,s=e;u0?new Intl.Locale(n[0]):new Intl.Locale(typeof t=="string"?t:t[0])},e.__parse=ga,e.formats={number:{integer:{maximumFractionDigits:0},currency:{style:"currency"},percent:{style:"percent"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}},e}();const ye={},Ha=(e,t,n)=>n&&(t in ye||(ye[t]={}),e in ye[t]||(ye[t][e]=n),n),er=(e,t)=>{if(t==null)return;if(t in ye&&e in ye[t])return ye[t][e];const n=ze(t);for(let r=0;r0){const u=o.slice(l,o.length).join(".");if(u in a){a=a[u];break}}a=a[o[l]]}else a=void 0;return a}(n,t)}function nr(e,...t){delete 
ye[e],Ve.update(n=>(n[e]=Gl.all([n[e]||{},...t]),n))}Le([Ve],([e])=>Object.keys(e));Ve.subscribe(e=>Tt=e);const We={};function rr(e){return We[e]}function Je(e){return e!=null&&ze(e).some(t=>{var n;return(n=rr(t))===null||n===void 0?void 0:n.size})}function ja(e,t){return Promise.all(t.map(r=>(function(i,o){We[i].delete(o),We[i].size===0&&delete We[i]}(e,r),r().then(i=>i.default||i)))).then(r=>nr(e,...r))}const Re={};function ir(e){if(!Je(e))return e in Re?Re[e]:Promise.resolve();const t=function(n){return ze(n).map(r=>{const i=rr(r);return[r,i?[...i]:[]]}).filter(([,r])=>r.length>0)}(e);return Re[e]=Promise.all(t.map(([n,r])=>ja(n,r))).then(()=>{if(Je(e))return ir(e);delete Re[e]}),Re[e]}function Na({locale:e,id:t}){console.warn(`[svelte-i18n] The message "${t}" was not found in "${ze(e).join('", "')}".${Je(we())?` - -Note: there are at least one loader still registered to this locale that wasn't executed.`:""}`)}const Me={fallbackLocale:null,loadingDelay:200,formats:{number:{scientific:{notation:"scientific"},engineering:{notation:"engineering"},compactLong:{notation:"compact",compactDisplay:"long"},compactShort:{notation:"compact",compactDisplay:"short"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}},warnOnMissingMessages:!0,handleMissingMessage:void 0,ignoreTag:!0};function He(){return Me}function Ra(e){const{formats:t,...n}=e,r=e.initialLocale||e.fallbackLocale;return n.warnOnMissingMessages&&(delete n.warnOnMissingMessages,n.handleMissingMessage==null?n.handleMissingMessage=Na:console.warn('[svelte-i18n] The "warnOnMissingMessages" option is deprecated. 
Please use the "handleMissingMessage" option instead.')),Object.assign(Me,n,{initialLocale:r}),t&&("number"in t&&Object.assign(Me.formats.number,t.number),"date"in t&&Object.assign(Me.formats.date,t.date),"time"in t&&Object.assign(Me.formats.time,t.time)),je.set(r)}const ct=vt(!1);let bt;const Ze=vt(null);function zt(e){return e.split("-").map((t,n,r)=>r.slice(0,n+1).join("-")).reverse()}function ze(e,t=He().fallbackLocale){const n=zt(e);return t?[...new Set([...n,...zt(t)])]:n}function we(){return bt??void 0}Ze.subscribe(e=>{bt=e??void 0,typeof window<"u"&&e!=null&&document.documentElement.setAttribute("lang",e)});const je={...Ze,set:e=>{if(e&&function(t){if(t==null)return;const n=ze(t);for(let r=0;rct.set(!0),t):ct.set(!0),ir(e).then(()=>{Ze.set(e)}).finally(()=>{clearTimeout(n),ct.set(!1)})}return Ze.set(e)}},Ma=()=>typeof window>"u"?null:window.navigator.language||window.navigator.languages[0],tt=e=>{const t=Object.create(null);return n=>{const r=JSON.stringify(n);return r in t?t[r]:t[r]=e(n)}},Ge=(e,t)=>{const{formats:n}=He();if(e in n&&t in n[e])return n[e][t];throw new Error(`[svelte-i18n] Unknown "${t}" ${e} format.`)},xa=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format numbers');return t&&(n=Ge("number",t)),new Intl.NumberFormat(e,n)}),Da=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format dates');return t?n=Ge("date",t):Object.keys(n).length===0&&(n=Ge("date","short")),new Intl.DateTimeFormat(e,n)}),Fa=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format time values');return t?n=Ge("time",t):Object.keys(n).length===0&&(n=Ge("time","short")),new Intl.DateTimeFormat(e,n)}),Ga=({locale:e=we(),...t}={})=>xa({locale:e,...t}),Ua=({locale:e=we(),...t}={})=>Da({locale:e,...t}),Va=({locale:e=we(),...t}={})=>Fa({locale:e,...t}),za=tt((e,t=we())=>new Ba(e,t,He().formats,{ignoreTag:He().ignoreTag})),qa=(e,t={})=>{var n,r,i,o;let a=t;typeof e=="object"&&(a=e,e=a.id);const{values:l,locale:u=we(),default:s}=a;if(u==null)throw new Error("[svelte-i18n] Cannot format a message without first setting the initial locale.");let c=er(e,u);if(c){if(typeof c!="string")return console.warn(`[svelte-i18n] Message with id "${e}" must be of type "string", found: "${typeof c}". 
Gettin its value through the "$format" method is deprecated; use the "json" method instead.`),c}else c=(o=(i=(r=(n=He()).handleMissingMessage)===null||r===void 0?void 0:r.call(n,{locale:u,id:e,defaultValue:s}))!==null&&i!==void 0?i:s)!==null&&o!==void 0?o:e;if(!l)return c;let h=c;try{h=za(c,u).format(l)}catch(_){_ instanceof Error&&console.warn(`[svelte-i18n] Message "${e}" has syntax error:`,_.message)}return h},Xa=(e,t)=>Va(t).format(e),Wa=(e,t)=>Ua(t).format(e),Za=(e,t)=>Ga(t).format(e),Ya=(e,t=we())=>er(e,t),dc=Le([je,Ve],()=>qa);Le([je],()=>Xa);Le([je],()=>Wa);Le([je],()=>Za);Le([je,Ve],()=>Ya);const Ja={accordion:()=>F(()=>import("./index-e4d3547f.js"),["assets/index-e4d3547f.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Column-6c43afc7.js","assets/Column-2853eb31.css","assets/index-8f1feca1.css"]),annotatedimage:()=>F(()=>import("./index-08e5b196.js"),["assets/index-08e5b196.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/Image-0fe369ad.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-f0e43e7d.css"]),audio:()=>F(()=>import("./index-ebfc06be.js"),["assets/index-ebfc06be.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Upload-9bb55fba.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-c89cfce3.js","assets/IconButton-d42f3661.js","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/ShareButton-8cd3d8f6.js","assets/index-be790e2e.css"]),box:()=>F(()=>import("./index-aa3a045c.js"),["assets/index-aa3a045c.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css"]),button:()=>F(()=>import("./index-3cb0bda2.js"),["assets/index-3cb0bda2.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css"]),chatbot:()=>F(()=>import("./index-37e7aa9b.js"),["assets/index-37e7aa9b.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/ShareButton-8cd3d8f6.js","assets/IconButton-d42f3661.js","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-14bdc434.css"]),checkbox:()=>F(()=>import("./index-34e368b6.js"),["assets/index-34e368b6.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),checkboxgroup:()=>F(()=>import("./index-6b9ac83e.js"),["assets/index-6b9ac83e.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),code:()=>F(()=>import("./index-3ba00a4a.js").then(e=>e.F),["assets/index-3ba00a4a.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/Copy-9f1657c4.js","assets/Download-daff1959.js","assets/index-4ccfb72c.css"]),colorpicker:()
=>F(()=>import("./index-b7998330.js"),["assets/index-b7998330.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),column:()=>F(()=>import("./index-ff7efb6d.js"),["assets/index-ff7efb6d.js","assets/Column-6c43afc7.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Column-2853eb31.css"]),dataframe:()=>F(()=>import("./index-bacb8946.js"),["assets/index-bacb8946.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Upload-9bb55fba.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/dsv-576afacd.js","assets/index-9ae8fa0e.css"]),dataset:()=>F(()=>import("./index-942e8f2b.js"),["assets/index-942e8f2b.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Image-1cf93ae5.js","assets/Image-003ee87c.css","assets/csv-b0b7514a.js","assets/dsv-576afacd.js","assets/Model3D-1511e3cc.js","assets/Model3D-98fc2b2c.css","assets/index-322e8a8e.css"]),dropdown:()=>F(()=>import("./index-ecdf43f2.js"),["assets/index-ecdf43f2.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),file:()=>F(()=>import("./index-9a8f514c.js"),["assets/index-9a8f514c.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/File-b8a2be67.js","assets/Upload-9bb55fba.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-c89cfce3.js","assets/IconButton-d42f3661.js","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/index-aef3869a.css"]),form:()=>F(()=>import("./index-5a08489b.js"),["assets/index-5a08489b.js","assets/Form-cd229de0.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Form-3812b7f1.css"]),gallery:()=>F(()=>import("./index-3b0ff54c.js"),["assets/index-3b0ff54c.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/ShareButton-8cd3d8f6.js","assets/IconButton-d42f3661.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-c89cfce3.js","assets/Image-0fe369ad.js","assets/index-1e03cd90.css"]),group:()=>F(()=>import("./index-c231646e.js"),["assets/index-c231646e.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/index-37519934.css"]),highlightedtext:()=>F(()=>import("./index-28bbfef4.js"),["assets/index-28bbfef4.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/color-90ab3aab.js","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/index-928645ac.css"]),html:()=>F(()=>import("./index-3c29bea1.js"),["assets/index-3c29bea1.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/index-329f8260.css"]),image:()=>F(()=>import("./index-378cb75c.js"),["assets/index-378cb75c.js","assets/index-1d65707a.js","assets/index-f2292b12.css"
,"assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Image-0fe369ad.js","assets/StaticImage.svelte_svelte_type_style_lang-7eb5d885.js","assets/StaticImage-508005b4.css","assets/IconButton-d42f3661.js","assets/ModifyUpload-c89cfce3.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Upload-9bb55fba.js","assets/ShareButton-8cd3d8f6.js","assets/Empty-eec13822.js","assets/Download-daff1959.js","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/Image-1cf93ae5.js","assets/Image-003ee87c.css"]),interpretation:()=>F(()=>import("./index-a39e64b0.js"),["assets/index-a39e64b0.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/index-6acaa952.css"]),json:()=>F(()=>import("./index-9001a1ae.js"),["assets/index-9001a1ae.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Copy-9f1657c4.js","assets/Empty-eec13822.js","assets/BlockLabel-66866176.js","assets/index-3ca142e0.css"]),label:()=>F(()=>import("./index-1ec93f47.js"),["assets/index-1ec93f47.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/index-cc2431f4.css"]),markdown:()=>F(()=>import("./index-21b530d6.js"),["assets/index-21b530d6.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/index-edf307d2.css"]),model3d:()=>F(()=>import("./index-19bdec54.js"),["assets/index-19bdec54.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/File-b8a2be67.js","assets/IconButton-d42f3661.js","assets/Download-daff1959.js","assets/Upload-9bb55fba.js","assets/ModifyUpload-c89cfce3.js","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/Model3D-1511e3cc.js","assets/Model3D-98fc2b2c.css","assets/index-4ffdbeab.css"]),number:()=>F(()=>import("./index-0a171ecc.js"),["assets/index-0a171ecc.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),plot:()=>F(()=>import("./index-22108117.js"),["assets/index-22108117.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/color-90ab3aab.js","assets/linear-58a44b5e.js","assets/dsv-576afacd.js","assets/Empty-eec13822.js","assets/BlockLabel-66866176.js","assets/index-2908e8a9.css"]),radio:()=>F(()=>import("./index-dcd0cf9c.js"),["assets/index-dcd0cf9c.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),row:()=>F(()=>import("./index-390bcf9f.js"),["assets/index-390bcf9f.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/index-93c91554.css"]),slider:()=>F(()=>import("./index-10c5655a.js"),["assets/index-10c5655a.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets
/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/ColorPicker-5063dbc4.css"]),state:()=>F(()=>import("./index-9af10d66.js"),["assets/index-9af10d66.js","assets/index-1d65707a.js","assets/index-f2292b12.css"]),statustracker:()=>F(()=>import("./index-5925880b.js"),["assets/index-5925880b.js","assets/index-1d65707a.js","assets/index-f2292b12.css"]),tabs:()=>F(()=>import("./index-c20bfa80.js"),["assets/index-c20bfa80.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/TabItem.svelte_svelte_type_style_lang-1276453b.js","assets/TabItem-e9c69a3d.css","assets/Column-2853eb31.css"]),tabitem:()=>F(()=>import("./index-5605d000.js"),["assets/index-5605d000.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/TabItem.svelte_svelte_type_style_lang-1276453b.js","assets/TabItem-e9c69a3d.css","assets/Column-6c43afc7.js","assets/Column-2853eb31.css"]),textbox:()=>F(()=>import("./index-6a563d90.js"),["assets/index-6a563d90.js","assets/Textbox-1f11d244.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/BlockTitle-dee077e8.js","assets/Info-7c6961ef.js","assets/Copy-9f1657c4.js","assets/ColorPicker-5063dbc4.css"]),timeseries:()=>F(()=>import("./index-3610549a.js"),["assets/index-3610549a.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Upload-9bb55fba.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-c89cfce3.js","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/IconButton-d42f3661.js","assets/BlockLabel-66866176.js","assets/Empty-eec13822.js","assets/color-90ab3aab.js","assets/csv-b0b7514a.js","assets/dsv-576afacd.js","assets/linear-58a44b5e.js","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/index-9da94804.css"]),uploadbutton:()=>F(()=>import("./index-d80d0bbf.js"),["assets/index-d80d0bbf.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-03d58ab8.css"]),video:()=>F(()=>import("./index-097d3f80.js"),["assets/index-097d3f80.js","assets/index-1d65707a.js","assets/index-f2292b12.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Button-f155035a.js","assets/Button-9b719f62.css","assets/Upload-9bb55fba.js","assets/ModifyUpload-c89cfce3.js","assets/IconButton-d42f3661.js","assets/BlockLabel-66866176.js","assets/StaticImage.svelte_svelte_type_style_lang-7eb5d885.js","assets/StaticImage-508005b4.css","assets/Empty-eec13822.js","assets/ShareButton-8cd3d8f6.js","assets/Download-daff1959.js","assets/UploadText-f599be03.js","assets/UploadText-690664d1.css","assets/index-fe39713d.css"])},or="أرسل",lr="أمسح",ar="فسِّر",sr="بلِّغ",ur="أمثلة",cr="أو",Qa={interface:{drop_image:"أسقط الصورة هنا",drop_video:"أسقط الفيديو هنا",drop_audio:"أسقط الملف الصوتي هنا",drop_file:"أسقط الملف هنا",drop_csv:"أسقط ملف البيانات هنا",click_to_upload:"إضغط للتحميل",view_api:"إستخدم واجهة البرمجة",built_with_Gradio:"تم الإنشاء بإستخدام 
Gradio"},Submit:or,Clear:lr,Interpret:ar,Flag:sr,Examples:ur,or:cr},$a=Object.freeze(Object.defineProperty({__proto__:null,Clear:lr,Examples:ur,Flag:sr,Interpret:ar,Submit:or,default:Qa,or:cr},Symbol.toStringTag,{value:"Module"})),fr="Envia",_r="Neteja",hr="Interpreta",pr="Avisa",dr="Exemples",mr="o",Ka={interface:{drop_image:"Deixeu anar la imatge aquí",drop_video:"Deixeu anar el vídeo aquí",drop_audio:"Deixeu anar l'àudio aquí",drop_file:"Deixeu anar el fitxer aquí",drop_csv:"Deixeu anar el CSV aquí",click_to_upload:"Feu clic per pujar",view_api:"Veure l'API",built_with_Gradio:"Construït amb gradio",copy_to_clipboard:"Copia el json",loading:"S'està carregant",error:"ERROR",empty:"Buit"},Submit:fr,Clear:_r,Interpret:hr,Flag:pr,Examples:dr,or:mr},es=Object.freeze(Object.defineProperty({__proto__:null,Clear:_r,Examples:dr,Flag:pr,Interpret:hr,Submit:fr,default:Ka,or:mr},Symbol.toStringTag,{value:"Module"})),gr="Absenden",br="Löschen",vr="Ersteller",Er="Flag",yr="Beispiele",Sr="oder",ts={interface:{drop_image:"Bild hier ablegen",drop_video:"Video hier ablegen",drop_audio:"Audio hier ablegen",drop_file:"Datei hier ablegen",drop_csv:"CSV Datei hier ablegen",click_to_upload:"Hochladen",view_api:"API anschauen",built_with_Gradio:"Mit Gradio erstellt"},Submit:gr,Clear:br,Interpret:vr,Flag:Er,Examples:yr,or:Sr},ns=Object.freeze(Object.defineProperty({__proto__:null,Clear:br,Examples:yr,Flag:Er,Interpret:vr,Submit:gr,default:ts,or:Sr},Symbol.toStringTag,{value:"Module"})),wr="Submit",Tr="Clear",Ir="Interpret",kr="Flag",Ar="Examples",Cr="or",rs={interface:{drop_image:"Drop Image Here",drop_video:"Drop Video Here",drop_audio:"Drop Audio Here",drop_file:"Drop File Here",drop_csv:"Drop CSV Here",click_to_upload:"Click to Upload",view_api:"view the api",built_with_Gradio:"Built with gradio",copy_to_clipboard:"copy json",loading:"Loading",error:"ERROR",empty:"Empty"},Submit:wr,Clear:Tr,Interpret:Ir,Flag:kr,Examples:Ar,or:Cr},is=Object.freeze(Object.defineProperty({__proto__:null,Clear:Tr,Examples:Ar,Flag:kr,Interpret:Ir,Submit:wr,default:rs,or:Cr},Symbol.toStringTag,{value:"Module"})),Pr="Enviar",Or="Limpiar",Br="Interpretar",Hr="Avisar",Lr="Ejemplos",jr="o",os={interface:{drop_image:"Coloque la imagen aquí",drop_video:"Coloque el video aquí",drop_audio:"Coloque el audio aquí",drop_file:"Coloque el archivo aquí",drop_csv:"Coloque el CSV aquí",click_to_upload:"Haga click para cargar",view_api:"Ver la API",built_with_Gradio:"Construido con Gradio"},Submit:Pr,Clear:Or,Interpret:Br,Flag:Hr,Examples:Lr,or:jr},ls=Object.freeze(Object.defineProperty({__proto__:null,Clear:Or,Examples:Lr,Flag:Hr,Interpret:Br,Submit:Pr,default:os,or:jr},Symbol.toStringTag,{value:"Module"})),Nr="ارسال",Rr="حذف",Mr="تفسیر",xr="پرچم",Dr="مثال ها",Fr="یا",as={interface:{drop_image:"تصویر را اینجا رها کنید",drop_video:"ویدیو را اینجا رها کنید",drop_audio:"صوت را اینجا رها کنید",drop_file:"فایل را اینجا رها کنید",drop_csv:"فایل csv را اینجا رها کنید",click_to_upload:"برای آپلود کلیک کنید",view_api:"api را مشاهده کنید",built_with_Gradio:"ساخته شده با gradio"},Submit:Nr,Clear:Rr,Interpret:Mr,Flag:xr,Examples:Dr,or:Fr},ss=Object.freeze(Object.defineProperty({__proto__:null,Clear:Rr,Examples:Dr,Flag:xr,Interpret:Mr,Submit:Nr,default:as,or:Fr},Symbol.toStringTag,{value:"Module"})),Gr="Soumettre",Ur="Nettoyer",Vr="Interpréter",zr="Signaler",qr="Exemples",Xr="ou",us={interface:{drop_image:"Déposer l'Image Ici",drop_video:"Déposer la Vidéo Ici",drop_audio:"Déposer l'Audio Ici",drop_file:"Déposer le Fichier Ici",drop_csv:"Déposer le CSV 
Ici",click_to_upload:"Cliquer pour Télécharger",view_api:"Voir l'API",built_with_Gradio:"Conçu avec Gradio"},Submit:Gr,Clear:Ur,Interpret:Vr,Flag:zr,Examples:qr,or:Xr},cs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ur,Examples:qr,Flag:zr,Interpret:Vr,Submit:Gr,default:us,or:Xr},Symbol.toStringTag,{value:"Module"})),Wr="שלח",Zr="נקה",Yr="לפרש",Jr="סמן",Qr="דוגמות",$r="או",fs={interface:{drop_image:"גרור קובץ תמונה לכאן",drop_video:"גרור קובץ סרטון לכאן",drop_audio:"גרור לכאן קובץ שמע",drop_file:"גרור קובץ לכאן",drop_csv:"גרור csv קובץ לכאן",click_to_upload:"לחץ כדי להעלות",view_api:"צפה ב API",built_with_Gradio:"בנוי עם גרדיו"},Submit:Wr,Clear:Zr,Interpret:Yr,Flag:Jr,Examples:Qr,or:$r},_s=Object.freeze(Object.defineProperty({__proto__:null,Clear:Zr,Examples:Qr,Flag:Jr,Interpret:Yr,Submit:Wr,default:fs,or:$r},Symbol.toStringTag,{value:"Module"})),Kr="सबमिट करे",ei="हटाये",ti="व्याख्या करे",ni="चिह्नित करे",ri="उदाहरण",ii="या",hs={interface:{drop_image:"यहाँ इमेज ड्रॉप करें",drop_video:"यहाँ वीडियो ड्रॉप करें",drop_audio:"यहाँ ऑडियो ड्रॉप करें",drop_file:"यहाँ File ड्रॉप करें",drop_csv:"यहाँ CSV ड्रॉप करें",click_to_upload:"अपलोड के लिए बटन दबायें",view_api:"API को देखे",built_with_Gradio:"Gradio से बना"},Submit:Kr,Clear:ei,Interpret:ti,Flag:ni,Examples:ri,or:ii},ps=Object.freeze(Object.defineProperty({__proto__:null,Clear:ei,Examples:ri,Flag:ni,Interpret:ti,Submit:Kr,default:hs,or:ii},Symbol.toStringTag,{value:"Module"})),oi="送信",li="クリア",ai="解釈",si="フラグする",ui="入力例",ci="または",ds={interface:{drop_image:"ここに画像をドロップ",drop_video:"ここに動画をドロップ",drop_audio:"ここに音声をドロップ",drop_file:"ここにファイルをドロップ",drop_csv:"ここにCSVをドロップ",click_to_upload:"クリックしてアップロード",view_api:"APIを見る",built_with_Gradio:"gradioで作ろう"},Submit:oi,Clear:li,Interpret:ai,Flag:si,Examples:ui,or:ci},ms=Object.freeze(Object.defineProperty({__proto__:null,Clear:li,Examples:ui,Flag:si,Interpret:ai,Submit:oi,default:ds,or:ci},Symbol.toStringTag,{value:"Module"})),fi="제출하기",_i="클리어",hi="설명하기",pi="플래그",di="예시",mi="또는",gs={interface:{drop_image:"이미지를 끌어 놓으세요",drop_video:"비디오를 끌어 놓으세요",drop_audio:"오디오를 끌어 놓으세요",drop_file:"파일을 끌어 놓으세요",drop_csv:"CSV파일을 끌어 놓으세요",click_to_upload:"클릭해서 업로드하기",view_api:"API 보기",built_with_Gradio:"gradio로 제작되었습니다"},Submit:fi,Clear:_i,Interpret:hi,Flag:pi,Examples:di,or:mi},bs=Object.freeze(Object.defineProperty({__proto__:null,Clear:_i,Examples:di,Flag:pi,Interpret:hi,Submit:fi,default:gs,or:mi},Symbol.toStringTag,{value:"Module"})),gi="Pateikti",bi="Trinti",vi="Interpretuoti",Ei="Pažymėti",yi="Pavyzdžiai",Si="arba",vs={interface:{drop_image:"Įkelkite paveikslėlį čia",drop_video:"Įkelkite vaizdo įrašą čia",drop_audio:"Įkelkite garso įrašą čia",drop_file:"Įkelkite bylą čia",drop_csv:"Įkelkite CSV čia",click_to_upload:"Spustelėkite norėdami įkelti",view_api:"peržiūrėti api",built_with_Gradio:"sukurta su gradio"},Submit:gi,Clear:bi,Interpret:vi,Flag:Ei,Examples:yi,or:Si},Es=Object.freeze(Object.defineProperty({__proto__:null,Clear:bi,Examples:yi,Flag:Ei,Interpret:vi,Submit:gi,default:vs,or:Si},Symbol.toStringTag,{value:"Module"})),wi="Zend in",Ti="Wis",Ii="Interpreteer",ki="Vlag",Ai="Voorbeelden",Ci="of",ys={interface:{drop_image:"Sleep een Afbeelding hier",drop_video:"Sleep een Video hier",drop_audio:"Sleep een Geluidsbestand hier",drop_file:"Sleep een Document hier",drop_csv:"Sleep een CSV hier",click_to_upload:"Klik om the Uploaden",view_api:"zie de api",built_with_Gradio:"gemaakt met 
gradio"},Submit:wi,Clear:Ti,Interpret:Ii,Flag:ki,Examples:Ai,or:Ci},Ss=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ti,Examples:Ai,Flag:ki,Interpret:Ii,Submit:wi,default:ys,or:Ci},Symbol.toStringTag,{value:"Module"})),Pi="Zatwierdź",Oi="Wyczyść",Bi="Interpretuj",Hi="Oznacz",Li="Przykłady",ji="lub",ws={interface:{drop_image:"Przeciągnij tutaj zdjęcie",drop_video:"Przeciągnij tutaj video",drop_audio:"Przeciągnij tutaj audio",drop_file:"Przeciągnij tutaj plik",drop_csv:"Przeciągnij tutaj CSV",click_to_upload:"Kliknij, aby przesłać",view_api:"zobacz api",built_with_Gradio:"utworzone z gradio"},Submit:Pi,Clear:Oi,Interpret:Bi,Flag:Hi,Examples:Li,or:ji},Ts=Object.freeze(Object.defineProperty({__proto__:null,Clear:Oi,Examples:Li,Flag:Hi,Interpret:Bi,Submit:Pi,default:ws,or:ji},Symbol.toStringTag,{value:"Module"})),Ni="Enviar",Ri="Limpar",Mi="Interpretar",xi="Marcar",Di="Exemplos",Fi="ou",Is={interface:{drop_image:"Solte a Imagem Aqui",drop_video:"Solte o Vídeo Aqui",drop_audio:"Solte o Áudio Aqui",drop_file:"Solte o Arquivo Aqui",drop_csv:"Solte o CSV Aqui",click_to_upload:"Clique para o Upload",view_api:"Veja a API",built_with_Gradio:"Construído com gradio",copy_to_clipboard:"copiar para o clipboard",loading:"Carregando",error:"ERRO",empty:"Vazio"},Submit:Ni,Clear:Ri,Interpret:Mi,Flag:xi,Examples:Di,or:Fi},ks=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ri,Examples:Di,Flag:xi,Interpret:Mi,Submit:Ni,default:Is,or:Fi},Symbol.toStringTag,{value:"Module"})),Gi="Исполнить",Ui="Очистить",Vi="Интерпретировать",zi="Пометить",qi="Примеры",Xi="или",As={interface:{drop_image:"Поместите Изображение Здесь",drop_video:"Поместите Видео Здесь",drop_audio:"Поместите Аудио Здесь",drop_file:"Поместите Документ Здесь",drop_csv:"Поместите CSV Здесь",click_to_upload:"Нажмите, чтобы загрузить",view_api:"просмотр api",built_with_Gradio:"сделано с помощью gradio"},Submit:Gi,Clear:Ui,Interpret:Vi,Flag:zi,Examples:qi,or:Xi},Cs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ui,Examples:qi,Flag:zi,Interpret:Vi,Submit:Gi,default:As,or:Xi},Symbol.toStringTag,{value:"Module"})),Wi="சமர்ப்பி",Zi="அழி",Yi="உட்பொருள்",Ji="கொடியிடு",Qi="எடுத்துக்காட்டுகள்",$i="அல்லது",Ps={interface:{drop_image:"படத்தை வை",drop_video:"வீடியோவை வை",drop_audio:"ஆடியோவை வை",drop_file:"கோப்பை வை",drop_csv:"சிஎஸ்வி வை",click_to_upload:"பதிவேற்ற கிளிக் செய்",view_api:"அபியை காண்",built_with_Gradio:"க்ரேடியோ-வுடன் கட்டப்பட்டது"},Submit:Wi,Clear:Zi,Interpret:Yi,Flag:Ji,Examples:Qi,or:$i},Os=Object.freeze(Object.defineProperty({__proto__:null,Clear:Zi,Examples:Qi,Flag:Ji,Interpret:Yi,Submit:Wi,default:Ps,or:$i},Symbol.toStringTag,{value:"Module"})),Ki="Yükle",eo="Temizle",to="Yorumla",no="Etiketle",ro="örnekler",io="veya",Bs={interface:{drop_image:"Resmi Buraya Sürükle",drop_video:"Videoyu Buraya Sürükle",drop_audio:"Kaydı Buraya Sürükle",drop_file:"Dosyayı Buraya Sürükle",drop_csv:"CSV'yi Buraya Sürükle",click_to_upload:"Yüklemek için Tıkla",view_api:"api'yi görüntüle",built_with_Gradio:"Gradio ile oluşturulmuştur"},Submit:Ki,Clear:eo,Interpret:to,Flag:no,Examples:ro,or:io},Hs=Object.freeze(Object.defineProperty({__proto__:null,Clear:eo,Examples:ro,Flag:no,Interpret:to,Submit:Ki,default:Bs,or:io},Symbol.toStringTag,{value:"Module"})),oo="Надіслати",lo="Очистити",ao="Пояснити результат",so="Позначити",uo="Приклади",co="або",Ls={interface:{drop_image:"Перетягніть зображення сюди",drop_video:"Перетягніть відео сюди",drop_audio:"Перетягніть аудіо сюди",drop_file:"Перетягніть файл сюди",drop_csv:"Перетягніть CSV-файл 
сюди",click_to_upload:"Натисніть щоб завантажити",view_api:"Переглянути API",built_with_Gradio:"Зроблено на основі gradio"},Submit:oo,Clear:lo,Interpret:ao,Flag:so,Examples:uo,or:co},js=Object.freeze(Object.defineProperty({__proto__:null,Clear:lo,Examples:uo,Flag:so,Interpret:ao,Submit:oo,default:Ls,or:co},Symbol.toStringTag,{value:"Module"})),fo="جمع کریں",_o="ہٹا دیں",ho="تشریح کریں",po="نشان لگائیں",mo="مثالیں",go="یا",Ns={interface:{drop_image:"یہاں تصویر ڈراپ کریں",drop_video:"یہاں ویڈیو ڈراپ کریں",drop_audio:"یہاں آڈیو ڈراپ کریں",drop_file:"یہاں فائل ڈراپ کریں",drop_csv:"یہاں فائل ڈراپ کریں",click_to_upload:"اپ لوڈ کے لیے کلک کریں",view_api:"API دیکھیں",built_with_Gradio:"کے ساتھ بنایا گیا Gradio"},Submit:fo,Clear:_o,Interpret:ho,Flag:po,Examples:mo,or:go},Rs=Object.freeze(Object.defineProperty({__proto__:null,Clear:_o,Examples:mo,Flag:po,Interpret:ho,Submit:fo,default:Ns,or:go},Symbol.toStringTag,{value:"Module"})),bo="Yubor",vo="Tozalash",Eo="Tushuntirish",yo="Bayroq",So="Namunalar",wo="或",Ms={interface:{drop_image:"Rasmni Shu Yerga Tashlang",drop_video:"Videoni Shu Yerga Tashlang",drop_audio:"Audioni Shu Yerga Tashlang",drop_file:"Faylni Shu Yerga Tashlang",drop_csv:"CSVni Shu Yerga Tashlang",click_to_upload:"Yuklash uchun Bosing",view_api:"apini ko'ring",built_with_Gradio:"gradio bilan qilingan"},Submit:bo,Clear:vo,Interpret:Eo,Flag:yo,Examples:So,or:wo},xs=Object.freeze(Object.defineProperty({__proto__:null,Clear:vo,Examples:So,Flag:yo,Interpret:Eo,Submit:bo,default:Ms,or:wo},Symbol.toStringTag,{value:"Module"})),To="提交",Io="清除",ko="解释",Ao="标记",Co="示例",Po="或",Ds={interface:{drop_image:"拖放图片至此处",drop_video:"拖放视频至此处",drop_audio:"拖放音频至此处",drop_file:"拖放文件至此处",drop_csv:"拖放CSV至此处",click_to_upload:"点击上传",view_api:"查看API",built_with_Gradio:"使用Gradio构建"},Submit:To,Clear:Io,Interpret:ko,Flag:Ao,Examples:Co,or:Po},Fs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Io,Examples:Co,Flag:Ao,Interpret:ko,Submit:To,default:Ds,or:Po},Symbol.toStringTag,{value:"Module"})),Oo="提交",Bo="清除",Ho="解釋",Lo="Flag",jo="範例",No="或",Gs={interface:{drop_image:"刪除圖片",drop_video:"刪除影片",drop_audio:"刪除音頻",drop_file:"刪除檔案",drop_csv:"刪除CSV",click_to_upload:"點擊上傳",view_api:"查看API",built_with_Gradio:"使用Gradio構建"},Submit:Oo,Clear:Bo,Interpret:Ho,Flag:Lo,Examples:jo,or:No},Us=Object.freeze(Object.defineProperty({__proto__:null,Clear:Bo,Examples:jo,Flag:Lo,Interpret:Ho,Submit:Oo,default:Gs,or:No},Symbol.toStringTag,{value:"Module"})),qt=Object.assign({"./lang/ar.json":$a,"./lang/ca.json":es,"./lang/de.json":ns,"./lang/en.json":is,"./lang/es.json":ls,"./lang/fa.json":ss,"./lang/fr.json":cs,"./lang/he.json":_s,"./lang/hi.json":ps,"./lang/ja.json":ms,"./lang/ko.json":bs,"./lang/lt.json":Es,"./lang/nl.json":Ss,"./lang/pl.json":Ts,"./lang/pt-BR.json":ks,"./lang/ru.json":Cs,"./lang/ta.json":Os,"./lang/tr.json":Hs,"./lang/uk.json":js,"./lang/ur.json":Rs,"./lang/uz.json":xs,"./lang/zh-CN.json":Fs,"./lang/zh-tw.json":Us});function Vs(){let e={};for(const t in qt){const n=t.split("/").pop().split(".").shift();e[n]=qt[t].default}return e}const Xt=Vs();for(const e in Xt)nr(e,Xt[e]);function zs(){Ra({fallbackLocale:"en",initialLocale:Ma()})}function Wt(e,t,n){const r=e.slice();return r[8]=t[n].component,r[17]=t[n].id,r[2]=t[n].props,r[18]=t[n].children,r[9]=t[n].has_modes,r}function Zt(e){let t=[],n=new Map,r,i,o=oe(e[1]);const a=l=>l[17];for(let l=0;l{r=null}),ae())},i(i){n||(B(r),n=!0)},o(i){N(r),n=!1},d(i){i&&E(t),r&&r.d(i)}}}function Xs(e){let t,n,r,i;const o=[{elem_id:"elem_id"in 
e[2]&&e[2].elem_id||`component-${e[4]}`},{elem_classes:"elem_classes"in e[2]&&e[2].elem_classes||[]},{target:e[6]},e[2],{theme_mode:e[7]},{root:e[3]}];function a(s){e[15](s)}var l=e[8];function u(s){let c={$$slots:{default:[qs]},$$scope:{ctx:s}};for(let h=0;hHt(t,"value",a)),t.$on("prop_change",e[10])),{c(){t&&W(t.$$.fragment),r=de()},m(s,c){t&&Z(t,s,c),y(s,r,c),i=!0},p(s,[c]){const h=c&220?il(o,[c&20&&{elem_id:"elem_id"in s[2]&&s[2].elem_id||`component-${s[4]}`},c&4&&{elem_classes:"elem_classes"in s[2]&&s[2].elem_classes||[]},c&64&&{target:s[6]},c&4&&ol(s[2]),c&128&&{theme_mode:s[7]},c&8&&{root:s[3]}]):{};if(c&2097387&&(h.$$scope={dirty:c,ctx:s}),!n&&c&17&&(n=!0,h.value=s[0][s[4]].props.value,ll(()=>n=!1)),c&256&&l!==(l=s[8])){if(t){le();const _=t;N(_.$$.fragment,1,0,()=>{Y(_,1)}),ae()}l?(t=Bt(l,u(s)),s[14](t),De.push(()=>Ht(t,"value",a)),t.$on("prop_change",s[10]),W(t.$$.fragment),B(t.$$.fragment,1),Z(t,r.parentNode,r)):t=null}else l&&t.$set(h)},i(s){i||(t&&B(t.$$.fragment,s),i=!0)},o(s){t&&N(t.$$.fragment,s),i=!1},d(s){s&&E(r),e[14](null),t&&Y(t,s)}}}function Ws(e,t,n){let{root:r}=t,{component:i}=t,{instance_map:o}=t,{id:a}=t,{props:l}=t,{children:u}=t,{dynamic_ids:s}=t,{has_modes:c}=t,{parent:h=null}=t,{target:_}=t,{theme_mode:p}=t;const v=$e();c&&(l.interactive===!1?l.mode="static":l.interactive===!0||s.has(a)?l.mode="dynamic":l.mode="static"),Et(()=>(v("mount",a),()=>v("destroy",a))),al("BLOCK_KEY",h);function b(f){for(const P in f.detail)n(0,o[a].props[P]=f.detail[P],o)}function g(f){ke.call(this,e,f)}function S(f){ke.call(this,e,f)}function k(f){De[f?"unshift":"push"](()=>{o[a].instance=f,n(0,o)})}function T(f){e.$$.not_equal(o[a].props.value,f)&&(o[a].props.value=f,n(0,o))}return e.$$set=f=>{"root"in f&&n(3,r=f.root),"component"in f&&n(8,i=f.component),"instance_map"in f&&n(0,o=f.instance_map),"id"in f&&n(4,a=f.id),"props"in f&&n(2,l=f.props),"children"in f&&n(1,u=f.children),"dynamic_ids"in f&&n(5,s=f.dynamic_ids),"has_modes"in f&&n(9,c=f.has_modes),"parent"in f&&n(11,h=f.parent),"target"in f&&n(6,_=f.target),"theme_mode"in f&&n(7,p=f.theme_mode)},e.$$.update=()=>{e.$$.dirty&3&&n(1,u=u&&u.filter(f=>o[f.id].type!=="statustracker")),e.$$.dirty&19&&o[a].type==="form"&&(u?.every(f=>!f.props.visible)?n(2,l.visible=!1,l):n(2,l.visible=!0,l))},[o,u,l,r,a,s,_,p,i,c,b,h,g,S,k,T]}class Ro extends ue{constructor(t){super(),ce(this,t,Ws,Xs,fe,{root:3,component:8,instance_map:0,id:4,props:2,children:1,dynamic_ids:5,has_modes:9,parent:11,target:6,theme_mode:7})}}function Zs(e){let t,n,r,i;return{c(){t=be("svg"),n=be("g"),r=be("path"),i=be("path"),d(r,"d","M3.789,0.09C3.903,-0.024 4.088,-0.024 4.202,0.09L4.817,0.705C4.931,0.819 4.931,1.004 4.817,1.118L1.118,4.817C1.004,4.931 0.819,4.931 0.705,4.817L0.09,4.202C-0.024,4.088 -0.024,3.903 0.09,3.789L3.789,0.09Z"),d(i,"d","M4.825,3.797C4.934,3.907 4.934,4.084 4.825,4.193L4.193,4.825C4.084,4.934 3.907,4.934 3.797,4.825L0.082,1.11C-0.027,1.001 -0.027,0.823 0.082,0.714L0.714,0.082C0.823,-0.027 1.001,-0.027 1.11,0.082L4.825,3.797Z"),d(t,"width","100%"),d(t,"height","100%"),d(t,"viewBox","0 0 5 5"),d(t,"version","1.1"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"xmlns:xlink","http://www.w3.org/1999/xlink"),d(t,"xml:space","preserve"),ge(t,"fill","currentColor"),ge(t,"fill-rule","evenodd"),ge(t,"clip-rule","evenodd"),ge(t,"stroke-linejoin","round"),ge(t,"stroke-miterlimit","2")},m(o,a){y(o,t,a),m(t,n),m(n,r),m(n,i)},p:$,i:$,o:$,d(o){o&&E(t)}}}class Mo extends ue{constructor(t){super(),ce(this,t,null,Zs,fe,{})}}function Ys(e){let 
t,n,r,i,o,a,l,u,s,c,h,_,p,v,b;return _=new Mo({}),{c(){t=A("div"),n=A("h1"),n.textContent="API Docs",r=x(),i=A("p"),o=I(`No API Routes found for - `),a=A("code"),l=I(e[0]),u=x(),s=A("p"),s.innerHTML=`To expose an API endpoint of your app in this page, set the api_name - parameter of the event listener. -
          - For more information, visit the - API Page guide - . To hide the API documentation button and this page, set - show_api=False - in the - Blocks.launch() - method.`,c=x(),h=A("button"),W(_.$$.fragment),d(a,"class","svelte-e1ha0f"),d(i,"class","attention svelte-e1ha0f"),d(t,"class","wrap prose svelte-e1ha0f"),d(h,"class","svelte-e1ha0f")},m(g,S){y(g,t,S),m(t,n),m(t,r),m(t,i),m(i,o),m(i,a),m(a,l),m(t,u),m(t,s),y(g,c,S),y(g,h,S),Z(_,h,null),p=!0,v||(b=Se(h,"click",e[2]),v=!0)},p(g,[S]){(!p||S&1)&&q(l,g[0])},i(g){p||(B(_.$$.fragment,g),p=!0)},o(g){N(_.$$.fragment,g),p=!1},d(g){g&&(E(t),E(c),E(h)),Y(_),v=!1,b()}}}function Js(e,t,n){const r=$e();let{root:i}=t;const o=()=>r("close");return e.$$set=a=>{"root"in a&&n(0,i=a.root)},[i,r,o]}class Qs extends ue{constructor(t){super(),ce(this,t,Js,Ys,fe,{root:0})}}function Qe(e,t,n=null){return t===void 0?n==="py"?"None":null:t==="string"||t==="str"?n===null?e:'"'+e+'"':t==="number"?n===null?parseFloat(e):e:t==="boolean"||t=="bool"?n==="py"?(e=String(e),e==="true"?"True":"False"):n==="js"?e:e==="true":t==="List[str]"?(e=JSON.stringify(e),e):n===null?e===""?null:JSON.parse(e):typeof e=="string"?e===""?n==="py"?"None":"null":e:JSON.stringify(e)}const xo="https://gradio.s3-us-west-2.amazonaws.com/3.37.0/assets/api-logo-5346f193.svg";function Jt(e){let t;return{c(){t=I("s")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function $s(e){let t,n,r,i,o,a,l,u,s,c,h,_,p,v,b,g,S,k,T,f=e[1]>1&&Jt();return g=new Mo({}),{c(){t=A("h2"),n=A("img"),i=x(),o=A("div"),a=I(`API documentation - `),l=A("div"),u=I(e[0]),s=x(),c=A("span"),h=A("span"),_=I(e[1]),p=I(" API endpoint"),f&&f.c(),v=x(),b=A("button"),W(g.$$.fragment),Ue(n.src,r=xo)||d(n,"src",r),d(n,"alt",""),d(n,"class","svelte-3n2nxs"),d(l,"class","url svelte-3n2nxs"),d(h,"class","url svelte-3n2nxs"),d(c,"class","counts svelte-3n2nxs"),d(t,"class","svelte-3n2nxs"),d(b,"class","svelte-3n2nxs")},m(P,H){y(P,t,H),m(t,n),m(t,i),m(t,o),m(o,a),m(o,l),m(l,u),m(t,s),m(t,c),m(c,h),m(h,_),m(c,p),f&&f.m(c,null),y(P,v,H),y(P,b,H),Z(g,b,null),S=!0,k||(T=Se(b,"click",e[3]),k=!0)},p(P,[H]){(!S||H&1)&&q(u,P[0]),(!S||H&2)&&q(_,P[1]),P[1]>1?f||(f=Jt(),f.c(),f.m(c,null)):f&&(f.d(1),f=null)},i(P){S||(B(g.$$.fragment,P),S=!0)},o(P){N(g.$$.fragment,P),S=!1},d(P){P&&(E(t),E(v),E(b)),f&&f.d(),Y(g),k=!1,T()}}}function Ks(e,t,n){let{root:r}=t,{api_count:i}=t;const o=$e(),a=()=>o("close");return e.$$set=l=>{"root"in l&&n(0,r=l.root),"api_count"in l&&n(1,i=l.api_count)},[r,i,o,a]}class eu extends ue{constructor(t){super(),ce(this,t,Ks,$s,fe,{root:0,api_count:1})}}function tu(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M12 9v3.75m9-.75a9 9 0 11-18 0 9 9 0 0118 0zm-9 3.75h.008v.008H12v-.008z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"viewBox","0 0 24 24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-width","2"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}let nu=class extends ue{constructor(t){super(),ce(this,t,null,tu,fe,{})}};function ru(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M11.25 11.25l.041-.02a.75.75 0 011.063.852l-.708 2.836a.75.75 0 001.063.853l.041-.021M21 12a9 9 0 11-18 0 9 9 0 0118 0zm-9-3.75h.008v.008H12V8.25z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"viewBox","0 0 24 
24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-width","2"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}class iu extends ue{constructor(t){super(),ce(this,t,null,ru,fe,{})}}function ou(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M12 9v3.75m-9.303 3.376c-.866 1.5.217 3.374 1.948 3.374h14.71c1.73 0 2.813-1.874 1.948-3.374L13.949 3.378c-.866-1.5-3.032-1.5-3.898 0L2.697 16.126zM12 15.75h.007v.008H12v-.008z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"stroke-width","2"),d(t,"viewBox","0 0 24 24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}class lu extends ue{constructor(t){super(),ce(this,t,null,ou,fe,{})}}function Qt(e,t,n){const r=e.slice();return r[10]=t[n].label,r[11]=t[n].type,r[12]=t[n].python_type,r[13]=t[n].component,r[14]=t[n].serializer,r[16]=n,r}function $t(e){let t;return{c(){t=I("(")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function au(e){let t=e[2][e[16]].type+"",n;return{c(){n=I(t)},m(r,i){y(r,n,i)},p(r,i){i&4&&t!==(t=r[2][r[16]].type+"")&&q(n,t)},d(r){r&&E(n)}}}function su(e){let t=e[12].type+"",n;return{c(){n=I(t)},m(r,i){y(r,n,i)},p(r,i){i&2&&t!==(t=r[12].type+"")&&q(n,t)},d(r){r&&E(n)}}}function Kt(e){let t;return{c(){t=I(",")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function en(e){let t,n,r,i,o=e[10]+"",a,l,u=e[13]+"",s,c;function h(b,g){return b[3]==="python"?su:au}let _=h(e),p=_(e),v=e[1].length>1&&Kt();return{c(){t=A("div"),n=A("span"),r=I("# "),p.c(),i=I(` - representing output in '`),a=I(o),l=I("' "),s=I(u),c=I(` - component`),v&&v.c(),d(n,"class","desc svelte-1c7hj3i"),d(t,"class","svelte-1c7hj3i"),Ye(t,"second-level",e[1].length>1)},m(b,g){y(b,t,g),m(t,n),m(n,r),p.m(n,null),m(n,i),m(n,a),m(n,l),m(n,s),m(n,c),v&&v.m(t,null)},p(b,g){_===(_=h(b))&&p?p.p(b,g):(p.d(1),p=_(b),p&&(p.c(),p.m(n,i))),g&2&&o!==(o=b[10]+"")&&q(a,o),g&2&&u!==(u=b[13]+"")&&q(s,u),b[1].length>1?v||(v=Kt(),v.c(),v.m(t,null)):v&&(v.d(1),v=null),g&2&&Ye(t,"second-level",b[1].length>1)},d(b){b&&E(t),p.d(),v&&v.d()}}}function tn(e){let t;return{c(){t=I(")")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function nn(e){let t,n,r;return n=new cl({props:{margin:!1}}),{c(){t=A("div"),W(n.$$.fragment),d(t,"class","load-wrap svelte-1c7hj3i")},m(i,o){y(i,t,o),Z(n,t,null),r=!0},i(i){r||(B(n.$$.fragment,i),r=!0)},o(i){N(n.$$.fragment,i),r=!1},d(i){i&&E(t),Y(n)}}}function uu(e){let t,n,r,i,o,a,l=e[1].length>1&&$t(),u=oe(e[1]),s=[];for(let _=0;_1&&tn(),h=e[0]&&nn();return{c(){t=A("div"),n=A("div"),l&&l.c(),r=x();for(let _=0;_1?l||(l=$t(),l.c(),l.m(n,r)):l&&(l.d(1),l=null),p&14){u=oe(_[1]);let v;for(v=0;v1?c||(c=tn(),c.c(),c.m(n,null)):c&&(c.d(1),c=null),(!a||p&1)&&Ye(n,"hide",_[0]),_[0]?h?p&1&&B(h,1):(h=nn(),h.c(),B(h,1),h.m(t,null)):h&&(le(),N(h,1,1,()=>{h=null}),ae())},i(_){a||(B(h),a=!0)},o(_){N(h),a=!1},d(_){_&&E(t),l&&l.d(),Ie(s,_),c&&c.d(),h&&h.d()}}}function cu(e){let t,n,r,i;return r=new yt({props:{$$slots:{default:[uu]},$$scope:{ctx:e}}}),{c(){t=A("h4"),t.innerHTML=`
          - Return Type(s)`,n=x(),W(r.$$.fragment),d(t,"class","svelte-1c7hj3i")},m(o,a){y(o,t,a),y(o,n,a),Z(r,o,a),i=!0},p(o,[a]){const l={};a&131087&&(l.$$scope={dirty:a,ctx:o}),r.$set(l)},i(o){i||(B(r.$$.fragment,o),i=!0)},o(o){N(r.$$.fragment,o),i=!1},d(o){o&&(E(t),E(n)),Y(r,o)}}}function fu(e,t,n){let{dependency:r}=t,{dependency_index:i}=t,{instance_map:o}=t,{dependency_outputs:a}=t,{is_running:l}=t,{root:u}=t,{endpoint_returns:s}=t,{js_returns:c}=t,{named:h}=t,{current_language:_}=t;return e.$$set=p=>{"dependency"in p&&n(4,r=p.dependency),"dependency_index"in p&&n(5,i=p.dependency_index),"instance_map"in p&&n(6,o=p.instance_map),"dependency_outputs"in p&&n(7,a=p.dependency_outputs),"is_running"in p&&n(0,l=p.is_running),"root"in p&&n(8,u=p.root),"endpoint_returns"in p&&n(1,s=p.endpoint_returns),"js_returns"in p&&n(2,c=p.js_returns),"named"in p&&n(9,h=p.named),"current_language"in p&&n(3,_=p.current_language)},[l,s,c,_,r,i,o,a,u,h]}class Do extends ue{constructor(t){super(),ce(this,t,fu,cu,fe,{dependency:4,dependency_index:5,instance_map:6,dependency_outputs:7,is_running:0,root:8,endpoint_returns:1,js_returns:2,named:9,current_language:3})}}function _u(e){let t;return{c(){t=I(e[0])},m(n,r){y(n,t,r)},p(n,r){r&1&&q(t,n[0])},d(n){n&&E(t)}}}function hu(e){let t,n;return t=new Sl({props:{size:"sm",$$slots:{default:[_u]},$$scope:{ctx:e}}}),t.$on("click",e[1]),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,[i]){const o={};i&9&&(o.$$scope={dirty:i,ctx:r}),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function pu(e,t,n){let{code:r}=t,i="copy";function o(){navigator.clipboard.writeText(r),n(0,i="copied!"),setTimeout(()=>{n(0,i="copy")},1500)}return e.$$set=a=>{"code"in a&&n(2,r=a.code)},[i,o,r]}class nt extends ue{constructor(t){super(),ce(this,t,pu,hu,fe,{code:2})}}function du(e){let t,n,r,i,o,a;return n=new nt({props:{code:on}}),{c(){t=A("div"),W(n.$$.fragment),r=x(),i=A("div"),o=A("pre"),o.textContent=`$ ${on}`,d(t,"class","copy svelte-hq8ezf"),d(o,"class","svelte-hq8ezf")},m(l,u){y(l,t,u),Z(n,t,null),y(l,r,u),y(l,i,u),m(i,o),a=!0},p:$,i(l){a||(B(n.$$.fragment,l),a=!0)},o(l){N(n.$$.fragment,l),a=!1},d(l){l&&(E(t),E(r),E(i)),Y(n)}}}function mu(e){let t,n,r,i,o,a;return n=new nt({props:{code:rn}}),{c(){t=A("div"),W(n.$$.fragment),r=x(),i=A("div"),o=A("pre"),o.textContent=`$ ${rn}`,d(t,"class","copy svelte-hq8ezf"),d(o,"class","svelte-hq8ezf")},m(l,u){y(l,t,u),Z(n,t,null),y(l,r,u),y(l,i,u),m(i,o),a=!0},p:$,i(l){a||(B(n.$$.fragment,l),a=!0)},o(l){N(n.$$.fragment,l),a=!1},d(l){l&&(E(t),E(r),E(i)),Y(n)}}}function gu(e){let t,n,r,i;const o=[mu,du],a=[];function l(u,s){return u[0]==="python"?0:u[0]==="javascript"?1:-1}return~(n=l(e))&&(r=a[n]=o[n](e)),{c(){t=A("code"),r&&r.c(),d(t,"class","svelte-hq8ezf")},m(u,s){y(u,t,s),~n&&a[n].m(t,null),i=!0},p(u,s){let c=n;n=l(u),n===c?~n&&a[n].p(u,s):(r&&(le(),N(a[c],1,1,()=>{a[c]=null}),ae()),~n?(r=a[n],r?r.p(u,s):(r=a[n]=o[n](u),r.c()),B(r,1),r.m(t,null)):r=null)},i(u){i||(B(r),i=!0)},o(u){N(r),i=!1},d(u){u&&E(t),~n&&a[n].d()}}}function bu(e){let t,n;return t=new yt({props:{$$slots:{default:[gu]},$$scope:{ctx:e}}}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,[i]){const o={};i&3&&(o.$$scope={dirty:i,ctx:r}),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}let rn="pip install gradio_client",on="npm i -D @gradio/client";function vu(e,t,n){let{current_language:r}=t;return e.$$set=i=>{"current_language"in i&&n(0,r=i.current_language)},[r]}class Eu extends 
ue{constructor(t){super(),ce(this,t,vu,bu,fe,{current_language:0})}}function yu(e){let t,n,r,i;return{c(){t=A("h3"),n=I(`fn_index: - `),r=A("span"),i=I(e[1]),d(r,"class","post svelte-41kcm6"),d(t,"class","svelte-41kcm6")},m(o,a){y(o,t,a),m(t,n),m(t,r),m(r,i)},p(o,a){a&2&&q(i,o[1])},d(o){o&&E(t)}}}function Su(e){let t,n,r,i="/"+e[0],o;return{c(){t=A("h3"),n=I(`api_name: - `),r=A("span"),o=I(i),d(r,"class","post svelte-41kcm6"),d(t,"class","svelte-41kcm6")},m(a,l){y(a,t,l),m(t,n),m(t,r),m(r,o)},p(a,l){l&1&&i!==(i="/"+a[0])&&q(o,i)},d(a){a&&E(t)}}}function wu(e){let t;function n(o,a){return o[2]?Su:yu}let r=n(e),i=r(e);return{c(){i.c(),t=de()},m(o,a){i.m(o,a),y(o,t,a)},p(o,[a]){r===(r=n(o))&&i?i.p(o,a):(i.d(1),i=r(o),i&&(i.c(),i.m(t.parentNode,t)))},i:$,o:$,d(o){o&&E(t),i.d(o)}}}function Tu(e,t,n){let{api_name:r=null}=t,{fn_index:i=null}=t,{named:o}=t;return e.$$set=a=>{"api_name"in a&&n(0,r=a.api_name),"fn_index"in a&&n(1,i=a.fn_index),"named"in a&&n(2,o=a.named)},[r,i,o]}class Fo extends ue{constructor(t){super(),ce(this,t,Tu,wu,fe,{api_name:0,fn_index:1,named:2})}}function ln(e,t,n){const r=e.slice();return r[17]=t[n].label,r[18]=t[n].type,r[19]=t[n].python_type,r[20]=t[n].component,r[21]=t[n].example_input,r[22]=t[n].serializer,r[24]=n,r}function an(e,t,n){const r=e.slice();return r[17]=t[n].label,r[18]=t[n].type,r[19]=t[n].python_type,r[20]=t[n].component,r[21]=t[n].example_input,r[22]=t[n].serializer,r[24]=n,r}function sn(e,t,n){const r=e.slice();return r[17]=t[n].label,r[18]=t[n].type,r[19]=t[n].python_type,r[20]=t[n].component,r[21]=t[n].example_input,r[22]=t[n].serializer,r[24]=n,r}function Iu(e){let t,n;return t=new Fo({props:{named:e[6],fn_index:e[1]}}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,i){const o={};i&64&&(o.named=r[6]),i&2&&(o.fn_index=r[1]),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function ku(e){let t,n;return t=new Fo({props:{named:e[6],api_name:e[0].api_name}}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,i){const o={};i&64&&(o.named=r[6]),i&1&&(o.api_name=r[0].api_name),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Au(e){let t,n,r,i,o,a,l,u,s,c,h,_,p,v,b;n=new nt({props:{code:e[9]?.innerText}});let g=oe(e[11]),S=[];for(let L=0;L{a[c]=null}),ae()),~n?(r=a[n],r?r.p(u,s):(r=a[n]=o[n](u),r.c()),B(r,1),r.m(t,null)):r=null)},i(u){i||(B(r),i=!0)},o(u){N(r),i=!1},d(u){u&&E(t),~n&&a[n].d()}}}function Ru(e){let t,n,r,i,o,a;const l=[ku,Iu],u=[];function s(c,h){return c[6]?0:1}return n=s(e),r=u[n]=l[n](e),o=new yt({props:{$$slots:{default:[Nu]},$$scope:{ctx:e}}}),{c(){t=A("div"),r.c(),i=x(),W(o.$$.fragment),d(t,"class","container svelte-1d98qmk")},m(c,h){y(c,t,h),u[n].m(t,null),m(t,i),Z(o,t,null),a=!0},p(c,[h]){let _=n;n=s(c),n===_?u[n].p(c,h):(le(),N(u[_],1,1,()=>{u[_]=null}),ae(),r=u[n],r?r.p(c,h):(r=u[n]=l[n](c),r.c()),B(r,1),r.m(t,i));const p={};h&134218751&&(p.$$scope={dirty:h,ctx:c}),o.$set(p)},i(c){a||(B(r),B(o.$$.fragment,c),a=!0)},o(c){N(r),N(o.$$.fragment,c),a=!1},d(c){c&&E(t),u[n].d(),Y(o)}}}function Mu(e,t,n){let{dependency:r}=t,{dependencies:i}=t,{dependency_index:o}=t,{instance_map:a}=t,{root:l}=t,{dependency_inputs:u}=t,{dependency_failures:s}=t,{endpoint_parameters:c}=t,{js_parameters:h}=t,{named:_}=t,{current_language:p}=t,v,b,g=["Audio","File","Image","Video"],S=c.filter(f=>g.includes(f.component));function k(f){De[f?"unshift":"push"](()=>{v=f,n(8,v)})}function T(f){De[f?"unshift":"push"](()=>{b=f,n(9,b)})}return e.$$set=f=>{"dependency"in 
f&&n(0,r=f.dependency),"dependencies"in f&&n(12,i=f.dependencies),"dependency_index"in f&&n(1,o=f.dependency_index),"instance_map"in f&&n(13,a=f.instance_map),"root"in f&&n(2,l=f.root),"dependency_inputs"in f&&n(14,u=f.dependency_inputs),"dependency_failures"in f&&n(3,s=f.dependency_failures),"endpoint_parameters"in f&&n(4,c=f.endpoint_parameters),"js_parameters"in f&&n(5,h=f.js_parameters),"named"in f&&n(6,_=f.named),"current_language"in f&&n(7,p=f.current_language)},[r,o,l,s,c,h,_,p,v,b,g,S,i,a,u,k,T]}class Go extends ue{constructor(t){super(),ce(this,t,Mu,Ru,fe,{dependency:0,dependencies:12,dependency_index:1,instance_map:13,root:2,dependency_inputs:14,dependency_failures:3,endpoint_parameters:4,js_parameters:5,named:6,current_language:7})}}const xu="https://gradio.s3-us-west-2.amazonaws.com/3.37.0/assets/python-20e39c92.svg",Du="https://gradio.s3-us-west-2.amazonaws.com/3.37.0/assets/javascript-850cf94b.svg";function dn(e,t,n){const r=e.slice();return r[18]=t[n],r[20]=n,r}function mn(e,t,n){const r=e.slice();return r[18]=t[n],r[20]=n,r}function gn(e,t,n){const r=e.slice();return r[22]=t[n][0],r[23]=t[n][1],r}function bn(e){let t,n,r,i,o;const a=[Gu,Fu],l=[];function u(s,c){return c&128&&(t=null),t==null&&(t=!!(Object.keys(s[7].named_endpoints).length+Object.keys(s[7].unnamed_endpoints).length)),t?0:1}return n=u(e,-1),r=l[n]=a[n](e),{c(){r.c(),i=de()},m(s,c){l[n].m(s,c),y(s,i,c),o=!0},p(s,c){let h=n;n=u(s,c),n===h?l[n].p(s,c):(le(),N(l[h],1,1,()=>{l[h]=null}),ae(),r=l[n],r?r.p(s,c):(r=l[n]=a[n](s),r.c()),B(r,1),r.m(i.parentNode,i))},i(s){o||(B(r),o=!0)},o(s){N(r),o=!1},d(s){s&&E(i),l[n].d(s)}}}function Fu(e){let t,n;return t=new Qs({props:{root:e[0]}}),t.$on("close",e[14]),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,i){const o={};i&1&&(o.root=r[0]),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Gu(e){let t,n,r,i,o,a,l,u,s,c,h,_=Object.keys(e[7].named_endpoints).length,p,v,b=Object.keys(e[7].unnamed_endpoints).length,g,S;n=new eu({props:{root:e[0],api_count:Object.keys(e[7].named_endpoints).length+Object.keys(e[7].unnamed_endpoints).length}}),n.$on("close",e[12]);let k=oe(e[9]),T=[];for(let j=0;jN(H[j],1,1,()=>{H[j]=null});let D=b&&wn(),J=oe(e[2]),C=[];for(let j=0;jN(C[j],1,1,()=>{C[j]=null});return{c(){t=A("div"),W(n.$$.fragment),r=x(),i=A("div"),o=A("div"),o.innerHTML=`

          Use the gradio_client - Python library or the - @gradio/client Javascript package to query the demo via API.

          `,a=x(),l=A("div"),u=A("div");for(let j=0;j{r=null}),ae())},i(i){n||(B(r),n=!0)},o(i){N(r),n=!1},d(i){i&&E(t),r&&r.d(i)}}}function wn(e){let t;return{c(){t=A("h2"),t.textContent="Unnamed Endpoints",d(t,"class","header svelte-bdjvpc")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function Tn(e){let t,n,r,i,o,a;return n=new Go({props:{named:!1,endpoint_parameters:e[7].unnamed_endpoints[e[20]].parameters,js_parameters:e[8].unnamed_endpoints[e[20]].parameters,instance_map:e[1],dependency:e[18],dependency_index:e[20],current_language:e[3],root:e[0],dependency_inputs:e[10],dependencies:e[2],dependency_failures:e[6]}}),i=new Do({props:{named:!1,endpoint_returns:e[7].unnamed_endpoints[e[20]].returns,js_returns:e[8].unnamed_endpoints[e[20]].returns,instance_map:e[1],dependency:e[18],dependency_index:e[20],is_running:e[4],dependency_outputs:e[5],current_language:e[3],root:e[0]}}),{c(){t=A("div"),W(n.$$.fragment),r=x(),W(i.$$.fragment),o=x(),d(t,"class","endpoint-container svelte-bdjvpc")},m(l,u){y(l,t,u),Z(n,t,null),m(t,r),Z(i,t,null),m(t,o),a=!0},p(l,u){const s={};u&128&&(s.endpoint_parameters=l[7].unnamed_endpoints[l[20]].parameters),u&256&&(s.js_parameters=l[8].unnamed_endpoints[l[20]].parameters),u&2&&(s.instance_map=l[1]),u&4&&(s.dependency=l[18]),u&8&&(s.current_language=l[3]),u&1&&(s.root=l[0]),u&4&&(s.dependencies=l[2]),u&64&&(s.dependency_failures=l[6]),n.$set(s);const c={};u&128&&(c.endpoint_returns=l[7].unnamed_endpoints[l[20]].returns),u&256&&(c.js_returns=l[8].unnamed_endpoints[l[20]].returns),u&2&&(c.instance_map=l[1]),u&4&&(c.dependency=l[18]),u&16&&(c.is_running=l[4]),u&32&&(c.dependency_outputs=l[5]),u&8&&(c.current_language=l[3]),u&1&&(c.root=l[0]),i.$set(c)},i(l){a||(B(n.$$.fragment,l),B(i.$$.fragment,l),a=!0)},o(l){N(n.$$.fragment,l),N(i.$$.fragment,l),a=!1},d(l){l&&E(t),Y(n),Y(i)}}}function In(e){let t,n,r=e[7].unnamed_endpoints[e[20]]&&Tn(e);return{c(){r&&r.c(),t=de()},m(i,o){r&&r.m(i,o),y(i,t,o),n=!0},p(i,o){i[7].unnamed_endpoints[i[20]]?r?(r.p(i,o),o&128&&B(r,1)):(r=Tn(i),r.c(),B(r,1),r.m(t.parentNode,t)):r&&(le(),N(r,1,1,()=>{r=null}),ae())},i(i){n||(B(r),n=!0)},o(i){N(r),n=!1},d(i){i&&E(t),r&&r.d(i)}}}function Uu(e){let t,n,r=e[7]&&bn(e);return{c(){r&&r.c(),t=de()},m(i,o){r&&r.m(i,o),y(i,t,o),n=!0},p(i,[o]){i[7]?r?(r.p(i,o),o&128&&B(r,1)):(r=bn(i),r.c(),B(r,1),r.m(t.parentNode,t)):r&&(le(),N(r,1,1,()=>{r=null}),ae())},i(i){n||(B(r),n=!0)},o(i){N(r),n=!1},d(i){i&&E(t),r&&r.d(i)}}}function Vu(e,t,n){let{instance_map:r}=t,{dependencies:i}=t,{root:o}=t,{app:a}=t;o===""&&(o=location.protocol+"//"+location.host+location.pathname),o.endsWith("/")||(o+="/");let l="python";const u=[["python",xu],["javascript",Du]];let s=!1,c=i.map(f=>f.inputs.map(P=>{let H=r[P].documentation?.example_data;return H===void 0?H="":typeof H=="object"&&(H=JSON.stringify(H)),H})),h=i.map(f=>new Array(f.outputs.length)),_=i.map(f=>new Array(f.inputs.length).fill(!1));async function p(){return await(await fetch(o+"info")).json()}async function v(){return await a.view_api()}let b,g;p().then(f=>n(7,b=f)).catch(f=>console.log(f)),v().then(f=>n(8,g=f)),Et(()=>(document.body.style.overflow="hidden","parentIFrame"in window&&window.parentIFrame?.scrollTo(0,0),()=>{document.body.style.overflow="auto"}));function S(f){ke.call(this,e,f)}const k=f=>n(3,l=f);function T(f){ke.call(this,e,f)}return e.$$set=f=>{"instance_map"in f&&n(1,r=f.instance_map),"dependencies"in f&&n(2,i=f.dependencies),"root"in f&&n(0,o=f.root),"app"in f&&n(11,a=f.app)},[o,r,i,l,s,h,_,b,g,u,c,a,S,k,T]}class zu extends 
ue{constructor(t){super(),ce(this,t,Vu,Uu,fe,{instance_map:1,dependencies:2,root:0,app:11})}}function qu(e,{from:t,to:n},r={}){const i=getComputedStyle(e),o=i.transform==="none"?"":i.transform,[a,l]=i.transformOrigin.split(" ").map(parseFloat),u=t.left+t.width*a/n.width-(n.left+a),s=t.top+t.height*l/n.height-(n.top+l),{delay:c=0,duration:h=p=>Math.sqrt(p)*120,easing:_=wl}=r;return{delay:c,duration:fl(h)?h(Math.sqrt(u*u+s*s)):h,easing:_,css:(p,v)=>{const b=v*u,g=v*s,S=p+v*t.width/n.width,k=p+v*t.height/n.height;return`transform: ${o} translate(${b}px, ${g}px) scale(${S}, ${k});`}}}function Xu(e){let t,n;return t=new nu({}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Wu(e){let t,n;return t=new iu({}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Zu(e){let t,n;return t=new lu({}),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Yu(e){let t,n,r,i,o,a,l,u,s,c,h,_,p,v,b,g,S,k,T,f,P,H,L,D,J,C,pe,j;const G=[Zu,Wu,Xu],Q=[];function ve(O,K){return O[1]==="warning"?0:O[1]==="info"?1:O[1]==="error"?2:-1}return~(r=ve(e))&&(i=Q[r]=G[r](e)),{c(){t=A("div"),n=A("div"),i&&i.c(),a=x(),l=A("div"),u=A("div"),s=I(e[1]),h=x(),_=A("div"),p=I(e[0]),g=x(),S=A("button"),k=A("span"),k.textContent="×",f=x(),P=A("div"),d(n,"class",o="toast-icon "+e[1]+" svelte-z3l7qj"),d(u,"class",c="toast-title "+e[1]+" svelte-z3l7qj"),d(_,"class",v="toast-text "+e[1]+" svelte-z3l7qj"),d(l,"class",b="toast-details "+e[1]+" svelte-z3l7qj"),d(k,"aria-hidden","true"),d(S,"class",T="toast-close "+e[1]+" svelte-z3l7qj"),d(S,"type","button"),d(S,"aria-label","Close"),d(S,"data-testid","toast-close"),d(P,"class",H="timer "+e[1]+" svelte-z3l7qj"),d(t,"class",L="toast-body "+e[1]+" svelte-z3l7qj"),d(t,"role","alert"),d(t,"data-testid","toast-body")},m(O,K){y(O,t,K),m(t,n),~r&&Q[r].m(n,null),m(t,a),m(t,l),m(l,u),m(u,s),m(l,h),m(l,_),m(_,p),m(t,g),m(t,S),m(S,k),m(t,f),m(t,P),C=!0,pe||(j=[Se(S,"click",e[2]),Se(t,"click",Lt(e[4])),Se(t,"keydown",Lt(e[5]))],pe=!0)},p(O,[K]){let Ae=r;r=ve(O),r!==Ae&&(i&&(le(),N(Q[Ae],1,1,()=>{Q[Ae]=null}),ae()),~r?(i=Q[r],i||(i=Q[r]=G[r](O),i.c()),B(i,1),i.m(n,null)):i=null),(!C||K&2&&o!==(o="toast-icon "+O[1]+" svelte-z3l7qj"))&&d(n,"class",o),(!C||K&2)&&q(s,O[1]),(!C||K&2&&c!==(c="toast-title "+O[1]+" svelte-z3l7qj"))&&d(u,"class",c),(!C||K&1)&&q(p,O[0]),(!C||K&2&&v!==(v="toast-text "+O[1]+" svelte-z3l7qj"))&&d(_,"class",v),(!C||K&2&&b!==(b="toast-details "+O[1]+" svelte-z3l7qj"))&&d(l,"class",b),(!C||K&2&&T!==(T="toast-close "+O[1]+" svelte-z3l7qj"))&&d(S,"class",T),(!C||K&2&&H!==(H="timer "+O[1]+" svelte-z3l7qj"))&&d(P,"class",H),(!C||K&2&&L!==(L="toast-body "+O[1]+" svelte-z3l7qj"))&&d(t,"class",L)},i(O){C||(B(i),O&&_l(()=>{C&&(J&&J.end(1),D=hl(t,jt,{duration:200,delay:100}),D.start())}),C=!0)},o(O){N(i),D&&D.invalidate(),O&&(J=pl(t,jt,{duration:200})),C=!1},d(O){O&&E(t),~r&&Q[r].d(),O&&J&&J.end(),pe=!1,dl(j)}}}function Ju(e,t,n){let{message:r=""}=t,{type:i}=t,{id:o}=t;const a=$e();function l(){a("close",o)}Et(()=>{setTimeout(()=>{l()},1e4)});function u(c){ke.call(this,e,c)}function s(c){ke.call(this,e,c)}return e.$$set=c=>{"message"in c&&n(0,r=c.message),"type"in c&&n(1,i=c.type),"id"in c&&n(3,o=c.id)},[r,i,l,o,u,s]}class Qu extends ue{constructor(t){super(),ce(this,t,Ju,Yu,fe,{message:0,type:1,id:3})}}function kn(e,t,n){const r=e.slice();return 
r[2]=t[n].type,r[3]=t[n].message,r[4]=t[n].id,r}function An(e,t){let n,r,i,o,a=$,l;return r=new Qu({props:{type:t[2],message:t[3],id:t[4]}}),r.$on("close",t[1]),{key:e,first:null,c(){n=A("div"),W(r.$$.fragment),i=x(),ge(n,"width","100%"),this.first=n},m(u,s){y(u,n,s),Z(r,n,null),m(n,i),l=!0},p(u,s){t=u;const c={};s&1&&(c.type=t[2]),s&1&&(c.message=t[3]),s&1&&(c.id=t[4]),r.$set(c)},r(){o=n.getBoundingClientRect()},f(){Il(n),a()},a(){a(),a=Tl(n,o,qu,{duration:300})},i(u){l||(B(r.$$.fragment,u),l=!0)},o(u){N(r.$$.fragment,u),l=!1},d(u){u&&E(n),Y(r)}}}function $u(e){let t,n=[],r=new Map,i,o=oe(e[0]);const a=l=>l[4];for(let l=0;l0&&"parentIFrame"in window&&window.parentIFrame?.scrollTo(0,0)}function ec(e,t,n){let{messages:r=[]}=t;function i(o){ke.call(this,e,o)}return e.$$set=o=>{"messages"in o&&n(0,r=o.messages)},e.$$.update=()=>{e.$$.dirty&1&&Ku(r)},[r,i]}class tc extends ue{constructor(t){super(),ce(this,t,ec,$u,fe,{messages:0})}}const nc="https://gradio.s3-us-west-2.amazonaws.com/3.37.0/assets/logo-0a070fcf.svg";const{document:xe}=El;function Cn(e){return xe.title=e[3],{c:$,m:$,d:$}}function Pn(e){let t,n,r,i;return{c(){t=A("script"),t.innerHTML="",r=x(),i=A("script"),i.textContent=`window.dataLayer = window.dataLayer || []; - function gtag() { - dataLayer.push(arguments); - } - gtag("js", new Date()); - gtag("config", "UA-156449732-1");`,t.async=!0,t.defer=!0,Ue(t.src,n="https://www.googletagmanager.com/gtag/js?id=UA-156449732-1")||d(t,"src",n)},m(o,a){y(o,t,a),y(o,r,a),y(o,i,a)},d(o){o&&(E(t),E(r),E(i))}}}function On(e){let t,n;return t=new Ro({props:{has_modes:e[12].has_modes,component:e[12].component,id:e[12].id,props:e[12].props,children:e[12].children,dynamic_ids:e[17],instance_map:e[18],root:e[1],target:e[5],theme_mode:e[10]}}),t.$on("mount",e[20]),t.$on("destroy",e[27]),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,i){const o={};i[0]&4096&&(o.has_modes=r[12].has_modes),i[0]&4096&&(o.component=r[12].component),i[0]&4096&&(o.id=r[12].id),i[0]&4096&&(o.props=r[12].props),i[0]&4096&&(o.children=r[12].children),i[0]&2&&(o.root=r[1]),i[0]&32&&(o.target=r[5]),i[0]&1024&&(o.theme_mode=r[10]),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function Bn(e){let t,n,r,i,o,a,l=e[6]&&Hn(e);return{c(){t=A("footer"),l&&l.c(),n=x(),r=A("a"),i=I(`Built with Gradio - `),o=A("img"),Ue(o.src,a=nc)||d(o,"src",a),d(o,"alt","logo"),d(o,"class","svelte-1ax1toq"),d(r,"href","https://gradio.app"),d(r,"class","built-with svelte-1ax1toq"),d(r,"target","_blank"),d(r,"rel","noreferrer"),d(t,"class","svelte-1ax1toq")},m(u,s){y(u,t,s),l&&l.m(t,null),m(t,n),m(t,r),m(r,i),m(r,o)},p(u,s){u[6]?l?l.p(u,s):(l=Hn(u),l.c(),l.m(t,n)):l&&(l.d(1),l=null)},d(u){u&&E(t),l&&l.d()}}}function Hn(e){let t,n,r,i,o,a,l,u;return{c(){t=A("button"),n=I("Use via API "),r=A("img"),o=x(),a=A("div"),a.textContent="·",Ue(r.src,i=xo)||d(r,"src",i),d(r,"alt",""),d(r,"class","svelte-1ax1toq"),d(t,"class","show-api svelte-1ax1toq"),d(a,"class","svelte-1ax1toq")},m(s,c){y(s,t,c),m(t,n),m(t,r),y(s,o,c),y(s,a,c),l||(u=Se(t,"click",e[28]),l=!0)},p:$,d(s){s&&(E(t),E(o),E(a)),l=!1,u()}}}function Ln(e){let t,n,r,i,o,a,l,u;return o=new zu({props:{instance_map:e[18],dependencies:e[2],root:e[1],app:e[11]}}),o.$on("close",e[30]),{c(){t=A("div"),n=A("div"),r=x(),i=A("div"),W(o.$$.fragment),d(n,"class","backdrop svelte-1ax1toq"),d(i,"class","api-docs-wrap svelte-1ax1toq"),d(t,"class","api-docs svelte-1ax1toq")},m(s,c){y(s,t,c),m(t,n),m(t,r),m(t,i),Z(o,i,null),a=!0,l||(u=Se(n,"click",e[29]),l=!0)},p(s,c){const 
h={};c[0]&4&&(h.dependencies=s[2]),c[0]&2&&(h.root=s[1]),c[0]&2048&&(h.app=s[11]),o.$set(h)},i(s){a||(B(o.$$.fragment,s),a=!0)},o(s){N(o.$$.fragment,s),a=!1},d(s){s&&E(t),Y(o),l=!1,u()}}}function jn(e){let t,n;return t=new tc({props:{messages:e[14]}}),t.$on("close",e[19]),{c(){W(t.$$.fragment)},m(r,i){Z(t,r,i),n=!0},p(r,i){const o={};i[0]&16384&&(o.messages=r[14]),t.$set(o)},i(r){n||(B(t.$$.fragment,r),n=!0)},o(r){N(t.$$.fragment,r),n=!1},d(r){Y(t,r)}}}function rc(e){let t,n,r,i,o,a,l,u,s,c,h=e[8]&&Cn(e),_=e[4]&&Pn(),p=e[0]&&On(e),v=e[7]&&Bn(e),b=e[13]&&e[0]&&Ln(e),g=e[14]&&jn(e);return{c(){h&&h.c(),t=de(),_&&_.c(),n=de(),r=x(),i=A("div"),o=A("div"),p&&p.c(),a=x(),v&&v.c(),l=x(),b&&b.c(),u=x(),g&&g.c(),s=de(),d(o,"class","contain"),ge(o,"flex-grow",e[9]?"1":"auto"),d(i,"class","wrap svelte-1ax1toq"),ge(i,"min-height",e[9]?"100%":"auto")},m(S,k){h&&h.m(xe.head,null),m(xe.head,t),_&&_.m(xe.head,null),m(xe.head,n),y(S,r,k),y(S,i,k),m(i,o),p&&p.m(o,null),m(i,a),v&&v.m(i,null),y(S,l,k),b&&b.m(S,k),y(S,u,k),g&&g.m(S,k),y(S,s,k),c=!0},p(S,k){S[8]?h||(h=Cn(S),h.c(),h.m(t.parentNode,t)):h&&(h.d(1),h=null),S[4]?_||(_=Pn(),_.c(),_.m(n.parentNode,n)):_&&(_.d(1),_=null),S[0]?p?(p.p(S,k),k[0]&1&&B(p,1)):(p=On(S),p.c(),B(p,1),p.m(o,null)):p&&(le(),N(p,1,1,()=>{p=null}),ae()),k[0]&512&&ge(o,"flex-grow",S[9]?"1":"auto"),S[7]?v?v.p(S,k):(v=Bn(S),v.c(),v.m(i,null)):v&&(v.d(1),v=null),k[0]&512&&ge(i,"min-height",S[9]?"100%":"auto"),S[13]&&S[0]?b?(b.p(S,k),k[0]&8193&&B(b,1)):(b=Ln(S),b.c(),B(b,1),b.m(u.parentNode,u)):b&&(le(),N(b,1,1,()=>{b=null}),ae()),S[14]?g?(g.p(S,k),k[0]&16384&&B(g,1)):(g=jn(S),g.c(),B(g,1),g.m(s.parentNode,s)):g&&(le(),N(g,1,1,()=>{g=null}),ae())},i(S){c||(B(p),B(b),B(g),c=!0)},o(S){N(p),N(b),N(g),c=!1},d(S){S&&(E(r),E(i),E(l),E(u),E(s)),h&&h.d(S),E(t),_&&_.d(S),E(n),p&&p.d(),v&&v.d(),b&&b.d(S),g&&g.d(S)}}}const ic=/^'([^]+)'$/,oc="There is a long queue of requests pending. Duplicate this Space to skip.",lc="On mobile, the connection can break if this tab is unfocused or the device sleeps, losing your position in queue.",ac="Lost connection due to leaving page. Rejoining queue...",sc=15,uc=10;function Nn(e,t,n){for(const r of n)for(const i of r[t])if(i===e)return!0;return!1}function cc(e){return Array.isArray(e)&&e.length===0||e===""||e===0||!e}function fc(e,t,n){let r;zs();let{root:i}=t,{components:o}=t,{layout:a}=t,{dependencies:l}=t,{title:u="Gradio"}=t,{analytics_enabled:s=!1}=t,{target:c}=t,{autoscroll:h}=t,{show_api:_=!0}=t,{show_footer:p=!0}=t,{control_page_title:v=!1}=t,{app_mode:b}=t,{theme_mode:g}=t,{app:S}=t,{space_id:k}=t,T=gl();bl(e,T,w=>n(26,r=w));let f={id:a.id,type:"column",props:{},has_modes:!1,instance:{},component:{}};o.push(f);const P=Object.getPrototypeOf(async function(){}).constructor;l.forEach(w=>{if(w.js){const R=w.backend_fn?w.inputs.length===1:w.outputs.length===1;try{w.frontend_fn=new P("__fn_args",`let result = await (${w.js})(...__fn_args); - return (${R} && !Array.isArray(result)) ? 
[result] : result;`)}catch(M){console.error("Could not parse custom js method."),console.error(M)}}});let L=new URLSearchParams(window.location.search).get("view")==="api";const D=w=>{n(13,L=w);let R=new URLSearchParams(window.location.search);w?R.set("view","api"):R.delete("view"),history.replaceState(null,"","?"+R.toString())},J=new Set;for(const w of o){const{id:R,props:M}=w;(Nn(R,"inputs",l)||!Nn(R,"outputs",l)&&cc(M?.value))&&J.add(R)}let C=o.reduce((w,R)=>(w[R.id]=R,w),{});async function pe(w){try{const R=await Ja[w]();return{name:w,component:R}}catch(R){throw console.error(`failed to load: ${w}`),console.error(R),R}}const j=new Set,G=new Map;async function Q(w){let R=C[w.id];const M=(await G.get(R.type)).component;R.component=M.Component,M.document&&(R.documentation=M.document(R.props)),M.modes&&M.modes.length>1&&(R.has_modes=!0),w.children&&(R.children=w.children.map(U=>C[U.id]),await Promise.all(w.children.map(U=>Q(U))))}o.forEach(async w=>{const R=pe(w.type);j.add(R),G.set(w.type,R)});let{ready:ve=!1}=t;Promise.all(Array.from(j)).then(()=>{Q(a).then(async()=>{n(0,ve=!0)}).catch(w=>{console.error(w)})});function O(w,R){const M=l[R].outputs;w?.forEach((U,_e)=>{const me=C[M[_e]];if(me.props.value_is_output=!0,typeof U=="object"&&U!==null&&U.__type__==="update")for(const[re,ie]of Object.entries(U))re!=="__type__"&&(me.props[re]=ie);else me.props.value=U}),n(12,f)}let K=new Map;function Ae(w,R,M){w?.props||(w.props={}),w.props[R]=M,n(12,f)}let Ee=[],se=[];const Ce=(w,R,M)=>({message:w,fn_index:R,type:M,id:++Uo});let Uo=-1,rt=!1;document.addEventListener("visibilitychange",function(){document.visibilityState==="hidden"&&(rt=!0)});const It=/Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent);let kt=!1,At=!1;const Ne=async(w,R=null)=>{let M=l[w];const U=T.get_status_for_fn(w);if(n(14,se=se.filter(({fn_index:re})=>re!==w)),M.cancels&&await Promise.all(M.cancels.map(async re=>{const ie=K.get(re);return ie?.cancel(),ie})),U==="pending"||U==="generating")return;let _e={fn_index:w,data:M.inputs.map(re=>C[re].props.value),event_data:M.collects_event_data?R:null};M.frontend_fn?M.frontend_fn(_e.data.concat(M.outputs.map(re=>C[re].props.value))).then(re=>{M.backend_fn?(_e.data=re,me()):O(re,w)}):M.backend_fn&&me();function me(){const re=S.submit(_e.fn_index,_e.data,_e.event_data).on("data",({data:ie,fn_index:ne})=>{O(ie,ne)}).on("status",({fn_index:ie,...ne})=>{if(T.update({...ne,status:ne.stage,progress:ne.progress_data,fn_index:ie}),!kt&&k!==null&&ne.position!==void 0&&ne.position>=2&&ne.eta!==void 0&&ne.eta>sc&&(kt=!0,n(14,se=[Ce(oc,ie,"warning"),...se])),!At&&It&&ne.eta!==void 0&&ne.eta>uc&&(At=!0,n(14,se=[Ce(lc,ie,"warning"),...se])),ne.stage==="complete"&&(l.map(async(te,Te)=>{te.trigger_after===ie&&Ne(Te)}),re.destroy()),ne.broken&&It&&rt)window.setTimeout(()=>{n(14,se=[Ce(ac,ie,"error"),...se])},0),Ne(w,R),rt=!1;else if(ne.stage==="error"){if(ne.message){const te=ne.message.replace(ic,(Te,it)=>it);n(14,se=[Ce(te,ie,"error"),...se])}l.map(async(te,Te)=>{te.trigger_after===ie&&!te.trigger_only_on_success&&Ne(Te)}),re.destroy()}}).on("log",({log:ie,fn_index:ne,level:te})=>{n(14,se=[Ce(ie,ne,te),...se])});K.set(w,re)}},Vo=(w,R)=>{if(k===null)return;const M=new URL(`https://huggingface.co/spaces/${k}/discussions/new`);w!==void 0&&w.length>0&&M.searchParams.set("title",w),M.searchParams.set("description",R),window.open(M.toString(),"_blank")};function zo(w){const R=w.detail;n(14,se=se.filter(M=>M.id!==R))}const qo=w=>w&&new 
URL(w,location.href).origin!==location.origin;let Ct=[],Pt=[];async function Xo(){await yl();for(var w=c.getElementsByTagName("a"),R=0;R{let{targets:_e,trigger:me,inputs:re,outputs:ie}=M;const ne=_e.map(te=>[te,C[te]]);_e.length===0&&!Ee[U]?.includes(-1)&&me==="load"&&ie.every(te=>C?.[te].instance)&&re.every(te=>C?.[te].instance)&&(Ne(U),Ee[U]=[-1]),ne.filter(te=>!!te&&!!te[1]).forEach(([te,{instance:Te}])=>{Ee[U]?.includes(te)||!Te||(Te?.$on(me,it=>{Ne(U,it.detail)}),Ee[U]||(Ee[U]=[]),Ee[U].push(te))})}),o.forEach(M=>{M.props.show_share_button&&!Pt.includes(M.id)&&(Pt.push(M.id),M.instance.$on("share",U=>{const{title:_e,description:me}=U.detail;Vo(_e,me)}))}),o.forEach(M=>{Ct.includes(M.id)||M.instance&&(Ct.push(M.id),M.instance.$on("error",U=>{n(14,se=[Ce(U.detail,-1,"error"),...se])}))})}function Ot(w){Ee=Ee.map(R=>R.filter(M=>M!==w))}l.forEach((w,R)=>{T.register(R,w.inputs,w.outputs)});function Wo(w){for(const M in w){let U=w[M],_e=l[U.fn_index];U.scroll_to_output=_e.scroll_to_output,U.show_progress=_e.show_progress,Ae(C[M],"loading_status",U)}const R=T.get_inputs_to_update();for(const[M,U]of R)Ae(C[M],"pending",U==="pending")}const Zo=({detail:w})=>Ot(w),Yo=()=>{D(!L)},Jo=()=>{D(!1)},Qo=()=>{D(!1)};return e.$$set=w=>{"root"in w&&n(1,i=w.root),"components"in w&&n(22,o=w.components),"layout"in w&&n(23,a=w.layout),"dependencies"in w&&n(2,l=w.dependencies),"title"in w&&n(3,u=w.title),"analytics_enabled"in w&&n(4,s=w.analytics_enabled),"target"in w&&n(5,c=w.target),"autoscroll"in w&&n(24,h=w.autoscroll),"show_api"in w&&n(6,_=w.show_api),"show_footer"in w&&n(7,p=w.show_footer),"control_page_title"in w&&n(8,v=w.control_page_title),"app_mode"in w&&n(9,b=w.app_mode),"theme_mode"in w&&n(10,g=w.theme_mode),"app"in w&&n(11,S=w.app),"space_id"in w&&n(25,k=w.space_id),"ready"in w&&n(0,ve=w.ready)},e.$$.update=()=>{e.$$.dirty[0]&16777216&&vl.update(w=>({...w,autoscroll:h})),e.$$.dirty[0]&67108864&&Wo(r)},[ve,i,l,u,s,c,_,p,v,b,g,S,f,L,se,T,D,J,C,zo,Xo,Ot,o,a,h,k,r,Zo,Yo,Jo,Qo]}class _c extends ue{constructor(t){super(),ce(this,t,fc,rc,fe,{root:1,components:22,layout:23,dependencies:2,title:3,analytics_enabled:4,target:5,autoscroll:24,show_api:6,show_footer:7,control_page_title:8,app_mode:9,theme_mode:10,app:11,space_id:25,ready:0},null,[-1,-1])}}const gc=Object.freeze(Object.defineProperty({__proto__:null,default:_c},Symbol.toStringTag,{value:"Module"}));export{gc as B,dc as X}; -//# sourceMappingURL=Blocks-c9e1499d.js.map diff --git a/spaces/DaCuteRaccoon/dalle-mini/html2canvas.js b/spaces/DaCuteRaccoon/dalle-mini/html2canvas.js deleted file mode 100644 index 96e2dc5707b1a584ff7b3b583aea7c6c18d4ea76..0000000000000000000000000000000000000000 --- a/spaces/DaCuteRaccoon/dalle-mini/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. 
- - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? 
y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
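// --- Editor's illustration (hypothetical sketch, not part of the original deleted file above) ---
// The hash-token branches above expand 3- and 4-digit hex shorthands by doubling each digit
// and hand the channels to pack(), which stores them as 0xRRGGBBAA in an unsigned 32-bit int.
// A minimal standalone sketch reusing the same bit layout as the pack() defined just below:
var packSketch = function (r, g, b, a) {
    return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0;
};
// '#abc' expands to '#aabbcc':
var rSketch = parseInt('aa', 16); // 170
var gSketch = parseInt('bb', 16); // 187
var bSketch = parseInt('cc', 16); // 204
packSketch(rSketch, gSketch, bSketch, 1); // 0xaabbccff
// --- end of editor's illustration ---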
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
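// --- Editor's illustration (hypothetical sketch, not part of the original deleted file above) ---
// For the FARTHEST_CORNER case handled next, a circle's radius is the distance from the
// gradient center to the corner of the box lying farthest away. A small worked example,
// assuming a 100x50 box with its center at (30, 20) and the distance() helper defined above:
var distanceSketch = function (a, b) { return Math.sqrt(a * a + b * b); };
var farthestCornerRadius = Math.max(
    distanceSketch(30, 20),            // to corner (0, 0)   ~ 36.06
    distanceSketch(30, 20 - 50),       // to corner (0, 50)  ~ 42.43
    distanceSketch(30 - 100, 20),      // to corner (100, 0) ~ 72.80
    distanceSketch(30 - 100, 20 - 50)  // to corner (100, 50) ~ 76.16
); // ~ 76.16, matching rx = ry = Math.max(...) in the CIRCLE branch below
// --- end of editor's illustration ---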
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; }; - var isOLElement = function (node) { return node.tagName === 'OL'; }; - var isInputElement = function (node) { return node.tagName === 'INPUT'; }; - var isHTMLElement = function (node) { return node.tagName === 'HTML'; }; - var isSVGElement = function (node) { return node.tagName === 'svg'; }; - var isBodyElement = function (node) { return node.tagName === 'BODY'; }; - var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; }; - var isVideoElement = function (node) { return node.tagName === 'VIDEO'; }; - var isImageElement = function (node) { return node.tagName === 'IMG'; }; - var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; }; - var isStyleElement = function (node) { return node.tagName === 'STYLE'; }; - var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; }; - var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; }; - var isSelectElement = function (node) { return node.tagName === 'SELECT'; }; - var isSlotElement = function (node) { return node.tagName === 'SLOT'; }; - // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name - var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; }; - - var CounterState = /** @class */ (function () { - function CounterState() { - this.counters = {}; - } - CounterState.prototype.getCounterValue = function (name) { - var counter = this.counters[name]; - if (counter && counter.length) { - return counter[counter.length - 1]; - } - return 1; - }; - CounterState.prototype.getCounterValues = function (name) { - var counter = this.counters[name]; - return counter ? counter : []; - }; - CounterState.prototype.pop = function (counters) { - var _this = this; - counters.forEach(function (counter) { return _this.counters[counter].pop(); }); - }; - CounterState.prototype.parse = function (style) { - var _this = this; - var counterIncrement = style.counterIncrement; - var counterReset = style.counterReset; - var canReset = true; - if (counterIncrement !== null) { - counterIncrement.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - if (counter && entry.increment !== 0) { - canReset = false; - if (!counter.length) { - counter.push(1); - } - counter[Math.max(0, counter.length - 1)] += entry.increment; - } - }); - } - var counterNames = []; - if (canReset) { - counterReset.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - counterNames.push(entry.counter); - if (!counter) { - counter = _this.counters[entry.counter] = []; - } - counter.push(entry.reset); - }); - } - return counterNames; - }; - return CounterState; - }()); - var ROMAN_UPPER = { - integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1], - values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'] - }; - var ARMENIAN = { - integers: [ - 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70, - 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'Ք', - 'Փ', - 'Ւ', - 'Ց', - 'Ր', - 'Տ', - 'Վ', - 'Ս', - 'Ռ', - 'Ջ', - 'Պ', - 'Չ', - 'Ո', - 'Շ', - 'Ն', - 'Յ', - 'Մ', - 'Ճ', - 'Ղ', - 'Ձ', - 'Հ', - 'Կ', - 'Ծ', - 'Խ', - 'Լ', - 'Ի', - 'Ժ', - 'Թ', - 'Ը', - 'Է', - 'Զ', - 'Ե', - 'Դ', - 'Գ', - 'Բ', - 'Ա' - ] - }; - var HEBREW = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, - 19, 18, 17, 16, 15, 10, 9, 8, 7, 
6, 5, 4, 3, 2, 1 - ], - values: [ - 'י׳', - 'ט׳', - 'ח׳', - 'ז׳', - 'ו׳', - 'ה׳', - 'ד׳', - 'ג׳', - 'ב׳', - 'א׳', - 'ת', - 'ש', - 'ר', - 'ק', - 'צ', - 'פ', - 'ע', - 'ס', - 'נ', - 'מ', - 'ל', - 'כ', - 'יט', - 'יח', - 'יז', - 'טז', - 'טו', - 'י', - 'ט', - 'ח', - 'ז', - 'ו', - 'ה', - 'ד', - 'ג', - 'ב', - 'א' - ] - }; - var GEORGIAN = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, - 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'ჵ', - 'ჰ', - 'ჯ', - 'ჴ', - 'ხ', - 'ჭ', - 'წ', - 'ძ', - 'ც', - 'ჩ', - 'შ', - 'ყ', - 'ღ', - 'ქ', - 'ფ', - 'ჳ', - 'ტ', - 'ს', - 'რ', - 'ჟ', - 'პ', - 'ო', - 'ჲ', - 'ნ', - 'მ', - 'ლ', - 'კ', - 'ი', - 'თ', - 'ჱ', - 'ზ', - 'ვ', - 'ე', - 'დ', - 'გ', - 'ბ', - 'ა' - ] - }; - var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) { - if (value < min || value > max) { - return createCounterText(value, fallback, suffix.length > 0); - } - return (symbols.integers.reduce(function (string, integer, index) { - while (value >= integer) { - value -= integer; - string += symbols.values[index]; - } - return string; - }, '') + suffix); - }; - var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) { - var string = ''; - do { - if (!isNumeric) { - value--; - } - string = resolver(value) + string; - value /= codePointRangeLength; - } while (value * codePointRangeLength >= codePointRangeLength); - return string; - }; - var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) { - var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1; - return ((value < 0 ? '-' : '') + - (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) { - return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart); - }) + - suffix)); - }; - var createCounterStyleFromSymbols = function (value, symbols, suffix) { - if (suffix === void 0) { suffix = '. '; } - var codePointRangeLength = symbols.length; - return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix); - }; - var CJK_ZEROS = 1 << 0; - var CJK_TEN_COEFFICIENTS = 1 << 1; - var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2; - var CJK_HUNDRED_COEFFICIENTS = 1 << 3; - var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) { - if (value < -9999 || value > 9999) { - return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0); - } - var tmp = Math.abs(value); - var string = suffix; - if (tmp === 0) { - return numbers[0] + string; - } - for (var digit = 0; tmp > 0 && digit <= 4; digit++) { - var coefficient = tmp % 10; - if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') { - string = numbers[coefficient] + string; - } - else if (coefficient > 1 || - (coefficient === 1 && digit === 0) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) || - (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) { - string = numbers[coefficient] + (digit > 0 ? 
multipliers[digit - 1] : '') + string; - } - else if (coefficient === 1 && digit > 0) { - string = multipliers[digit - 1] + string; - } - tmp = Math.floor(tmp / 10); - } - return (value < 0 ? negativeSign : '') + string; - }; - var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬'; - var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬'; - var JAPANESE_NEGATIVE = 'マイナス'; - var KOREAN_NEGATIVE = '마이너스'; - var createCounterText = function (value, type, appendSuffix) { - var defaultSuffix = appendSuffix ? '. ' : ''; - var cjkSuffix = appendSuffix ? '、' : ''; - var koreanSuffix = appendSuffix ? ', ' : ''; - var spaceSuffix = appendSuffix ? ' ' : ''; - switch (type) { - case 0 /* DISC */: - return '•' + spaceSuffix; - case 1 /* CIRCLE */: - return '◦' + spaceSuffix; - case 2 /* SQUARE */: - return '◾' + spaceSuffix; - case 5 /* DECIMAL_LEADING_ZERO */: - var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - return string.length < 4 ? "0" + string : string; - case 4 /* CJK_DECIMAL */: - return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix); - case 6 /* LOWER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 7 /* UPPER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix); - case 8 /* LOWER_GREEK */: - return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix); - case 9 /* LOWER_ALPHA */: - return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix); - case 10 /* UPPER_ALPHA */: - return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix); - case 11 /* ARABIC_INDIC */: - return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix); - case 12 /* ARMENIAN */: - case 49 /* UPPER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix); - case 35 /* LOWER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 13 /* BENGALI */: - return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix); - case 14 /* CAMBODIAN */: - case 30 /* KHMER */: - return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix); - case 15 /* CJK_EARTHLY_BRANCH */: - return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix); - case 16 /* CJK_HEAVENLY_STEM */: - return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix); - case 17 /* CJK_IDEOGRAPHIC */: - case 48 /* TRAD_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 47 /* TRAD_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 42 /* SIMP_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 41 /* SIMP_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 26 /* JAPANESE_INFORMAL */: - return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0); - case 25 /* JAPANESE_FORMAL */: - return createCJKCounter(value, 
'零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 31 /* KOREAN_HANGUL_FORMAL */: - return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 33 /* KOREAN_HANJA_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0); - case 32 /* KOREAN_HANJA_FORMAL */: - return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 18 /* DEVANAGARI */: - return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix); - case 20 /* GEORGIAN */: - return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix); - case 21 /* GUJARATI */: - return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix); - case 22 /* GURMUKHI */: - return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix); - case 22 /* HEBREW */: - return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix); - case 23 /* HIRAGANA */: - return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん'); - case 24 /* HIRAGANA_IROHA */: - return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす'); - case 27 /* KANNADA */: - return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix); - case 28 /* KATAKANA */: - return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix); - case 29 /* KATAKANA_IROHA */: - return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix); - case 34 /* LAO */: - return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix); - case 37 /* MONGOLIAN */: - return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix); - case 38 /* MYANMAR */: - return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix); - case 39 /* ORIYA */: - return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix); - case 40 /* PERSIAN */: - return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix); - case 43 /* TAMIL */: - return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix); - case 44 /* TELUGU */: - return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix); - case 45 /* THAI */: - return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix); - case 46 /* TIBETAN */: - return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix); - case 3 /* DECIMAL */: - default: - return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - } - }; - - var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore'; - var DocumentCloner = /** @class */ (function () { - function DocumentCloner(context, element, options) { - this.context = context; - this.options = options; - this.scrolledElements = []; - this.referenceElement = element; - this.counters = new CounterState(); - this.quoteDepth = 0; - if (!element.ownerDocument) { - throw new Error('Cloned element does not have an owner document'); - } - this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false); - } - DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) { - var _this = this; - var iframe = createIFrameContainer(ownerDocument, 
windowSize); - if (!iframe.contentWindow) { - return Promise.reject("Unable to find iframe window"); - } - var scrollX = ownerDocument.defaultView.pageXOffset; - var scrollY = ownerDocument.defaultView.pageYOffset; - var cloneWindow = iframe.contentWindow; - var documentClone = cloneWindow.document; - /* Chrome doesn't detect relative background-images assigned in inline ' - - - def add_row(self, a, b): - tmp = """ -
          -
          REPLACE_A
          -
          REPLACE_B
          -
          - """ - from toolbox import markdown_convertion - tmp = tmp.replace('REPLACE_A', markdown_convertion(a)) - tmp = tmp.replace('REPLACE_B', markdown_convertion(b)) - self.html_string += tmp - - - def save_file(self, file_name): - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write(self.html_string.encode('utf-8', 'ignore').decode()) - diff --git a/spaces/Makiing/coolb-in-gtest/src/components/turn-counter.tsx b/spaces/Makiing/coolb-in-gtest/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
          -
          - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
          -
          -
          - ) -} diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/__init__.py deleted file mode 100644 index 2051b85f7e59bff7bdbaa131849ce8cd31f059a4..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50-oclip_fpn_160e_icdar2015.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50-oclip_fpn_160e_icdar2015.py deleted file mode 100644 index 57bf9b6a8d8383645233729596a5cf419621e281..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50-oclip_fpn_160e_icdar2015.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = [ - 'mask-rcnn_resnet50_fpn_160e_icdar2015.py', -] - -load_from = None - -_base_.model.cfg.backbone = dict( - _scope_='mmocr', - type='CLIPResNet', - init_cfg=dict( - type='Pretrained', - checkpoint='https://download.openmmlab.com/' - 'mmocr/backbone/resnet50-oclip-7ba0c533.pth')) - -_base_.optim_wrapper.optimizer.lr = 0.02 diff --git a/spaces/NN520/AI/Dockerfile b/spaces/NN520/AI/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/Nephele/bert-vits2-multi-voice/text/chinese.py b/spaces/Nephele/bert-vits2-multi-voice/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/Nephele/bert-vits2-multi-voice/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
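    # A minimal sketch of the try/except hinted at in the comment above (an
    # illustration, not part of the original file): tolerate an occasional
    # word2ph/text length mismatch instead of crashing on the asserts.
    #
    #     try:
    #         assert sum(word2ph) == len(phones)
    #         assert len(word2ph) == len(text)
    #     except AssertionError:
    #         logging.warning("g2p alignment mismatch, skipping sentence: %s", text)
    #         return [], [], []
    #
    # This assumes `import logging` at module level and that callers can handle
    # empty outputs; neither is guaranteed by the original code.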
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/NimaBoscarino/climategan/utils_scripts/compare_maskers.py b/spaces/NimaBoscarino/climategan/utils_scripts/compare_maskers.py deleted file mode 100644 index 9a07f2e8e9298db64c3292b663cad8fc8deeb168..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/utils_scripts/compare_maskers.py +++ /dev/null @@ -1,344 +0,0 @@ -import sys -from argparse import ArgumentParser -from pathlib import Path -from comet_ml import Experiment - -import numpy as np -import torch -import yaml -from PIL import Image -from skimage.color import gray2rgb -from skimage.io import imread -from skimage.transform import resize -from skimage.util import img_as_ubyte -from tqdm import tqdm - -sys.path.append(str(Path(__file__).resolve().parent.parent)) - -import climategan - -GROUND_MODEL = "/miniscratch/_groups/ccai/experiments/runs/ablation-v1/out--ground" - - -def uint8(array): - return array.astype(np.uint8) - - -def crop_and_resize(image_path, label_path): - """ - Resizes an image so that it keeps the aspect ratio and the smallest dimensions - is 640, then crops this resized image in its center so that the output is 640x640 - without aspect ratio distortion - - Args: - image_path (Path or str): Path to an image - label_path (Path or str): Path to the image's associated label - - Returns: - tuple((np.ndarray, np.ndarray)): (new image, new label) - """ - - img = imread(image_path) - lab = imread(label_path) - - # if img.shape[-1] == 4: - # img = uint8(rgba2rgb(img) * 255) - - # TODO: remove (debug) - if img.shape[:2] != lab.shape[:2]: - print( - "\nWARNING: shape mismatch: im -> {}, lab -> {}".format( - image_path.name, label_path.name - ) - ) - # breakpoint() - - # resize keeping aspect ratio: smallest dim is 640 - h, w = img.shape[:2] - if h < w: - size = (640, int(640 * w / h)) - else: - size = (int(640 * h / w), 640) - - r_img = resize(img, size, preserve_range=True, anti_aliasing=True) - r_img = uint8(r_img) - - r_lab = resize(lab, size, preserve_range=True, anti_aliasing=False, order=0) - r_lab = uint8(r_lab) - - # crop in the center - H, W = r_img.shape[:2] - - top = (H - 640) // 2 - left = (W - 640) // 2 - - rc_img = r_img[top : top + 640, left : left + 640, :] - rc_lab = ( - r_lab[top : top + 640, left : left + 640, :] - if r_lab.ndim == 3 - else r_lab[top : top + 640, left : left + 640] - ) - - return rc_img, rc_lab - - -def load_ground(ground_output_path, ref_image_path): - gop = Path(ground_output_path) - rip = Path(ref_image_path) - - ground_paths = list((gop / "eval-metrics" / "pred").glob(f"{rip.stem}.jpg")) + list( - (gop / "eval-metrics" / "pred").glob(f"{rip.stem}.png") - ) - if len(ground_paths) == 0: - raise ValueError( - f"Could not find a ground match in {str(gop)} for image {str(rip)}" - ) - elif len(ground_paths) > 1: - raise ValueError( - f"Found more than 1 ground match in {str(gop)} for image {str(rip)}:" - + f" {list(map(str, ground_paths))}" - ) - ground_path = ground_paths[0] - _, ground = crop_and_resize(rip, ground_path) - ground = (ground > 0).astype(np.float32) - return torch.from_numpy(ground).unsqueeze(0).unsqueeze(0).cuda() - - -def parse_args(): - parser = ArgumentParser() - parser.add_argument("-y", "--yaml", help="Path to a list of models") - parser.add_argument( - "--disable_loading", - action="store_true", - default=False, - help="Disable loading of existing inferences", - ) - parser.add_argument( - "-t", "--tags", nargs="*", help="Comet.ml tags", default=[], type=str - ) - 
parser.add_argument( - "--tasks", - nargs="*", - help="Comet.ml tags", - default=["x", "d", "s", "m", "mx", "p"], - type=str, - ) - args = parser.parse_args() - - print("Received args:") - print(vars(args)) - - return args - - -def load_images_and_labels( - path="/miniscratch/_groups/ccai/data/omnigan/masker-test-set", -): - p = Path(path) - ims_path = p / "imgs" - lab_path = p / "labels" - - ims = sorted(climategan.utils.find_images(ims_path), key=lambda x: x.name) - labs = sorted( - climategan.utils.find_images(lab_path), - key=lambda x: x.name.replace("_labeled.", "."), - ) - - xs = climategan.transforms.PrepareInference()(ims) - ys = climategan.transforms.PrepareInference(is_label=True)(labs) - - return xs, ys, ims, labs - - -def load_inferences(inf_path, im_paths): - try: - assert inf_path.exists() - assert sorted([i.stem for i in im_paths]) == sorted( - [i.stem for i in inf_path.glob("*.pt")] - ) - return [torch.load(str(i)) for i in tqdm(list(inf_path.glob("*.pt")))] - except Exception as e: - print() - print(e) - print("Aborting Loading") - print() - return None - - -def get_or_load_inferences( - m_path, device, xs, is_ground, im_paths, ground_model, try_load=True -): - inf_path = Path(m_path) / "inferences" - if try_load: - print("Trying to load existing inferences:") - outputs = load_inferences(inf_path, im_paths) - if outputs is not None: - print("Successfully loaded existing inferences") - return outputs - - trainer = climategan.trainer.Trainer.resume_from_path( - m_path if not is_ground else ground_model, - inference=True, - new_exp=None, - device=device, - ) - - inf_path.mkdir(exist_ok=True) - outputs = [] - for i, x in enumerate(tqdm(xs)): - x = x.to(trainer.device) - if not is_ground: - out = trainer.G.decode(x=x) - else: - out = {"m": load_ground(GROUND_MODEL, im_paths[i])} - out["p"] = trainer.G.paint(out["m"] > 0.5, x) - out["x"] = x - inference = {k: v.cpu() for k, v in out.items()} - outputs.append(inference) - torch.save(inference, inf_path / f"{im_paths[i].stem}.pt") - print() - - return outputs - - -def numpify(outputs): - nps = [] - print("Numpifying...") - for o in tqdm(outputs): - x = (o["x"][0].permute(1, 2, 0).numpy() + 1) / 2 - m = o["m"] - m = (m[0, 0, :, :].numpy() > 0.5).astype(np.uint8) - p = (o["p"][0].permute(1, 2, 0).numpy() + 1) / 2 - data = {"m": m, "p": p, "x": x} - if "s" in o: - s = climategan.data.decode_segmap_merged_labels(o["s"], "r", False) / 255.0 - data["s"] = s[0].permute(1, 2, 0).numpy() - if "d" in o: - d = climategan.tutils.normalize_tensor(o["d"]).squeeze().numpy() - data["d"] = d - nps.append({k: img_as_ubyte(v) for k, v in data.items()}) - return nps - - -def concat_npy_for_model(data, tasks): - assert "m" in data - assert "x" in data - assert "p" in data - - x = mask = depth = seg = painted = masked = None - - x = data["x"] - painted = data["p"] - mask = (gray2rgb(data["m"]) * 255).astype(np.uint8) - painted = data["p"] - masked = (1 - gray2rgb(data["m"])) * x - - concats = [] - - if "d" in data: - depth = img_as_ubyte( - gray2rgb( - resize(data["d"], data["x"].shape[:2], anti_aliasing=True, order=1) - ) - ) - else: - depth = np.ones_like(data["x"]) * 255 - - if "s" in data: - seg = img_as_ubyte( - resize(data["s"], data["x"].shape[:2], anti_aliasing=False, order=0) - ) - else: - seg = np.ones_like(data["x"]) * 255 - - for t in tasks: - if t == "x": - concats.append(x) - if t == "m": - concats.append(mask) - elif t == "mx": - concats.append(masked) - elif t == "d": - concats.append(depth) - elif t == "s": - concats.append(seg) - elif 
t == "p": - concats.append(painted) - - row = np.concatenate(concats, axis=1) - - return row - - -if __name__ == "__main__": - args = parse_args() - - with open(args.yaml, "r") as f: - maskers = yaml.safe_load(f) - if "models" in maskers: - maskers = maskers["models"] - - load = not args.disable_loading - tags = args.tags - tasks = args.tasks - - ground_model = None - for m in maskers: - if "ground" not in maskers: - ground_model = m - break - if ground_model is None: - raise ValueError("Could not find a non-ground model to get a painter") - - device = torch.device("cpu") - torch.set_grad_enabled(False) - - xs, ys, im_paths, lab_paths = load_images_and_labels() - - np_outs = {} - names = [] - - for m_path in maskers: - - opt_path = Path(m_path) / "opts.yaml" - with opt_path.open("r") as f: - opt = yaml.safe_load(f) - - name = ( - ", ".join( - [ - t - for t in sorted(opt["comet"]["tags"]) - if "branch" not in t and "ablation" not in t and "trash" not in t - ] - ) - if "--ground" not in m_path - else "ground" - ) - names.append(name) - - is_ground = name == "ground" - - print("#" * 100) - print("\n>>> Processing", name) - print() - - outputs = get_or_load_inferences( - m_path, device, xs, is_ground, im_paths, ground_model, load - ) - nps = numpify(outputs) - - np_outs[name] = nps - - exp = Experiment(project_name="climategan-inferences", display_summary_level=0) - exp.log_parameter("names", names) - exp.add_tags(tags) - - for i in tqdm(range(len(xs))): - all_models_for_image = [] - for name in names: - xpmds = concat_npy_for_model(np_outs[name][i], tasks) - all_models_for_image.append(xpmds) - full_im = np.concatenate(all_models_for_image, axis=0) - pil_im = Image.fromarray(full_im) - exp.log_image(pil_im, name=im_paths[i].stem.replace(".", "_"), step=i) diff --git a/spaces/OAOA/DifFace/facelib/utils/face_utils.py b/spaces/OAOA/DifFace/facelib/utils/face_utils.py deleted file mode 100644 index f1474a2a4419b6b62fab8a919ef805b802556464..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/facelib/utils/face_utils.py +++ /dev/null @@ -1,248 +0,0 @@ -import cv2 -import numpy as np -import torch - - -def compute_increased_bbox(bbox, increase_area, preserve_aspect=True): - left, top, right, bot = bbox - width = right - left - height = bot - top - - if preserve_aspect: - width_increase = max(increase_area, ((1 + 2 * increase_area) * height - width) / (2 * width)) - height_increase = max(increase_area, ((1 + 2 * increase_area) * width - height) / (2 * height)) - else: - width_increase = height_increase = increase_area - left = int(left - width_increase * width) - top = int(top - height_increase * height) - right = int(right + width_increase * width) - bot = int(bot + height_increase * height) - return (left, top, right, bot) - - -def get_valid_bboxes(bboxes, h, w): - left = max(bboxes[0], 0) - top = max(bboxes[1], 0) - right = min(bboxes[2], w) - bottom = min(bboxes[3], h) - return (left, top, right, bottom) - - -def align_crop_face_landmarks(img, - landmarks, - output_size, - transform_size=None, - enable_padding=True, - return_inverse_affine=False, - shrink_ratio=(1, 1)): - """Align and crop face with landmarks. - - The output_size and transform_size are based on width. The height is - adjusted based on shrink_ratio_h/shring_ration_w. - - Modified from: - https://github.com/NVlabs/ffhq-dataset/blob/master/download_ffhq.py - - Args: - img (Numpy array): Input image. - landmarks (Numpy array): 5 or 68 or 98 landmarks. - output_size (int): Output face size. 
- transform_size (ing): Transform size. Usually the four time of - output_size. - enable_padding (float): Default: True. - shrink_ratio (float | tuple[float] | list[float]): Shring the whole - face for height and width (crop larger area). Default: (1, 1). - - Returns: - (Numpy array): Cropped face. - """ - lm_type = 'retinaface_5' # Options: dlib_5, retinaface_5 - - if isinstance(shrink_ratio, (float, int)): - shrink_ratio = (shrink_ratio, shrink_ratio) - if transform_size is None: - transform_size = output_size * 4 - - # Parse landmarks - lm = np.array(landmarks) - if lm.shape[0] == 5 and lm_type == 'retinaface_5': - eye_left = lm[0] - eye_right = lm[1] - mouth_avg = (lm[3] + lm[4]) * 0.5 - elif lm.shape[0] == 5 and lm_type == 'dlib_5': - lm_eye_left = lm[2:4] - lm_eye_right = lm[0:2] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = lm[4] - elif lm.shape[0] == 68: - lm_eye_left = lm[36:42] - lm_eye_right = lm[42:48] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[48] + lm[54]) * 0.5 - elif lm.shape[0] == 98: - lm_eye_left = lm[60:68] - lm_eye_right = lm[68:76] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[76] + lm[82]) * 0.5 - - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - eye_to_mouth = mouth_avg - eye_avg - - # Get the oriented crop rectangle - # x: half width of the oriented crop rectangle - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - # - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise - # norm with the hypotenuse: get the direction - x /= np.hypot(*x) # get the hypotenuse of a right triangle - rect_scale = 1 # TODO: you can edit it to get larger rect - x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale) - # y: half height of the oriented crop rectangle - y = np.flipud(x) * [-1, 1] - - x *= shrink_ratio[1] # width - y *= shrink_ratio[0] # height - - # c: center - c = eye_avg + eye_to_mouth * 0.1 - # quad: (left_top, left_bottom, right_bottom, right_top) - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - # qsize: side length of the square - qsize = np.hypot(*x) * 2 - - quad_ori = np.copy(quad) - # Shrink, for large face - # TODO: do we really need shrink - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - h, w = img.shape[0:2] - rsize = (int(np.rint(float(w) / shrink)), int(np.rint(float(h) / shrink))) - img = cv2.resize(img, rsize, interpolation=cv2.INTER_AREA) - quad /= shrink - qsize /= shrink - - # Crop - h, w = img.shape[0:2] - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, w), min(crop[3] + border, h)) - if crop[2] - crop[0] < w or crop[3] - crop[1] < h: - img = img[crop[1]:crop[3], crop[0]:crop[2], :] - quad -= crop[0:2] - - # Pad - # pad: (width_left, height_top, width_right, height_bottom) - h, w = img.shape[0:2] - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - w + border, 0), max(pad[3] - h + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(img, 
((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w = img.shape[0:2] - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = int(qsize * 0.02) - if blur % 2 == 0: - blur += 1 - blur_img = cv2.boxFilter(img, 0, ksize=(blur, blur)) - - img = img.astype('float32') - img += (blur_img - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.clip(img, 0, 255) # float32, [0, 255] - quad += pad[:2] - - # Transform use cv2 - h_ratio = shrink_ratio[0] / shrink_ratio[1] - dst_h, dst_w = int(transform_size * h_ratio), transform_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D(quad, template, method=cv2.LMEDS)[0] - cropped_face = cv2.warpAffine( - img, affine_matrix, (dst_w, dst_h), borderMode=cv2.BORDER_CONSTANT, borderValue=(135, 133, 132)) # gray - - if output_size < transform_size: - cropped_face = cv2.resize( - cropped_face, (output_size, int(output_size * h_ratio)), interpolation=cv2.INTER_LINEAR) - - if return_inverse_affine: - dst_h, dst_w = int(output_size * h_ratio), output_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D( - quad_ori, np.array([[0, 0], [0, output_size], [dst_w, dst_h], [dst_w, 0]]), method=cv2.LMEDS)[0] - inverse_affine = cv2.invertAffineTransform(affine_matrix) - else: - inverse_affine = None - return cropped_face, inverse_affine - - -def paste_face_back(img, face, inverse_affine): - h, w = img.shape[0:2] - face_h, face_w = face.shape[0:2] - inv_restored = cv2.warpAffine(face, inverse_affine, (w, h)) - mask = np.ones((face_h, face_w, 3), dtype=np.float32) - inv_mask = cv2.warpAffine(mask, inverse_affine, (w, h)) - # remove the black borders - inv_mask_erosion = cv2.erode(inv_mask, np.ones((2, 2), np.uint8)) - inv_restored_remove_border = inv_mask_erosion * inv_restored - total_face_area = np.sum(inv_mask_erosion) // 3 - # compute the fusion edge based on the area of face - w_edge = int(total_face_area**0.5) // 20 - erosion_radius = w_edge * 2 - inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - blur_size = w_edge * 2 - inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - img = inv_soft_mask * inv_restored_remove_border + (1 - inv_soft_mask) * img - # float32, [0, 255] - return img - - -if __name__ == '__main__': - import os - - from facelib.detection import init_detection_model - from facelib.utils.face_restoration_helper import get_largest_face - - img_path = '/home/wxt/datasets/ffhq/ffhq_wild/00009.png' - img_name = os.splitext(os.path.basename(img_path))[0] - - # initialize model - det_net = init_detection_model('retinaface_resnet50', half=False) - img_ori = cv2.imread(img_path) - h, w = img_ori.shape[0:2] - # if larger than 800, scale it - scale = max(h / 800, w / 800) - if scale > 1: - img = cv2.resize(img_ori, (int(w / scale), int(h / scale)), interpolation=cv2.INTER_LINEAR) - - with torch.no_grad(): - bboxes = det_net.detect_faces(img, 0.97) - if 
scale > 1: - bboxes *= scale # the score is incorrect - bboxes = get_largest_face(bboxes, h, w)[0] - - landmarks = np.array([[bboxes[i], bboxes[i + 1]] for i in range(5, 15, 2)]) - - cropped_face, inverse_affine = align_crop_face_landmarks( - img_ori, - landmarks, - output_size=512, - transform_size=None, - enable_padding=True, - return_inverse_affine=True, - shrink_ratio=(1, 1)) - - cv2.imwrite(f'tmp/{img_name}_cropeed_face.png', cropped_face) - img = paste_face_back(img_ori, cropped_face, inverse_affine) - cv2.imwrite(f'tmp/{img_name}_back.png', img) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py deleted file mode 100644 index 61617a1739ce196abba1e9a6f9ad9e9f4b37b9c1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py +++ /dev/null @@ -1,363 +0,0 @@ -import math -import os -import json -import numpy as np -import torch -import torchaudio.compliance.kaldi as kaldi -import yaml -from fairseq import checkpoint_utils, tasks -from fairseq.file_io import PathManager - -try: - from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS - from simuleval.agents import SpeechAgent - from simuleval.states import ListEntry, SpeechStates -except ImportError: - print("Please install simuleval 'pip install simuleval'") - -SHIFT_SIZE = 10 -WINDOW_SIZE = 25 -SAMPLE_RATE = 16000 -FEATURE_DIM = 80 -BOW_PREFIX = "\u2581" - - -class OnlineFeatureExtractor: - """ - Extract speech feature on the fly. - """ - - def __init__(self, args): - self.shift_size = args.shift_size - self.window_size = args.window_size - assert self.window_size >= self.shift_size - - self.sample_rate = args.sample_rate - self.feature_dim = args.feature_dim - self.num_samples_per_shift = int(self.shift_size * self.sample_rate / 1000) - self.num_samples_per_window = int(self.window_size * self.sample_rate / 1000) - self.len_ms_to_samples = lambda x: x * self.sample_rate / 1000 - self.previous_residual_samples = [] - self.global_cmvn = args.global_cmvn - - def clear_cache(self): - self.previous_residual_samples = [] - - def __call__(self, new_samples): - samples = self.previous_residual_samples + new_samples - if len(samples) < self.num_samples_per_window: - self.previous_residual_samples = samples - return - - # num_frames is the number of frames from the new segment - num_frames = math.floor( - (len(samples) - self.len_ms_to_samples(self.window_size - self.shift_size)) - / self.num_samples_per_shift - ) - - # the number of frames used for feature extraction - # including some part of thte previous segment - effective_num_samples = int( - num_frames * self.len_ms_to_samples(self.shift_size) - + self.len_ms_to_samples(self.window_size - self.shift_size) - ) - - input_samples = samples[:effective_num_samples] - self.previous_residual_samples = samples[ - num_frames * self.num_samples_per_shift: - ] - - torch.manual_seed(1) - output = kaldi.fbank( - torch.FloatTensor(input_samples).unsqueeze(0), - num_mel_bins=self.feature_dim, - frame_length=self.window_size, - frame_shift=self.shift_size, - ).numpy() - - output = self.transform(output) - - return torch.from_numpy(output) - - def transform(self, input): - if self.global_cmvn is None: - return input - - mean = self.global_cmvn["mean"] - std = 
self.global_cmvn["std"] - - x = np.subtract(input, mean) - x = np.divide(x, std) - return x - - -class TensorListEntry(ListEntry): - """ - Data structure to store a list of tensor. - """ - - def append(self, value): - - if len(self.value) == 0: - self.value = value - return - - self.value = torch.cat([self.value] + [value], dim=0) - - def info(self): - return { - "type": str(self.new_value_type), - "length": self.__len__(), - "value": "" if type(self.value) is list else self.value.size(), - } - - -class FairseqSimulSTAgent(SpeechAgent): - - speech_segment_size = 40 # in ms, 4 pooling ratio * 10 ms step size - - def __init__(self, args): - super().__init__(args) - - self.eos = DEFAULT_EOS - - self.gpu = getattr(args, "gpu", False) - - self.args = args - - self.load_model_vocab(args) - - if getattr( - self.model.decoder.layers[0].encoder_attn, - 'pre_decision_ratio', - None - ) is not None: - self.speech_segment_size *= ( - self.model.decoder.layers[0].encoder_attn.pre_decision_ratio - ) - - args.global_cmvn = None - if args.config: - with open(os.path.join(args.data_bin, args.config), "r") as f: - config = yaml.load(f, Loader=yaml.BaseLoader) - - if "global_cmvn" in config: - args.global_cmvn = np.load(config["global_cmvn"]["stats_npz_path"]) - - if args.global_stats: - with PathManager.open(args.global_stats, "r") as f: - global_cmvn = json.loads(f.read()) - self.global_cmvn = {"mean": global_cmvn["mean"], "std": global_cmvn["stddev"]} - - self.feature_extractor = OnlineFeatureExtractor(args) - - self.max_len = args.max_len - - self.force_finish = args.force_finish - - torch.set_grad_enabled(False) - - def build_states(self, args, client, sentence_id): - # Initialize states here, for example add customized entry to states - # This function will be called at beginning of every new sentence - states = SpeechStates(args, client, sentence_id, self) - self.initialize_states(states) - return states - - def to_device(self, tensor): - if self.gpu: - return tensor.cuda() - else: - return tensor.cpu() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--model-path', type=str, required=True, - help='path to your pretrained model.') - parser.add_argument("--data-bin", type=str, required=True, - help="Path of data binary") - parser.add_argument("--config", type=str, default=None, - help="Path to config yaml file") - parser.add_argument("--global-stats", type=str, default=None, - help="Path to json file containing cmvn stats") - parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for target text") - parser.add_argument("--tgt-splitter-path", type=str, default=None, - help="Subword splitter model path for target text") - parser.add_argument("--user-dir", type=str, default="examples/simultaneous_translation", - help="User directory for simultaneous translation") - parser.add_argument("--max-len", type=int, default=200, - help="Max length of translation") - parser.add_argument("--force-finish", default=False, action="store_true", - help="Force the model to finish the hypothsis if the source is not finished") - parser.add_argument("--shift-size", type=int, default=SHIFT_SIZE, - help="Shift size of feature extraction window.") - parser.add_argument("--window-size", type=int, default=WINDOW_SIZE, - help="Window size of feature extraction window.") - parser.add_argument("--sample-rate", type=int, default=SAMPLE_RATE, - help="Sample rate") - parser.add_argument("--feature-dim", type=int, default=FEATURE_DIM, - help="Acoustic 
feature dimension.") - - # fmt: on - return parser - - def load_model_vocab(self, args): - - filename = args.model_path - if not os.path.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - - state = checkpoint_utils.load_checkpoint_to_cpu(filename) - - task_args = state["cfg"]["task"] - task_args.data = args.data_bin - - if args.config is not None: - task_args.config_yaml = args.config - - task = tasks.setup_task(task_args) - - # build model for ensemble - state["cfg"]["model"].load_pretrained_encoder_from = None - state["cfg"]["model"].load_pretrained_decoder_from = None - self.model = task.build_model(state["cfg"]["model"]) - self.model.load_state_dict(state["model"], strict=True) - self.model.eval() - self.model.share_memory() - - if self.gpu: - self.model.cuda() - - # Set dictionary - self.dict = {} - self.dict["tgt"] = task.target_dictionary - - def initialize_states(self, states): - self.feature_extractor.clear_cache() - states.units.source = TensorListEntry() - states.units.target = ListEntry() - states.incremental_states = dict() - - def segment_to_units(self, segment, states): - # Convert speech samples to features - features = self.feature_extractor(segment) - if features is not None: - return [features] - else: - return [] - - def units_to_segment(self, units, states): - # Merge sub word to full word. - if self.model.decoder.dictionary.eos() == units[0]: - return DEFAULT_EOS - - segment = [] - if None in units.value: - units.value.remove(None) - - for index in units: - if index is None: - units.pop() - token = self.model.decoder.dictionary.string([index]) - if token.startswith(BOW_PREFIX): - if len(segment) == 0: - segment += [token.replace(BOW_PREFIX, "")] - else: - for j in range(len(segment)): - units.pop() - - string_to_return = ["".join(segment)] - - if self.model.decoder.dictionary.eos() == units[0]: - string_to_return += [DEFAULT_EOS] - - return string_to_return - else: - segment += [token.replace(BOW_PREFIX, "")] - - if ( - len(units) > 0 - and self.model.decoder.dictionary.eos() == units[-1] - or len(states.units.target) > self.max_len - ): - tokens = [self.model.decoder.dictionary.string([unit]) for unit in units] - return ["".join(tokens).replace(BOW_PREFIX, "")] + [DEFAULT_EOS] - - return None - - def update_model_encoder(self, states): - if len(states.units.source) == 0: - return - src_indices = self.to_device( - states.units.source.value.unsqueeze(0) - ) - src_lengths = self.to_device( - torch.LongTensor([states.units.source.value.size(0)]) - ) - - states.encoder_states = self.model.encoder(src_indices, src_lengths) - torch.cuda.empty_cache() - - def update_states_read(self, states): - # Happens after a read action. 
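        # Note (clarifying comment, not in the original file): update_model_encoder
        # below re-runs self.model.encoder on the *entire* source feature sequence
        # accumulated in states.units.source, so every READ action pays for a full
        # re-encode; no incremental encoder state is kept between reads.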
- self.update_model_encoder(states) - - def policy(self, states): - if not getattr(states, "encoder_states", None): - return READ_ACTION - - tgt_indices = self.to_device( - torch.LongTensor( - [self.model.decoder.dictionary.eos()] - + [x for x in states.units.target.value if x is not None] - ).unsqueeze(0) - ) - - states.incremental_states["steps"] = { - "src": states.encoder_states["encoder_out"][0].size(0), - "tgt": 1 + len(states.units.target), - } - - states.incremental_states["online"] = {"only": torch.tensor(not states.finish_read())} - - x, outputs = self.model.decoder.forward( - prev_output_tokens=tgt_indices, - encoder_out=states.encoder_states, - incremental_state=states.incremental_states, - ) - - states.decoder_out = x - - states.decoder_out_extra = outputs - - torch.cuda.empty_cache() - - if outputs.action == 0: - return READ_ACTION - else: - return WRITE_ACTION - - def predict(self, states): - decoder_states = states.decoder_out - - lprobs = self.model.get_normalized_probs( - [decoder_states[:, -1:]], log_probs=True - ) - - index = lprobs.argmax(dim=-1) - - index = index[0, 0].item() - - if ( - self.force_finish - and index == self.model.decoder.dictionary.eos() - and not states.finish_read() - ): - # If we want to force finish the translation - # (don't stop before finish reading), return a None - # self.model.decoder.clear_cache(states.incremental_states) - index = None - - return index diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh deleted file mode 100644 index 9ecf1690c67f8a019009ef32d973fbd45b56c7ca..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_data="" -get_best_wer=true -dec_name="decode" -graph_name="graph" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -exp_root=$1 - -set -eu - -echo "==== WER w.r.t. pseudo transcript" -for x in $exp_root/*/${dec_name}_${split}*; do grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh; done - - -if [ ! -z $ref_data ]; then - echo "==== WER w.r.t. real transcript (select based on pseudo WER)" - ref_txt=$ref_data/$split/text - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - lmwt=$( - grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh | - sed 's/.*wer_\(.*\)$/\1/g' | sed 's/_/./g' - ) - tra=$x/scoring/$lmwt.tra - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:::g' | sed 's:::g' | \ - compute-wer --text --mode=present \ - ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra - done -fi - -if [ ! -z $ref_data ] && $get_best_wer; then - echo "==== WER w.r.t. 
real transcript (select based on true WER)" - ref_txt=$ref_data/$split/text - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:::g' | sed 's:::g' | \ - compute-wer --text --mode=present \ - ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra - done | sort -k2n | head -n1 - done -fi - -exit 0; diff --git a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/__init__.py deleted file mode 100644 index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__author__ = 'tylin' diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/data/data_utils.py b/spaces/OFA-Sys/OFA-Visual_Grounding/data/data_utils.py deleted file mode 100644 index 7f843789138c62668f9e1c4e7fd44299fb5ef768..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/data/data_utils.py +++ /dev/null @@ -1,601 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: .-.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - if values[0].dim() == 1: - res = values[0].new(len(values), size).fill_(pad_idx) - elif values[0].dim() == 2: - assert move_eos_to_beginning is False - res = values[0].new(len(values), size, values[0].size(1)).fill_(pad_idx) - else: - raise NotImplementedError - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. 
If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. - """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. 
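
    Example (a rough sketch of the replacement call; ``dataset`` here stands for
    any FairseqDataset instance and is not defined in this module)::

        indices, ignored = dataset.filter_indices_by_size(indices, max_positions)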
- - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. " - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). 
- required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). - """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. 
- however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - np.int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# 
returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py deleted file mode 100644 index eb81ded341257ba0a43c4d0867e8f3c83f276bc7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py +++ /dev/null @@ -1,600 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
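# Overview (inferred from the code below, not an authoritative description): the
# classes in this module wrap pieces of the Transformer (embeddings, encoder and
# decoder layers, final layer norms) so that each stage's forward() consumes and
# returns a plain tuple such as (x, encoder_padding_mask, prev_output_tokens);
# presumably this is what lets a pipeline-parallel scheduler pass activations
# between stages placed on different devices.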
- -import math -from collections import namedtuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import options, utils -from fairseq.modules import ( - AdaptiveSoftmax, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, -) - - -EncoderOut = namedtuple( - "TransformerEncoderOut", - [ - "encoder_out", # T x B x C - "encoder_padding_mask", # B x T - "encoder_embedding", # B x T x C - "encoder_states", # List[T x B x C] - ], -) - - -class TransformerEncoderEmbedding(nn.Module): - """ Encoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.max_source_positions = args.max_source_positions - self.embed_tokens = embed_tokens - if isinstance(embed_tokens, nn.ModuleList): - self.padding_idx = embed_tokens[0].padding_idx - embed_dim = sum(e.embedding_dim for e in embed_tokens) - else: - self.padding_idx = embed_tokens.padding_idx - embed_dim = embed_tokens.embedding_dim - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - if getattr(args, "layernorm_embedding", False): - self.layernorm_embedding = LayerNorm(embed_dim) - else: - self.layernorm_embedding = None - - def forward(self, input): - # embed tokens and positions - src_tokens = input[0] - prev_output_tokens = input[2] - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(src_tokens)) - - embedded = torch.cat(x_embed_list, dim=-1) - else: - embedded = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * embedded - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding: - x = self.layernorm_embedding(x) - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerEncoderLayerNorm(nn.Module): - """ - Layer norm at the the end of all encoder layers if - args.encoder_enormalize_before = True - """ - - def __init__(self, args, embed_dim): - super().__init__() - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input): - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - if self.layer_norm: - x = self.layer_norm(x) - # keeping track of the incremental_state is not supported yet - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerDecoderEmbedding(nn.Module): - """ Decoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.share_input_output_embed = args.share_decoder_input_output_embed - input_embed_dim = ( - sum(e.embedding_dim for e in embed_tokens) - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.embedding_dim - ) - embed_dim = args.decoder_embed_dim - self.output_embed_dim = args.decoder_output_dim - - padding_idx = ( - embed_tokens[0].padding_idx - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.padding_idx - ) - self.max_target_positions = args.max_target_positions - - self.embed_tokens 
= embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - def forward(self, input): - mt_task = False - if isinstance(input, tuple): - if len(input) == 3: - encoder_out = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - incremental_state = None # Hardcoding to avoid passing of None objects - mt_task = True - else: - # HACK for now, need to fix (TODO sidgoyal) - prev_output_tokens = input[0] - # discard "src_lengths" - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - else: - prev_output_tokens = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(prev_output_tokens)) - - x = self.embed_scale * torch.cat(x_embed_list, dim=-1) - else: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - -class TransformerDecoderOutputLayer(nn.Module): - def __init__(self, args, embed_tokens, dictionary): - super().__init__() - self.share_input_output_embed = args.share_decoder_input_output_embed - self.embed_tokens = embed_tokens - self.output_embed_dim = args.decoder_output_dim - embed_dim = args.decoder_embed_dim - - self.project_out_dim = ( - Linear(embed_dim, self.output_embed_dim, bias=False) - if embed_dim != self.output_embed_dim and not args.tie_adaptive_weights - else None - ) - self.adaptive_softmax = None - if args.adaptive_softmax_cutoff is not None: - assert not isinstance(embed_tokens, nn.ModuleList) - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - self.output_embed_dim, - options.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_tokens = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_( - self.embed_tokens, mean=0, std=self.output_embed_dim ** -0.5 - ) - - if args.decoder_normalize_before and not getattr( - args, "no_decoder_final_norm", False - ): - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input, apply_final_proj=True): - if isinstance(input, tuple): - x = input[0] - else: - x = input - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) 
- - if self.project_out_dim is not None: - x = self.project_out_dim(x) - if apply_final_proj: - x = self.output_layer(x) - return x - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - if isinstance(self.embed_tokens, nn.ModuleList): - output = None - for i, emb in enumerate(self.embed_tokens): - sidx = i * emb.embedding_dim - eidx = (i + 1) * emb.embedding_dim - if output is None: - output = F.linear(features[:, :, sidx:eidx], emb.weight) - else: - output += F.linear(features[:, :, sidx:eidx], emb.weight) - - return output - else: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_tokens) - else: - return features - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.self_attn = MultiheadAttention( - self.embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.final_layer_norm = LayerNorm(self.embed_dim) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. 
- input[2] (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing) - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return (x, encoder_padding_mask, prev_output_tokens) - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.decoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.self_attn = MultiheadAttention( - embed_dim=self.embed_dim, - num_heads=args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=True, - ) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.decoder_normalize_before - - # use layerNorm rather than FusedLayerNorm for exporting. - # char_inputs can be used to determint this. 
- # TODO remove this once we update apex with the fix - export = getattr(args, "char_inputs", False) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=export) - self.need_attn = True - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (Tensor): encoder output of shape `(batch, src_len, embed_dim)` - input[2] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - # Note: incremental state is not yet supported - mt_task = False - if isinstance(input, tuple): - x = input[0] - encoder_out = input[1] - encoder_padding_mask = input[2] - incremental_state = None - mt_task = True - else: - x = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - # TODO: add back prev_self_attn_state, prev_attn_state, - # self_attn_padding_mask - prev_self_attn_state = None - prev_attn_state = None - self_attn_padding_mask = None - - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - if prev_self_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_self_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.self_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = 
self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp deleted file mode 100644 index ece47a8d908b93cec102743070c9057986d39d3f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp +++ /dev/null @@ -1,51 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include -#include - -std::vector -lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l); - -std::vector lightconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters); - -#define CHECK_CUDA(x) \ - AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - AT_ASSERTM(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -std::vector -lightconv_forward(at::Tensor input, at::Tensor filters, int padding_l) { - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_forward(input, filters, padding_l); -} - -std::vector lightconv_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - CHECK_INPUT(gradOutput); - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &lightconv_forward, "lighconv forward (CUDA)"); - m.def("backward", &lightconv_backward, "lighconv backward (CUDA)"); -} diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transpose_last.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transpose_last.py deleted file mode 100644 index e578b3ec5097bfac5c976b207ea46bec1d9bd4f5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transpose_last.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -transpose last 2 dimensions of the input -""" - -import torch.nn as nn - - -class TransposeLast(nn.Module): - def __init__(self, deconstruct_idx=None): - super().__init__() - self.deconstruct_idx = deconstruct_idx - - def forward(self, x): - if self.deconstruct_idx is not None: - x = x[self.deconstruct_idx] - return x.transpose(-2, -1) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/scoring/wer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/scoring/wer.py deleted file mode 100644 index 633dc47c247691c4c9e36cbdbab7d7cb74b38452..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/scoring/wer.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer -from fairseq.scoring.tokenizer import EvaluationTokenizer - - -@dataclass -class WerScorerConfig(FairseqDataclass): - wer_tokenizer: EvaluationTokenizer.ALL_TOKENIZER_TYPES = field( - default="none", metadata={"help": "sacreBLEU tokenizer to use for evaluation"} - ) - wer_remove_punct: bool = field( - default=False, metadata={"help": "remove punctuation"} - ) - wer_char_level: bool = field( - default=False, metadata={"help": "evaluate at character level"} - ) - wer_lowercase: bool = field(default=False, metadata={"help": "lowercasing"}) - - -@register_scorer("wer", dataclass=WerScorerConfig) -class WerScorer(BaseScorer): - def __init__(self, cfg): - super().__init__(cfg) - self.reset() - try: - import editdistance as ed - except ImportError: - raise ImportError("Please install editdistance to use WER scorer") - self.ed = ed - self.tokenizer = EvaluationTokenizer( - tokenizer_type=self.cfg.wer_tokenizer, - lowercase=self.cfg.wer_lowercase, - punctuation_removal=self.cfg.wer_remove_punct, - character_tokenization=self.cfg.wer_char_level, - ) - - def reset(self): - self.distance = 0 - self.ref_length = 0 - - def add_string(self, ref, pred): - ref_items = self.tokenizer.tokenize(ref).split() - pred_items = self.tokenizer.tokenize(pred).split() - self.distance += self.ed.eval(ref_items, pred_items) - self.ref_length += len(ref_items) - - def result_string(self): - return f"WER: {self.score():.2f}" - - def score(self): - return 100.0 * self.distance / self.ref_length if self.ref_length > 0 else 0 diff --git a/spaces/ORI-Muchim/RaidenTTS/transforms.py b/spaces/ORI-Muchim/RaidenTTS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/RaidenTTS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - 
outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + 
input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/OkayuTadano/OgiriMasters/README.md b/spaces/OkayuTadano/OgiriMasters/README.md deleted file mode 100644 index c3d21f82b0b4cfec48626c5c749c1606a9929b70..0000000000000000000000000000000000000000 --- a/spaces/OkayuTadano/OgiriMasters/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OgiriMasters -emoji: 🐠 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/DragGAN/README.md b/spaces/OpenGVLab/DragGAN/README.md deleted file mode 100644 index 09684a12629b17f406ccac5f62ebd6f394b855b9..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/DragGAN/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DragGAN -emoji: 🐢 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: gradio_app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_ade20k_sem_seg.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_ade20k_sem_seg.py deleted file mode 100644 index 8b4a58d8f2877544498e328b6d269f23aa1eb59f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_ade20k_sem_seg.py +++ /dev/null @@ -1,26 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import os -from pathlib import Path -import tqdm -from PIL import Image - - -def convert(input, output): - img = np.asarray(Image.open(input)) - assert img.dtype == np.uint8 - img = img - 1 # 0 (ignore) becomes 255. 
others are shifted by 1 - Image.fromarray(img).save(output) - - -if __name__ == "__main__": - dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "ADEChallengeData2016" - for name in ["training", "validation"]: - annotation_dir = dataset_dir / "annotations" / name - output_dir = dataset_dir / "annotations_detectron2" / name - output_dir.mkdir(parents=True, exist_ok=True) - for file in tqdm.tqdm(list(annotation_dir.iterdir())): - output_file = output_dir / file.name - convert(file, output_file) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/losses/ssim.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/losses/ssim.py deleted file mode 100644 index ee43a0095408eca98e253dea194db788446f9c0a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/losses/ssim.py +++ /dev/null @@ -1,74 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F - - -class SSIM(torch.nn.Module): - """SSIM. Modified from: - https://github.com/Po-Hsun-Su/pytorch-ssim/blob/master/pytorch_ssim/__init__.py - """ - - def __init__(self, window_size=11, size_average=True): - super().__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.register_buffer('window', self._create_window(window_size, self.channel)) - - def forward(self, img1, img2): - assert len(img1.shape) == 4 - - channel = img1.size()[1] - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = self._create_window(self.window_size, channel) - - # window = window.to(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return self._ssim(img1, img2, window, self.window_size, channel, self.size_average) - - def _gaussian(self, window_size, sigma): - gauss = torch.Tensor([ - np.exp(-(x - (window_size // 2)) ** 2 / float(2 * sigma ** 2)) for x in range(window_size) - ]) - return gauss / gauss.sum() - - def _create_window(self, window_size, channel): - _1D_window = self._gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - return _2D_window.expand(channel, 1, window_size, window_size).contiguous() - - def _ssim(self, img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=(window_size // 2), groups=channel) - mu2 = F.conv2d(img2, window, padding=(window_size // 2), groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d( - img1 * img1, window, padding=(window_size // 2), groups=channel) - mu1_sq - sigma2_sq = F.conv2d( - img2 * img2, window, padding=(window_size // 2), groups=channel) - mu2_sq - sigma12 = F.conv2d( - img1 * img2, window, padding=(window_size // 2), groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / \ - ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - - return ssim_map.mean(1).mean(1).mean(1) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs): - return diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/__init__.py deleted file mode 
100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/PIISA/PIISA_Demo/app.py b/spaces/PIISA/PIISA_Demo/app.py deleted file mode 100644 index 13451e94c8611930fe2d34b459fbaf9ee0acd4ad..0000000000000000000000000000000000000000 --- a/spaces/PIISA/PIISA_Demo/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -import os -from pii_transform.api.e2e import PiiTextProcessor -from pii_extract.defs import FMT_CONFIG_PLUGIN - -examples = [] -with open("examples.txt", "r") as f: - examples = f.readlines() -examples_truncated = [example[:50] + "..." for example in examples] -language_choices = { - "English": "en", - "Italian": "it", - "Spanish": "es", - "Portuguese": "pt", - "German": "de", - "French": "fr", -} -language_code = "en" -cache_dir = "/home/user/app/cache" -os.makedirs(cache_dir, exist_ok=True) -if os.path.isdir(cache_dir): - gr.Info("Cache directory created at "+cache_dir) -else: - gr.Warning("Cache directory creation error") - -policy_help_string = """ -Policies are defined as follows: - -1. **Annotate** - replace the PII instance by a \ string, i.e. include both the PII type and its value -2. **Redact** - all PII instances are replaced by a \ generic string -3. **Placeholder** - replace with a prototypical value -4. **Synthetic** - substitute with synthetic data - -For more information on the transformation policies, please refer to the guide [here](https://github.com/piisa/pii-transform/blob/main/doc/policies.md#pii-transformation-policies)""" - -header_string = """ -## [PIISA](https://privacyprotection.substack.com/p/towards-a-common-privacy-api-introducing) -**PIISA** (Personally Identifiable Information Standard Architecture) is a set of tools to detect and remediate -PII within large scale language data. It uses best of breed tools like [🤗 transformers](https://huggingface.co/docs/transformers/index) libraries, -[spaCy](https://spacy.io/), regular expressions, [Faker](https://faker.readthedocs.io/en/master/) and [Presidio](https://microsoft.github.io/presidio/) -to leverage best practices for effectively managing data privacy in accordance with your privacy policies. -Important links: -1. [PIISA API docs](https://github.com/piisa/piisa) -2. [Blog](https://privacyprotection.substack.com/) -3. 
[LinkedIn](https://www.linkedin.com/company/piisa/) - -This demo uses the multi-lingual [wikineural model](https://huggingface.co/Babelscape/wikineural-multilingual-ner) from [Babelscape](https://huggingface.co/Babelscape). - -### ▵ We're looking for any feedback and/or suggestions, so please open a new thread in the Discussions tab ▵ -""" - - -def change_language(language_selection): - global language_code - language_code = language_choices[language_selection] - gr.Info(f"{language_selection} selected") - - -def process(text, policy): - # Create the object, defining the language to use and the policy - # Further customization is possible by providing a config - policy = policy.lower() - if text == "": - print("Empty text field") - gr.Warning("No text present") - return "" - - # Custom config to prevent loading of the Presidio plugin - proc = PiiTextProcessor( - lang=language_code, default_policy=policy, config="config.json" - ) - - # Process a text buffer and get the transformed buffer - outbuf = proc(text) - return outbuf - - -def get_full_example(idx): - return examples[idx] - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(value=header_string) - with gr.Column(scale=0, min_width=100): - pass - with gr.Column(scale=0, min_width=100): - logo = gr.Image( - "image.jpeg", - height=100, - width=100, - show_label=False, - show_download_button=False, - show_share_button=False, - mask_opacity=1.0, - ) - with gr.Row(): - with gr.Column(scale=2, min_width=400): - text_original = gr.Textbox( - label="Original Text", - lines=13, - placeholder="Enter the text you would like to analyze, or select from one of the examples below", - ) - with gr.Column(scale=0, min_width=25): - pass - with gr.Column(scale=0, min_width=150): - gr.Markdown(value="""

          Select Language

          """) - lang_picker = gr.Dropdown( - choices=list(language_choices.keys()), - label="", - value=list(language_choices.keys())[0], - type="value", - container=False, - ) - lang_picker.select(change_language, inputs=lang_picker, outputs=None) - gr.Markdown(value="""

          Select Policy

          """) - annotate_btn = gr.Button(value="Annotate", variant="primary", size="sm") - redact_btn = gr.Button(value="Redact", variant="primary", size="sm") - anonymize_btn = gr.Button(value="Synthetic", variant="primary", size="sm") - placeholder_btn = gr.Button( - value="Placeholder", variant="primary", size="sm" - ) - - with gr.Column(scale=0, min_width=25): - pass - with gr.Column( - scale=2, - min_width=400, - ): - text_modified = gr.TextArea( - label="Transformed Text", - lines=13, - show_copy_button=True, - interactive=False, - ) - annotate_btn.click( - fn=process, inputs=[text_original, annotate_btn], outputs=text_modified - ) - redact_btn.click( - fn=process, - inputs=[ - text_original, - gr.Text(value="redact", visible=False), - ], - outputs=text_modified, - ) - anonymize_btn.click( - fn=process, - inputs=[ - text_original, - gr.Text(value="synthetic", visible=False), - ], - outputs=text_modified, - ) - placeholder_btn.click( - fn=process, - inputs=[ - text_original, - gr.Text(value="placeholder", visible=False), - ], - outputs=text_modified, - ) - with gr.Row(): - example_selector = gr.Dropdown( - examples_truncated, type="index", label="Examples" - ) - example_selector.select( - get_full_example, inputs=example_selector, outputs=[text_original] - ) - with gr.Accordion(label="Help Panel", open=False): - gr.Markdown(value=policy_help_string) -demo.queue().launch() - diff --git a/spaces/PaddlePaddle/chinese-stable-diffusion/style.css b/spaces/PaddlePaddle/chinese-stable-diffusion/style.css deleted file mode 100644 index e223e2ce8ce1368bfcb8b715174ec79420f44915..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/chinese-stable-diffusion/style.css +++ /dev/null @@ -1,70 +0,0 @@ -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: black; - background: black; -} -input[type='range'] { - accent-color: black; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py 
b/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py deleted file mode 100644 index c8f5316cbcf3896ba9de7ca2c801eba512f01d5e..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='APCHead', - in_channels=2048, - in_index=3, - channels=512, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=dict(type='SyncBN', requires_grad=True), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/utils.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/utils.py deleted file mode 100644 index 0f5712cb42c38a2e8563bf563efb6681383cab9b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. 
- """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,57 +0,0 @@ -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs, prev_output) diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip.py deleted file mode 100644 index 38678f65ea2c276b351c2c97d429ebc2525ddcf7..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip.py +++ /dev/null @@ -1,238 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import warnings -warnings.filterwarnings("ignore") - -from models.vit import VisionTransformer, interpolate_pos_embed -from models.med import BertConfig, BertModel, BertLMHeadModel -from transformers import BertTokenizer - -import torch -from torch import nn -import torch.nn.functional as F - -import os -from urllib.parse import urlparse -from timm.models.hub import download_cached_file - -class BLIP_Base(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 224, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_encoder = BertModel(config=med_config, add_pooling_layer=False) - - - def forward(self, image, caption, mode): - - assert mode in ['image', 'text', 'multimodal'], "mode parameter must be image, text, or multimodal" - text = self.tokenizer(caption, return_tensors="pt").to(image.device) - - if mode=='image': - # return image features - image_embeds = self.visual_encoder(image) - return image_embeds - - elif mode=='text': - # return text features - text_output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - return text_output.last_hidden_state - - elif mode=='multimodal': - # return multimodel features - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - text.input_ids[:,0] = self.tokenizer.enc_token_id - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - return output.last_hidden_state - - - -class BLIP_Decoder(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 384, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - prompt = 'a picture of ', - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_decoder = BertLMHeadModel(config=med_config) - - self.prompt = prompt - self.prompt_length = len(self.tokenizer(self.prompt).input_ids)-1 - - - def forward(self, image, caption): - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - text = self.tokenizer(caption, padding='longest', truncation=True, max_length=40, return_tensors="pt").to(image.device) - - text.input_ids[:,0] = self.tokenizer.bos_token_id - - decoder_targets = text.input_ids.masked_fill(text.input_ids == self.tokenizer.pad_token_id, -100) - 
decoder_targets[:,:self.prompt_length] = -100 - - decoder_output = self.text_decoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - labels = decoder_targets, - return_dict = True, - ) - loss_lm = decoder_output.loss - - return loss_lm - - def generate(self, image, sample=False, num_beams=3, max_length=30, min_length=10, top_p=0.9, repetition_penalty=1.0): - image_embeds = self.visual_encoder(image) - - if not sample: - image_embeds = image_embeds.repeat_interleave(num_beams,dim=0) - - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - model_kwargs = {"encoder_hidden_states": image_embeds, "encoder_attention_mask":image_atts} - - prompt = [self.prompt] * image.size(0) - input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids.to(image.device) - input_ids[:,0] = self.tokenizer.bos_token_id - input_ids = input_ids[:, :-1] - - if sample: - #nucleus sampling - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - do_sample=True, - top_p=top_p, - num_return_sequences=1, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=1.1, - **model_kwargs) - else: - #beam search - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - num_beams=num_beams, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=repetition_penalty, - **model_kwargs) - - captions = [] - for output in outputs: - caption = self.tokenizer.decode(output, skip_special_tokens=True) - captions.append(caption[len(self.prompt):]) - return captions - - -def blip_decoder(pretrained='',**kwargs): - model = BLIP_Decoder(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - assert(len(msg.missing_keys)==0) - return model - -def blip_feature_extractor(pretrained='',**kwargs): - model = BLIP_Base(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - assert(len(msg.missing_keys)==0) - return model - -def init_tokenizer(): - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - tokenizer.add_special_tokens({'bos_token':'[DEC]'}) - tokenizer.add_special_tokens({'additional_special_tokens':['[ENC]']}) - tokenizer.enc_token_id = tokenizer.additional_special_tokens_ids[0] - return tokenizer - - -def create_vit(vit, image_size, use_grad_checkpointing=False, ckpt_layer=0, drop_path_rate=0): - - assert vit in ['base', 'large'], "vit parameter must be base or large" - if vit=='base': - vision_width = 768 - visual_encoder = VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=12, - num_heads=12, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0 or drop_path_rate - ) - elif vit=='large': - vision_width = 1024 - visual_encoder = VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=24, - num_heads=16, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0.1 or drop_path_rate - ) - return visual_encoder, vision_width - -def is_url(url_or_filename): - parsed = urlparse(url_or_filename) - return parsed.scheme in ("http", "https") - -def load_checkpoint(model,url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True) - checkpoint = 
torch.load(cached_file, map_location='cpu') - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location='cpu') - else: - raise RuntimeError('checkpoint url or path is invalid') - - state_dict = checkpoint['model'] - - state_dict['visual_encoder.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder.pos_embed'],model.visual_encoder) - if 'visual_encoder_m.pos_embed' in model.state_dict().keys(): - state_dict['visual_encoder_m.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder_m.pos_embed'], - model.visual_encoder_m) - for key in model.state_dict().keys(): - if key in state_dict.keys(): - if state_dict[key].shape!=model.state_dict()[key].shape: - del state_dict[key] - - msg = model.load_state_dict(state_dict,strict=False) - print('load checkpoint from %s'%url_or_filename) - return model,msg - diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
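        # 'output' accumulates the per-layer skip contributions of this
        # WaveNet-style gated residual block; each iteration of the loop below
        # adds its skip slice before the final mask is applied.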
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
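        # After the reshape/permute, the trailing dimension holds the
        # (num_bins * 3 - 1) spline parameters per channel and time step; they
        # are split below into unnormalized widths, heights, and derivatives
        # for the piecewise rational-quadratic transform.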
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/colab_notebooks/README.md b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/colab_notebooks/README.md deleted file mode 100644 index 894645f04740b0dc56805ab58ad2df0556fe29a2..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/colab_notebooks/README.md +++ /dev/null @@ -1 +0,0 @@ -Open In Colab diff --git a/spaces/Raghav001/PDF/README.md b/spaces/Raghav001/PDF/README.md deleted file mode 100644 index 36b5934dd030db6b38d0ad3aca3a5f23f431c4a3..0000000000000000000000000000000000000000 --- a/spaces/Raghav001/PDF/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatPDF -emoji: 💻 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Raghav001/Pinecone ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RandomCatLover/plants_disease/app.py b/spaces/RandomCatLover/plants_disease/app.py deleted file mode 100644 index 9df30765d763f46890001b17e8dcb03786f59d60..0000000000000000000000000000000000000000 --- a/spaces/RandomCatLover/plants_disease/app.py +++ /dev/null @@ -1,76 +0,0 @@ -# %% -import gradio as gr -import tensorflow as tf -import cv2 -import os - -model_folder = 'model' -destination = model_folder -repo_url = "https://huggingface.co/RandomCatLover/plants_disease" - -if not os.path.exists(destination): - import subprocess - #repo_url = os.getenv("GIT_CORE") - command = f'git clone {repo_url} {destination}' - try: - subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)#, env=env) - print('Repository cloned successfully.') - except subprocess.CalledProcessError as e: - print(f'Error cloning repository: {e.output.decode()}') - -destination = 'explainer_tf_mobilenetv2' -if not os.path.exists(destination): - import subprocess - repo_url = os.getenv("GIT_CORE") - command = f'git clone {repo_url}' - try: - subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)#, env=env) - print('Repository cloned successfully.') - except subprocess.CalledProcessError as e: - print(f'Error cloning repository: {e.output.decode()}') - -from explainer_tf_mobilenetv2.explainer import explainer -# %% -with open(f'{model_folder}/labels.txt', 'r') as f: - labels = f.read().split('\n') - -# model = tf.saved_model.load(f'{model_folder}/last_layer.hdf5') -model = tf.keras.models.load_model(f'{model_folder}/last_layer.hdf5') -#model = tf.keras.models.load_model(f'{model_folder}/MobileNetV2_last_layer.hdf5') -# %% -def classify_image(inp): - inp = cv2.resize(inp, (224,224,)) - inp = inp.reshape((-1, 224, 224, 3)) - inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) - prediction = model.predict(inp).flatten() - print(prediction) - confidences = {labels[i]: float(prediction[i]) for i in range(len(labels))} - return confidences 
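# classify_image returns a {label: probability} dict built from labels.txt,
# which the gr.Label output below renders as the top-3 classes. A minimal
# standalone call (illustrative only; the input array is a placeholder, not
# taken from this repo) could look like:
#   import numpy as np
#   scores = classify_image(np.zeros((300, 400, 3), dtype=np.uint8))
#   best_label = max(scores, key=scores.get)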
- -def explainer_wrapper(inp): - return explainer(inp, model) - -with gr.Blocks() as demo: - with gr.Column(): - with gr.Row(): - with gr.Column(): - image = gr.inputs.Image(shape=(224, 224)) - with gr.Row(): - classify = gr.Button("Classify") - interpret = gr.Button("Interpret") - with gr.Column(): - label = gr.outputs.Label(num_top_classes=3) - interpretation = gr.Plot(label="Interpretation") - # interpretation = gr.outputs.Image(type="numpy", label="Interpretation") - gr.Examples(["TomatoHealthy2.jpg", "TomatoYellowCurlVirus3.jpg", "AppleCedarRust3.jpg"], - inputs=[image],) - classify.click(classify_image, image, label, queue=True) - interpret.click(explainer_wrapper, image, interpretation, queue=True) - - -demo.queue(concurrency_count=3).launch() -#%% -# gr.Interface(fn=classify_image, -# inputs=gr.Image(shape=(224, 224)), -# outputs=gr.Label(num_top_classes=3), -# examples=["TomatoHealthy2.jpg", "TomatoYellowCurlVirus3.jpg", "AppleCedarRust3.jpg"]).launch() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py deleted file mode 100644 index cb9fc820cb352aa6e92705aab4f55cbc2eff96bc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py +++ /dev/null @@ -1,98 +0,0 @@ -# flake8: noqa - -import abc -import sys -import pathlib -from contextlib import suppress - -if sys.version_info >= (3, 10): - from zipfile import Path as ZipPath # type: ignore -else: - from ..zipp import Path as ZipPath # type: ignore - - -try: - from typing import runtime_checkable # type: ignore -except ImportError: - - def runtime_checkable(cls): # type: ignore - return cls - - -try: - from typing import Protocol # type: ignore -except ImportError: - Protocol = abc.ABC # type: ignore - - -class TraversableResourcesLoader: - """ - Adapt loaders to provide TraversableResources and other - compatibility. - - Used primarily for Python 3.9 and earlier where the native - loaders do not yet implement TraversableResources. - """ - - def __init__(self, spec): - self.spec = spec - - @property - def path(self): - return self.spec.origin - - def get_resource_reader(self, name): - from . 
import readers, _adapters - - def _zip_reader(spec): - with suppress(AttributeError): - return readers.ZipReader(spec.loader, spec.name) - - def _namespace_reader(spec): - with suppress(AttributeError, ValueError): - return readers.NamespaceReader(spec.submodule_search_locations) - - def _available_reader(spec): - with suppress(AttributeError): - return spec.loader.get_resource_reader(spec.name) - - def _native_reader(spec): - reader = _available_reader(spec) - return reader if hasattr(reader, 'files') else None - - def _file_reader(spec): - try: - path = pathlib.Path(self.path) - except TypeError: - return None - if path.exists(): - return readers.FileReader(self) - - return ( - # native reader if it supplies 'files' - _native_reader(self.spec) - or - # local ZipReader if a zip module - _zip_reader(self.spec) - or - # local NamespaceReader if a namespace module - _namespace_reader(self.spec) - or - # local FileReader - _file_reader(self.spec) - # fallback - adapt the spec ResourceReader to TraversableReader - or _adapters.CompatibilityFiles(self.spec) - ) - - -def wrap_spec(package): - """ - Construct a package spec with traversable compatibility - on the spec/loader/reader. - - Supersedes _adapters.wrap_spec to use TraversableResourcesLoader - from above for older Python compatibility (<3.10). - """ - from . import _adapters - - return _adapters.SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader) diff --git a/spaces/Reself/StableVideo/ldm/modules/encoders/__init__.py b/spaces/Reself/StableVideo/ldm/modules/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/sabl_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/sabl_head.py deleted file mode 100644 index 5153996aeb706d103d1ad14b61734914eddb7693..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/sabl_head.py +++ /dev/null @@ -1,572 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, kaiming_init, normal_init, xavier_init -from mmcv.runner import force_fp32 - -from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.losses import accuracy - - -@HEADS.register_module() -class SABLHead(nn.Module): - """Side-Aware Boundary Localization (SABL) for RoI-Head. - - Side-Aware features are extracted by conv layers - with an attention mechanism. - Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented in BucketingBBoxCoder. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - cls_in_channels (int): Input channels of cls RoI feature. \ - Defaults to 256. - reg_in_channels (int): Input channels of reg RoI feature. \ - Defaults to 256. - roi_feat_size (int): Size of RoI features. Defaults to 7. - reg_feat_up_ratio (int): Upsample ratio of reg features. \ - Defaults to 2. 
- reg_pre_kernel (int): Kernel of 2D conv layers before \ - attention pooling. Defaults to 3. - reg_post_kernel (int): Kernel of 1D conv layers after \ - attention pooling. Defaults to 3. - reg_pre_num (int): Number of pre convs. Defaults to 2. - reg_post_num (int): Number of post convs. Defaults to 1. - num_classes (int): Number of classes in dataset. Defaults to 80. - cls_out_channels (int): Hidden channels in cls fcs. Defaults to 1024. - reg_offset_out_channels (int): Hidden and output channel \ - of reg offset branch. Defaults to 256. - reg_cls_out_channels (int): Hidden and output channel \ - of reg cls branch. Defaults to 256. - num_cls_fcs (int): Number of fcs for cls branch. Defaults to 1. - num_reg_fcs (int): Number of fcs for reg branch.. Defaults to 0. - reg_class_agnostic (bool): Class agnostic regresion or not. \ - Defaults to True. - norm_cfg (dict): Config of norm layers. Defaults to None. - bbox_coder (dict): Config of bbox coder. Defaults 'BucketingBBoxCoder'. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. - """ - - def __init__(self, - num_classes, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=1.7), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=0.1, loss_weight=1.0)): - super(SABLHead, self).__init__() - self.cls_in_channels = cls_in_channels - self.reg_in_channels = reg_in_channels - self.roi_feat_size = roi_feat_size - self.reg_feat_up_ratio = int(reg_feat_up_ratio) - self.num_buckets = bbox_coder['num_buckets'] - assert self.reg_feat_up_ratio // 2 >= 1 - self.up_reg_feat_size = roi_feat_size * self.reg_feat_up_ratio - assert self.up_reg_feat_size == bbox_coder['num_buckets'] - self.reg_pre_kernel = reg_pre_kernel - self.reg_post_kernel = reg_post_kernel - self.reg_pre_num = reg_pre_num - self.reg_post_num = reg_post_num - self.num_classes = num_classes - self.cls_out_channels = cls_out_channels - self.reg_offset_out_channels = reg_offset_out_channels - self.reg_cls_out_channels = reg_cls_out_channels - self.num_cls_fcs = num_cls_fcs - self.num_reg_fcs = num_reg_fcs - self.reg_class_agnostic = reg_class_agnostic - assert self.reg_class_agnostic - self.norm_cfg = norm_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.cls_fcs = self._add_fc_branch(self.num_cls_fcs, - self.cls_in_channels, - self.roi_feat_size, - self.cls_out_channels) - - self.side_num = int(np.ceil(self.num_buckets / 2)) - - if self.reg_feat_up_ratio > 1: - self.upsample_x = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - self.upsample_y = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - - self.reg_pre_convs = nn.ModuleList() - for i in 
range(self.reg_pre_num): - reg_pre_conv = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=reg_pre_kernel, - padding=reg_pre_kernel // 2, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_pre_convs.append(reg_pre_conv) - - self.reg_post_conv_xs = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_x = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(1, reg_post_kernel), - padding=(0, reg_post_kernel // 2), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_xs.append(reg_post_conv_x) - self.reg_post_conv_ys = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_y = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(reg_post_kernel, 1), - padding=(reg_post_kernel // 2, 0), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_ys.append(reg_post_conv_y) - - self.reg_conv_att_x = nn.Conv2d(reg_in_channels, 1, 1) - self.reg_conv_att_y = nn.Conv2d(reg_in_channels, 1, 1) - - self.fc_cls = nn.Linear(self.cls_out_channels, self.num_classes + 1) - self.relu = nn.ReLU(inplace=True) - - self.reg_cls_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_cls_out_channels) - self.reg_offset_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_offset_out_channels) - self.fc_reg_cls = nn.Linear(self.reg_cls_out_channels, 1) - self.fc_reg_offset = nn.Linear(self.reg_offset_out_channels, 1) - - def _add_fc_branch(self, num_branch_fcs, in_channels, roi_feat_size, - fc_out_channels): - in_channels = in_channels * roi_feat_size * roi_feat_size - branch_fcs = nn.ModuleList() - for i in range(num_branch_fcs): - fc_in_channels = (in_channels if i == 0 else fc_out_channels) - branch_fcs.append(nn.Linear(fc_in_channels, fc_out_channels)) - return branch_fcs - - def init_weights(self): - for module_list in [ - self.reg_cls_fcs, self.reg_offset_fcs, self.cls_fcs - ]: - for m in module_list.modules(): - if isinstance(m, nn.Linear): - xavier_init(m, distribution='uniform') - if self.reg_feat_up_ratio > 1: - kaiming_init(self.upsample_x, distribution='normal') - kaiming_init(self.upsample_y, distribution='normal') - - normal_init(self.reg_conv_att_x, 0, 0.01) - normal_init(self.reg_conv_att_y, 0, 0.01) - normal_init(self.fc_reg_offset, 0, 0.001) - normal_init(self.fc_reg_cls, 0, 0.01) - normal_init(self.fc_cls, 0, 0.01) - - def cls_forward(self, cls_x): - cls_x = cls_x.view(cls_x.size(0), -1) - for fc in self.cls_fcs: - cls_x = self.relu(fc(cls_x)) - cls_score = self.fc_cls(cls_x) - return cls_score - - def attention_pool(self, reg_x): - """Extract direction-specific features fx and fy with attention - methanism.""" - reg_fx = reg_x - reg_fy = reg_x - reg_fx_att = self.reg_conv_att_x(reg_fx).sigmoid() - reg_fy_att = self.reg_conv_att_y(reg_fy).sigmoid() - reg_fx_att = reg_fx_att / reg_fx_att.sum(dim=2).unsqueeze(2) - reg_fy_att = reg_fy_att / reg_fy_att.sum(dim=3).unsqueeze(3) - reg_fx = (reg_fx * reg_fx_att).sum(dim=2) - reg_fy = (reg_fy * reg_fy_att).sum(dim=3) - return reg_fx, reg_fy - - def side_aware_feature_extractor(self, reg_x): - """Refine and extract side-aware features without split them.""" - for reg_pre_conv in self.reg_pre_convs: - reg_x = reg_pre_conv(reg_x) - reg_fx, reg_fy = self.attention_pool(reg_x) - - if self.reg_post_num > 0: - reg_fx = reg_fx.unsqueeze(2) - reg_fy = reg_fy.unsqueeze(3) - for i in range(self.reg_post_num): - reg_fx = self.reg_post_conv_xs[i](reg_fx) - reg_fy = self.reg_post_conv_ys[i](reg_fy) - reg_fx = 
reg_fx.squeeze(2) - reg_fy = reg_fy.squeeze(3) - if self.reg_feat_up_ratio > 1: - reg_fx = self.relu(self.upsample_x(reg_fx)) - reg_fy = self.relu(self.upsample_y(reg_fy)) - reg_fx = torch.transpose(reg_fx, 1, 2) - reg_fy = torch.transpose(reg_fy, 1, 2) - return reg_fx.contiguous(), reg_fy.contiguous() - - def reg_pred(self, x, offset_fcs, cls_fcs): - """Predict bucketing estimation (cls_pred) and fine regression (offset - pred) with side-aware features.""" - x_offset = x.view(-1, self.reg_in_channels) - x_cls = x.view(-1, self.reg_in_channels) - - for fc in offset_fcs: - x_offset = self.relu(fc(x_offset)) - for fc in cls_fcs: - x_cls = self.relu(fc(x_cls)) - offset_pred = self.fc_reg_offset(x_offset) - cls_pred = self.fc_reg_cls(x_cls) - - offset_pred = offset_pred.view(x.size(0), -1) - cls_pred = cls_pred.view(x.size(0), -1) - - return offset_pred, cls_pred - - def side_aware_split(self, feat): - """Split side-aware features aligned with orders of bucketing - targets.""" - l_end = int(np.ceil(self.up_reg_feat_size / 2)) - r_start = int(np.floor(self.up_reg_feat_size / 2)) - feat_fl = feat[:, :l_end] - feat_fr = feat[:, r_start:].flip(dims=(1, )) - feat_fl = feat_fl.contiguous() - feat_fr = feat_fr.contiguous() - feat = torch.cat([feat_fl, feat_fr], dim=-1) - return feat - - def bbox_pred_split(self, bbox_pred, num_proposals_per_img): - """Split batch bbox prediction back to each image.""" - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_preds = bucket_cls_preds.split(num_proposals_per_img, 0) - bucket_offset_preds = bucket_offset_preds.split( - num_proposals_per_img, 0) - bbox_pred = tuple(zip(bucket_cls_preds, bucket_offset_preds)) - return bbox_pred - - def reg_forward(self, reg_x): - outs = self.side_aware_feature_extractor(reg_x) - edge_offset_preds = [] - edge_cls_preds = [] - reg_fx = outs[0] - reg_fy = outs[1] - offset_pred_x, cls_pred_x = self.reg_pred(reg_fx, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_y, cls_pred_y = self.reg_pred(reg_fy, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_x = self.side_aware_split(offset_pred_x) - offset_pred_y = self.side_aware_split(offset_pred_y) - cls_pred_x = self.side_aware_split(cls_pred_x) - cls_pred_y = self.side_aware_split(cls_pred_y) - edge_offset_preds = torch.cat([offset_pred_x, offset_pred_y], dim=-1) - edge_cls_preds = torch.cat([cls_pred_x, cls_pred_y], dim=-1) - - return (edge_cls_preds, edge_offset_preds) - - def forward(self, x): - - bbox_pred = self.reg_forward(x) - cls_score = self.cls_forward(x) - - return cls_score, bbox_pred - - def get_targets(self, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - neg_proposals = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels = [res.pos_gt_labels for res in sampling_results] - cls_reg_targets = self.bucket_target(pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, - rcnn_train_cfg) - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) = cls_reg_targets - return (labels, label_weights, (bucket_cls_targets, - bucket_offset_targets), - (bucket_cls_weights, bucket_offset_weights)) - - def bucket_target(self, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - rcnn_train_cfg, - concat=True): - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, 
bucket_offset_weights) = multi_apply( - self._bucket_target_single, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bucket_cls_targets = torch.cat(bucket_cls_targets, 0) - bucket_cls_weights = torch.cat(bucket_cls_weights, 0) - bucket_offset_targets = torch.cat(bucket_offset_targets, 0) - bucket_offset_weights = torch.cat(bucket_offset_weights, 0) - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def _bucket_target_single(self, pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, cfg): - """Compute bucketing estimation targets and fine regression targets for - a single image. - - Args: - pos_proposals (Tensor): positive proposals of a single image, - Shape (n_pos, 4) - neg_proposals (Tensor): negative proposals of a single image, - Shape (n_neg, 4). - pos_gt_bboxes (Tensor): gt bboxes assigned to positive proposals - of a single image, Shape (n_pos, 4). - pos_gt_labels (Tensor): gt labels assigned to positive proposals - of a single image, Shape (n_pos, ). - cfg (dict): Config of calculating targets - - Returns: - tuple: - - - labels (Tensor): Labels in a single image. \ - Shape (n,). - - label_weights (Tensor): Label weights in a single image.\ - Shape (n,) - - bucket_cls_targets (Tensor): Bucket cls targets in \ - a single image. Shape (n, num_buckets*2). - - bucket_cls_weights (Tensor): Bucket cls weights in \ - a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset targets \ - in a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset weights \ - in a single image. Shape (n, num_buckets*2). - """ - num_pos = pos_proposals.size(0) - num_neg = neg_proposals.size(0) - num_samples = num_pos + num_neg - labels = pos_gt_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_proposals.new_zeros(num_samples) - bucket_cls_targets = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_cls_weights = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_offset_targets = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - bucket_offset_weights = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - if num_pos > 0: - labels[:num_pos] = pos_gt_labels - label_weights[:num_pos] = 1.0 - (pos_bucket_offset_targets, pos_bucket_offset_weights, - pos_bucket_cls_targets, - pos_bucket_cls_weights) = self.bbox_coder.encode( - pos_proposals, pos_gt_bboxes) - bucket_cls_targets[:num_pos, :] = pos_bucket_cls_targets - bucket_cls_weights[:num_pos, :] = pos_bucket_cls_weights - bucket_offset_targets[:num_pos, :] = pos_bucket_offset_targets - bucket_offset_weights[:num_pos, :] = pos_bucket_offset_weights - if num_neg > 0: - label_weights[-num_neg:] = 1.0 - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def loss(self, - cls_score, - bbox_pred, - rois, - labels, - label_weights, - bbox_targets, - bbox_weights, - reduction_override=None): - losses = dict() - if cls_score is not None: - avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.) 
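            # avg_factor normalizes the classification loss by the number of
            # samples with a positive label weight; the max(..., 1.) guard
            # avoids division by zero when no proposals carry weight.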
- losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['acc'] = accuracy(cls_score, labels) - - if bbox_pred is not None: - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_targets, bucket_offset_targets = bbox_targets - bucket_cls_weights, bucket_offset_weights = bbox_weights - # edge cls - bucket_cls_preds = bucket_cls_preds.view(-1, self.side_num) - bucket_cls_targets = bucket_cls_targets.view(-1, self.side_num) - bucket_cls_weights = bucket_cls_weights.view(-1, self.side_num) - losses['loss_bbox_cls'] = self.loss_bbox_cls( - bucket_cls_preds, - bucket_cls_targets, - bucket_cls_weights, - avg_factor=bucket_cls_targets.size(0), - reduction_override=reduction_override) - - losses['loss_bbox_reg'] = self.loss_bbox_reg( - bucket_offset_preds, - bucket_offset_targets, - bucket_offset_weights, - avg_factor=bucket_offset_targets.size(0), - reduction_override=reduction_override) - - return losses - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False, - cfg=None): - if isinstance(cls_score, list): - cls_score = sum(cls_score) / float(len(cls_score)) - scores = F.softmax(cls_score, dim=1) if cls_score is not None else None - - if bbox_pred is not None: - bboxes, confids = self.bbox_coder.decode(rois[:, 1:], bbox_pred, - img_shape) - else: - bboxes = rois[:, 1:].clone() - confids = None - if img_shape is not None: - bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1] - 1) - bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0] - 1) - - if rescale and bboxes.size(0) > 0: - if isinstance(scale_factor, float): - bboxes /= scale_factor - else: - bboxes /= torch.from_numpy(scale_factor).to(bboxes.device) - - if cfg is None: - return bboxes, scores - else: - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=confids) - - return det_bboxes, det_labels - - @force_fp32(apply_to=('bbox_preds', )) - def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas): - """Refine bboxes during training. - - Args: - rois (Tensor): Shape (n*bs, 5), where n is image number per GPU, - and bs is the sampled RoIs per image. - labels (Tensor): Shape (n*bs, ). - bbox_preds (list[Tensor]): Shape [(n*bs, num_buckets*2), \ - (n*bs, num_buckets*2)]. - pos_is_gts (list[Tensor]): Flags indicating if each positive bbox - is a gt bbox. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Refined bboxes of each image in a mini-batch. 
- """ - img_ids = rois[:, 0].long().unique(sorted=True) - assert img_ids.numel() == len(img_metas) - - bboxes_list = [] - for i in range(len(img_metas)): - inds = torch.nonzero( - rois[:, 0] == i, as_tuple=False).squeeze(dim=1) - num_rois = inds.numel() - - bboxes_ = rois[inds, 1:] - label_ = labels[inds] - edge_cls_preds, edge_offset_preds = bbox_preds - edge_cls_preds_ = edge_cls_preds[inds] - edge_offset_preds_ = edge_offset_preds[inds] - bbox_pred_ = [edge_cls_preds_, edge_offset_preds_] - img_meta_ = img_metas[i] - pos_is_gts_ = pos_is_gts[i] - - bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_, - img_meta_) - # filter gt bboxes - pos_keep = 1 - pos_is_gts_ - keep_inds = pos_is_gts_.new_ones(num_rois) - keep_inds[:len(pos_is_gts_)] = pos_keep - - bboxes_list.append(bboxes[keep_inds.type(torch.bool)]) - - return bboxes_list - - @force_fp32(apply_to=('bbox_pred', )) - def regress_by_class(self, rois, label, bbox_pred, img_meta): - """Regress the bbox for the predicted class. Used in Cascade R-CNN. - - Args: - rois (Tensor): shape (n, 4) or (n, 5) - label (Tensor): shape (n, ) - bbox_pred (list[Tensor]): shape [(n, num_buckets *2), \ - (n, num_buckets *2)] - img_meta (dict): Image meta info. - - Returns: - Tensor: Regressed bboxes, the same shape as input rois. - """ - assert rois.size(1) == 4 or rois.size(1) == 5 - - if rois.size(1) == 4: - new_rois, _ = self.bbox_coder.decode(rois, bbox_pred, - img_meta['img_shape']) - else: - bboxes, _ = self.bbox_coder.decode(rois[:, 1:], bbox_pred, - img_meta['img_shape']) - new_rois = torch.cat((rois[:, [0]], bboxes), dim=1) - - return new_rois diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124ed23fc6f27119a37bcb5ab004d3572be0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/spaces/RobinWZQ/CCLAP/style_trsfer.py b/spaces/RobinWZQ/CCLAP/style_trsfer.py deleted file mode 100644 index ca5473a7e7cc02f0ab12e4b78682c208ecad9e35..0000000000000000000000000000000000000000 --- a/spaces/RobinWZQ/CCLAP/style_trsfer.py +++ /dev/null @@ -1,80 +0,0 @@ -import argparse -import torch -from torchvision.utils import make_grid -from PIL import Image, ImageFile -from net import Net -from utils import DEVICE, test_transform -Image.MAX_IMAGE_PIXELS = None -ImageFile.LOAD_TRUNCATED_IMAGES = True - - - -def style_transfer_method(content_image,style_img): - main_parser = argparse.ArgumentParser(description="main parser") - subparsers = main_parser.add_subparsers(title="subcommands", dest="subcommand") - - main_parser.add_argument("--pretrained", type=bool, default=True, - help="whether to use the pre-trained checkpoints") - main_parser.add_argument("--requires_grad", type=bool, default=True, - help="set to True if the model requires model gradient") - - train_parser = subparsers.add_parser("train", help="training mode parser") - train_parser.add_argument("--training", type=bool, default=True) - train_parser.add_argument("--iterations", type=int, default=60000, - help="total training epochs (default: 160000)") - train_parser.add_argument("--batch_size", type=int, default=2, - help="training batch size (default: 8)") - train_parser.add_argument("--num_workers", type=int, default=2, - help="iterator threads (default: 8)") - train_parser.add_argument("--lr", type=float, default=1e-4, help="the learning rate during training (default: 1e-4)") - train_parser.add_argument("--content_folder", type=str, required = True, - help="the root of content images, the path should point to a folder") - train_parser.add_argument("--style_folder", type=str, required = True, - help="the root of style images, the path should point to a folder") - 
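    # The flags below set the logging interval and the per-stage weights for
    # the content, REMD, moment, and (optional) color losses used in training.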
train_parser.add_argument("--log_interval", type=int, default=10000, - help="number of images after which the training loss is logged (default: 20000)") - - train_parser.add_argument("--w_content1", type=float, default=12, help="the stage1 content loss weight") - train_parser.add_argument("--w_content2", type=float, default=9, help="the stage2 content loss weight") - train_parser.add_argument("--w_content3", type=float, default=7, help="the stage3 content loss weight") - train_parser.add_argument("--w_remd1", type=float, default=2, help="the stage1 remd loss weight") - train_parser.add_argument("--w_remd2", type=float, default=2, help="the stage2 remd loss weight") - train_parser.add_argument("--w_remd3", type=float, default=2, help="the stage3 remd loss weight") - train_parser.add_argument("--w_moment1", type=float, default=2, help="the stage1 moment loss weight") - train_parser.add_argument("--w_moment2", type=float, default=2, help="the stage2 moment loss weight") - train_parser.add_argument("--w_moment3", type=float, default=2, help="the stage3 moment loss weight") - train_parser.add_argument("--color_on", type=str, default=True, help="turn on the color loss") - train_parser.add_argument("--w_color1", type=float, default=0.25, help="the stage1 color loss weight") - train_parser.add_argument("--w_color2", type=float, default=0.5, help="the stage2 color loss weight") - train_parser.add_argument("--w_color3", type=float, default=1, help="the stage3 color loss weight") - - - eval_parser = subparsers.add_parser("eval", help="evaluation mode parser") - eval_parser.add_argument("--training", type=bool, default=False) - eval_parser.add_argument("--run_folder", type=bool, default=False) - - args = main_parser.parse_args() - - args.training = False - - model = Net(args) - model.eval() - model = model.to(DEVICE) - - tf = test_transform() - - Ic = tf(content_image).to(DEVICE) - Is = tf(Image.fromarray(style_img)).to(DEVICE) - - Ic = Ic.unsqueeze(dim=0) - Is = Is.unsqueeze(dim=0) - - with torch.no_grad(): - Ics = model(Ic, Is) - - grid = make_grid(Ics[0]) - # Add 0.5 after unnormalizing to [0, 255] to round to the nearest integer - ndarr = grid.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to("cpu", torch.uint8).numpy() - im = Image.fromarray(ndarr) - - return im diff --git a/spaces/Royir/SynGen/compute_loss.py b/spaces/Royir/SynGen/compute_loss.py deleted file mode 100644 index 09333f977570900c1e98d5300a5d3908f2152a50..0000000000000000000000000000000000000000 --- a/spaces/Royir/SynGen/compute_loss.py +++ /dev/null @@ -1,279 +0,0 @@ -import torch.distributions as dist -from typing import List, Dict -import itertools - -start_token = "<|startoftext|>" -end_token = "<|endoftext|>" - - -def _get_outside_indices(subtree_indices, attn_map_idx_to_wp): - flattened_subtree_indices = _flatten_indices(subtree_indices) - outside_indices = [ - map_idx - for map_idx in attn_map_idx_to_wp.keys() if (map_idx not in flattened_subtree_indices) - ] - return outside_indices - - -def _flatten_indices(related_indices): - flattened_related_indices = [] - for item in related_indices: - if isinstance(item, list): - flattened_related_indices.extend(item) - else: - flattened_related_indices.append(item) - return flattened_related_indices - - -def split_indices(related_indices: List[int]): - noun = [related_indices[-1]] # assumes noun is always last in the list - modifier = related_indices[:-1] - if isinstance(modifier, int): - modifier = [modifier] - return noun, modifier - - -def _symmetric_kl(attention_map1, 
attention_map2): - # Convert map into a single distribution: 16x16 -> 256 - if len(attention_map1.shape) > 1: - attention_map1 = attention_map1.reshape(-1) - if len(attention_map2.shape) > 1: - attention_map2 = attention_map2.reshape(-1) - - p = dist.Categorical(probs=attention_map1) - q = dist.Categorical(probs=attention_map2) - - kl_divergence_pq = dist.kl_divergence(p, q) - kl_divergence_qp = dist.kl_divergence(q, p) - - avg_kl_divergence = (kl_divergence_pq + kl_divergence_qp) / 2 - return avg_kl_divergence - - -def calculate_positive_loss(attention_maps, modifier, noun): - src_indices = modifier - dest_indices = noun - - if isinstance(src_indices, list) and isinstance(dest_indices, list): - wp_pos_loss = [ - _symmetric_kl(attention_maps[s], attention_maps[d]) - for (s, d) in itertools.product(src_indices, dest_indices) - ] - positive_loss = max(wp_pos_loss) - elif isinstance(dest_indices, list): - wp_pos_loss = [ - _symmetric_kl(attention_maps[src_indices], attention_maps[d]) - for d in dest_indices - ] - positive_loss = max(wp_pos_loss) - elif isinstance(src_indices, list): - wp_pos_loss = [ - _symmetric_kl(attention_maps[s], attention_maps[dest_indices]) - for s in src_indices - ] - positive_loss = max(wp_pos_loss) - else: - positive_loss = _symmetric_kl( - attention_maps[src_indices], attention_maps[dest_indices] - ) - - return positive_loss - - -def _calculate_outside_loss(attention_maps, src_indices, outside_loss): - negative_loss = [] - computed_pairs = set() - pair_counter = 0 - - for outside_idx in outside_loss: - if isinstance(src_indices, list): - wp_neg_loss = [] - for t in src_indices: - pair_key = (t, outside_idx) - if pair_key not in computed_pairs: - wp_neg_loss.append( - _symmetric_kl( - attention_maps[t], attention_maps[outside_idx] - ) - ) - computed_pairs.add(pair_key) - negative_loss.append(max(wp_neg_loss) if wp_neg_loss else 0) - pair_counter += 1 - - else: - pair_key = (src_indices, outside_idx) - if pair_key not in computed_pairs: - negative_loss.append( - _symmetric_kl( - attention_maps[src_indices], attention_maps[outside_idx] - ) - ) - computed_pairs.add(pair_key) - pair_counter += 1 - - return negative_loss, pair_counter - - -def align_wordpieces_indices( - wordpieces2indices, start_idx, target_word -): - """ - Aligns a `target_word` that contains more than one wordpiece (the first wordpiece is `start_idx`) - """ - - wp_indices = [start_idx] - wp = wordpieces2indices[start_idx].replace("", "") - - # Run over the next wordpieces in the sequence (which is why we use +1) - for wp_idx in range(start_idx + 1, len(wordpieces2indices)): - if wp == target_word: - break - - wp2 = wordpieces2indices[wp_idx].replace("", "") - if target_word.startswith(wp + wp2) and wp2 != target_word: - wp += wordpieces2indices[wp_idx].replace("", "") - wp_indices.append(wp_idx) - else: - wp_indices = ( - [] - ) # if there's no match, you want to clear the list and finish - break - - return wp_indices - - -def extract_attribution_indices(doc): - # doc = parser(prompt) - subtrees = [] - modifiers = ["amod", "nmod", "compound", "npadvmod", "advmod", "acomp"] - - for w in doc: - if w.pos_ not in ["NOUN", "PROPN"] or w.dep_ in modifiers: - continue - subtree = [] - stack = [] - for child in w.children: - if child.dep_ in modifiers: - subtree.append(child) - stack.extend(child.children) - - while stack: - node = stack.pop() - if node.dep_ in modifiers or node.dep_ == "conj": - subtree.append(node) - stack.extend(node.children) - if subtree: - subtree.append(w) - subtrees.append(subtree) - 
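    # Each collected subtree pairs a head noun with its attribution modifiers
    # (amod, compound, nmod, ...); these index groups drive the positive and
    # negative attention losses defined above.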
return subtrees - -def extract_attribution_indices_with_verbs(doc): - '''This function specifically addresses cases where a verb is between - a noun and its modifier. For instance: "a dog that is red" - here, the aux is between 'dog' and 'red'. ''' - - subtrees = [] - modifiers = ["amod", "nmod", "compound", "npadvmod", "advmod", "acomp", - 'relcl'] - for w in doc: - if w.pos_ not in ["NOUN", "PROPN"] or w.dep_ in modifiers: - continue - subtree = [] - stack = [] - for child in w.children: - if child.dep_ in modifiers: - if child.pos_ not in ['AUX', 'VERB']: - subtree.append(child) - stack.extend(child.children) - - while stack: - node = stack.pop() - if node.dep_ in modifiers or node.dep_ == "conj": - # we don't want to add 'is' or other verbs to the loss, we want their children - if node.pos_ not in ['AUX', 'VERB']: - subtree.append(node) - stack.extend(node.children) - if subtree: - subtree.append(w) - subtrees.append(subtree) - return subtrees - -def extract_attribution_indices_with_verb_root(doc): - '''This function specifically addresses cases where a verb is between - a noun and its modifier. For instance: "a dog that is red" - here, the aux is between 'dog' and 'red'. ''' - - subtrees = [] - modifiers = ["amod", "nmod", "compound", "npadvmod", "advmod", "acomp"] - for w in doc: - subtree = [] - stack = [] - - # if w is a verb/aux and has a noun child and a modifier child, add them to the stack - if w.pos_ != 'AUX' or w.dep_ in modifiers: - continue - - for child in w.children: - if child.dep_ in modifiers or child.pos_ in ['NOUN', 'PROPN']: - if child.pos_ not in ['AUX', 'VERB']: - subtree.append(child) - stack.extend(child.children) - # did not find a pair of noun and modifier - if len(subtree) < 2: - continue - - while stack: - node = stack.pop() - if node.dep_ in modifiers or node.dep_ == "conj": - # we don't want to add 'is' or other verbs to the loss, we want their children - if node.pos_ not in ['AUX']: - subtree.append(node) - stack.extend(node.children) - - if subtree: - if w.pos_ not in ['AUX']: - subtree.append(w) - subtrees.append(subtree) - return subtrees - -def calculate_negative_loss( - attention_maps, modifier, noun, subtree_indices, attn_map_idx_to_wp -): - outside_indices = _get_outside_indices(subtree_indices, attn_map_idx_to_wp) - negative_modifier_loss, num_modifier_pairs = _calculate_outside_loss( - attention_maps, modifier, outside_indices - ) - negative_noun_loss, num_noun_pairs = _calculate_outside_loss( - attention_maps, noun, outside_indices - ) - - negative_modifier_loss = -sum(negative_modifier_loss) / len(outside_indices) - negative_noun_loss = -sum(negative_noun_loss) / len(outside_indices) - - negative_loss = (negative_modifier_loss + negative_noun_loss) / 2 - - return negative_loss - -def get_indices(tokenizer, prompt: str) -> Dict[str, int]: - """Utility function to list the indices of the tokens you wish to alter""" - ids = tokenizer(prompt).input_ids - indices = { - i: tok - for tok, i in zip( - tokenizer.convert_ids_to_tokens(ids), range(len(ids)) - ) - } - return indices - -def get_attention_map_index_to_wordpiece(tokenizer, prompt): - attn_map_idx_to_wp = {} - - wordpieces2indices = get_indices(tokenizer, prompt) - - # Ignore `start_token` and `end_token` - for i in list(wordpieces2indices.keys())[1:-1]: - wordpiece = wordpieces2indices[i] - wordpiece = wordpiece.replace("", "") - attn_map_idx_to_wp[i] = wordpiece - - return attn_map_idx_to_wp \ No newline at end of file diff --git a/spaces/Salesforce/EDICT/edict_functions.py 
b/spaces/Salesforce/EDICT/edict_functions.py deleted file mode 100644 index c68fc76f7f474002e6622ef2ee2bebdb7ca37b76..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/edict_functions.py +++ /dev/null @@ -1,997 +0,0 @@ -import torch -from transformers import CLIPModel, CLIPTextModel, CLIPTokenizer -from omegaconf import OmegaConf -import math -import imageio -from PIL import Image -import torchvision -import torch.nn.functional as F -import torch -import numpy as np -from PIL import Image -import time -import datetime -import torch -import sys -import os -from torchvision import datasets -import pickle - - - -# StableDiffusion P2P implementation originally from https://github.com/bloc97/CrossAttentionControl -use_half_prec = True -if use_half_prec: - from my_half_diffusers import AutoencoderKL, UNet2DConditionModel - from my_half_diffusers.schedulers.scheduling_utils import SchedulerOutput - from my_half_diffusers import LMSDiscreteScheduler, PNDMScheduler, DDPMScheduler, DDIMScheduler -else: - from my_diffusers import AutoencoderKL, UNet2DConditionModel - from my_diffusers.schedulers.scheduling_utils import SchedulerOutput - from my_diffusers import LMSDiscreteScheduler, PNDMScheduler, DDPMScheduler, DDIMScheduler -torch_dtype = torch.float16 if use_half_prec else torch.float64 -np_dtype = np.float16 if use_half_prec else np.float64 - - - -import random -from tqdm.auto import tqdm -from torch import autocast -from difflib import SequenceMatcher - -# Build our CLIP model -model_path_clip = "openai/clip-vit-large-patch14" -clip_tokenizer = CLIPTokenizer.from_pretrained(model_path_clip) -clip_model = CLIPModel.from_pretrained(model_path_clip, torch_dtype=torch_dtype) -clip = clip_model.text_model - - -# Getting our HF Auth token -auth_token = os.environ.get('auth_token') -if auth_token is None: - with open('hf_auth', 'r') as f: - auth_token = f.readlines()[0].strip() -model_path_diffusion = "CompVis/stable-diffusion-v1-4" -# Build our SD model -unet = UNet2DConditionModel.from_pretrained(model_path_diffusion, subfolder="unet", use_auth_token=auth_token, revision="fp16", torch_dtype=torch_dtype) -vae = AutoencoderKL.from_pretrained(model_path_diffusion, subfolder="vae", use_auth_token=auth_token, revision="fp16", torch_dtype=torch_dtype) - -# Push to devices w/ double precision -device = 'cuda' -if use_half_prec: - unet.to(device) - vae.to(device) - clip.to(device) -else: - unet.double().to(device) - vae.double().to(device) - clip.double().to(device) -print("Loaded all models") - -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from transformers import AutoFeatureExtractor -# load safety model -safety_model_id = "CompVis/stable-diffusion-safety-checker" -safety_feature_extractor = AutoFeatureExtractor.from_pretrained(safety_model_id) -safety_checker = StableDiffusionSafetyChecker.from_pretrained(safety_model_id) -def load_replacement(x): - try: - hwc = x.shape - y = Image.open("assets/rick.jpeg").convert("RGB").resize((hwc[1], hwc[0])) - y = (np.array(y)/255.0).astype(x.dtype) - assert y.shape == x.shape - return y - except Exception: - return x -def check_safety(x_image): - safety_checker_input = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt") - x_checked_image, has_nsfw_concept = safety_checker(images=x_image, clip_input=safety_checker_input.pixel_values) - assert x_checked_image.shape[0] == len(has_nsfw_concept) - for i in range(len(has_nsfw_concept)): - if has_nsfw_concept[i]: - # x_checked_image[i] = 
load_replacement(x_checked_image[i]) - x_checked_image[i] *= 0 # load_replacement(x_checked_image[i]) - return x_checked_image, has_nsfw_concept - - -def EDICT_editing(im_path, - base_prompt, - edit_prompt, - use_p2p=False, - steps=50, - mix_weight=0.93, - init_image_strength=0.8, - guidance_scale=3, - run_baseline=False, - width=512, height=512): - """ - Main call of our research, performs editing with either EDICT or DDIM - - Args: - im_path: path to image to run on - base_prompt: conditional prompt to deterministically noise with - edit_prompt: desired text conditoining - steps: ddim steps - mix_weight: Weight of mixing layers. - Higher means more consistent generations but divergence in inversion - Lower means opposite - This is fairly tuned and can get good results - init_image_strength: Editing strength. Higher = more dramatic edit. - Typically [0.6, 0.9] is good range. - Definitely tunable per-image/maybe best results are at a different value - guidance_scale: classifier-free guidance scale - 3 I've found is the best for both our method and basic DDIM inversion - Higher can result in more distorted results - run_baseline: - VERY IMPORTANT - True is EDICT, False is DDIM - Output: - PAIR of Images (tuple) - If run_baseline=True then [0] will be edit and [1] will be original - If run_baseline=False then they will be two nearly identical edited versions - """ - # Resize/center crop to 512x512 (Can do higher res. if desired) - if isinstance(im_path, str): - orig_im = load_im_into_format_from_path(im_path) - elif Image.isImageType(im_path): - width, height = im_path.size - - - # add max dim for sake of memory - max_dim = max(width, height) - if max_dim > 1024: - factor = 1024 / max_dim - width *= factor - height *= factor - width = int(width) - height = int(height) - im_path = im_path.resize((width, height)) - - min_dim = min(width, height) - if min_dim < 512: - factor = 512 / min_dim - width *= factor - height *= factor - width = int(width) - height = int(height) - im_path = im_path.resize((width, height)) - - width = width - (width%64) - height = height - (height%64) - - orig_im = im_path # general_crop(im_path, width, height) - else: - orig_im = im_path - - # compute latent pair (second one will be original latent if run_baseline=True) - latents = coupled_stablediffusion(base_prompt, - reverse=True, - init_image=orig_im, - init_image_strength=init_image_strength, - steps=steps, - mix_weight=mix_weight, - guidance_scale=guidance_scale, - run_baseline=run_baseline, - width=width, height=height) - # Denoise intermediate state with new conditioning - gen = coupled_stablediffusion(edit_prompt if (not use_p2p) else base_prompt, - None if (not use_p2p) else edit_prompt, - fixed_starting_latent=latents, - init_image_strength=init_image_strength, - steps=steps, - mix_weight=mix_weight, - guidance_scale=guidance_scale, - run_baseline=run_baseline, - width=width, height=height) - - return gen - - -def img2img_editing(im_path, - edit_prompt, - steps=50, - init_image_strength=0.7, - guidance_scale=3): - """ - Basic SDEdit/img2img, given an image add some noise and denoise with prompt - """ - orig_im = load_im_into_format_from_path(im_path) - - return baseline_stablediffusion(edit_prompt, - init_image_strength=init_image_strength, - steps=steps, - init_image=orig_im, - guidance_scale=guidance_scale) - - -def center_crop(im): - width, height = im.size # Get dimensions - min_dim = min(width, height) - left = (width - min_dim)/2 - top = (height - min_dim)/2 - right = (width + min_dim)/2 - bottom = 
(height + min_dim)/2 - - # Crop the center of the image - im = im.crop((left, top, right, bottom)) - return im - - - -def general_crop(im, target_w, target_h): - width, height = im.size # Get dimensions - min_dim = min(width, height) - left = target_w / 2 # (width - min_dim)/2 - top = target_h / 2 # (height - min_dim)/2 - right = width - (target_w / 2) # (width + min_dim)/2 - bottom = height - (target_h / 2) # (height + min_dim)/2 - - # Crop the center of the image - im = im.crop((left, top, right, bottom)) - return im - - - -def load_im_into_format_from_path(im_path): - return center_crop(Image.open(im_path)).resize((512,512)) - - -#### P2P STUFF #### -def init_attention_weights(weight_tuples): - tokens_length = clip_tokenizer.model_max_length - weights = torch.ones(tokens_length) - - for i, w in weight_tuples: - if i < tokens_length and i >= 0: - weights[i] = w - - - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - module.last_attn_slice_weights = weights.to(device) - if module_name == "CrossAttention" and "attn1" in name: - module.last_attn_slice_weights = None - - -def init_attention_edit(tokens, tokens_edit): - tokens_length = clip_tokenizer.model_max_length - mask = torch.zeros(tokens_length) - indices_target = torch.arange(tokens_length, dtype=torch.long) - indices = torch.zeros(tokens_length, dtype=torch.long) - - tokens = tokens.input_ids.numpy()[0] - tokens_edit = tokens_edit.input_ids.numpy()[0] - - for name, a0, a1, b0, b1 in SequenceMatcher(None, tokens, tokens_edit).get_opcodes(): - if b0 < tokens_length: - if name == "equal" or (name == "replace" and a1-a0 == b1-b0): - mask[b0:b1] = 1 - indices[b0:b1] = indices_target[a0:a1] - - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - module.last_attn_slice_mask = mask.to(device) - module.last_attn_slice_indices = indices.to(device) - if module_name == "CrossAttention" and "attn1" in name: - module.last_attn_slice_mask = None - module.last_attn_slice_indices = None - - -def init_attention_func(): - def new_attention(self, query, key, value, sequence_length, dim): - batch_size_attention = query.shape[0] - hidden_states = torch.zeros( - (batch_size_attention, sequence_length, dim // self.heads), device=query.device, dtype=query.dtype - ) - slice_size = self._slice_size if self._slice_size is not None else hidden_states.shape[0] - for i in range(hidden_states.shape[0] // slice_size): - start_idx = i * slice_size - end_idx = (i + 1) * slice_size - attn_slice = ( - torch.einsum("b i d, b j d -> b i j", query[start_idx:end_idx], key[start_idx:end_idx]) * self.scale - ) - attn_slice = attn_slice.softmax(dim=-1) - - if self.use_last_attn_slice: - if self.last_attn_slice_mask is not None: - new_attn_slice = torch.index_select(self.last_attn_slice, -1, self.last_attn_slice_indices) - attn_slice = attn_slice * (1 - self.last_attn_slice_mask) + new_attn_slice * self.last_attn_slice_mask - else: - attn_slice = self.last_attn_slice - - self.use_last_attn_slice = False - - if self.save_last_attn_slice: - self.last_attn_slice = attn_slice - self.save_last_attn_slice = False - - if self.use_last_attn_weights and self.last_attn_slice_weights is not None: - attn_slice = attn_slice * self.last_attn_slice_weights - self.use_last_attn_weights = False - - attn_slice = torch.einsum("b i j, b j d -> b i d", attn_slice, value[start_idx:end_idx]) - - 
hidden_states[start_idx:end_idx] = attn_slice - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention": - module.last_attn_slice = None - module.use_last_attn_slice = False - module.use_last_attn_weights = False - module.save_last_attn_slice = False - module._attention = new_attention.__get__(module, type(module)) - -def use_last_tokens_attention(use=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - module.use_last_attn_slice = use - -def use_last_tokens_attention_weights(use=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - module.use_last_attn_weights = use - -def use_last_self_attention(use=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn1" in name: - module.use_last_attn_slice = use - -def save_last_tokens_attention(save=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - module.save_last_attn_slice = save - -def save_last_self_attention(save=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn1" in name: - module.save_last_attn_slice = save -#################################### - - -##### BASELINE ALGORITHM, ONLY USED NOW FOR SDEDIT ####3 - -@torch.no_grad() -def baseline_stablediffusion(prompt="", - prompt_edit=None, - null_prompt='', - prompt_edit_token_weights=[], - prompt_edit_tokens_start=0.0, - prompt_edit_tokens_end=1.0, - prompt_edit_spatial_start=0.0, - prompt_edit_spatial_end=1.0, - clip_start=0.0, - clip_end=1.0, - guidance_scale=7, - steps=50, - seed=1, - width=512, height=512, - init_image=None, init_image_strength=0.5, - fixed_starting_latent = None, - prev_image= None, - grid=None, - clip_guidance=None, - clip_guidance_scale=1, - num_cutouts=4, - cut_power=1, - scheduler_str='lms', - return_latent=False, - one_pass=False, - normalize_noise_pred=False): - width = width - width % 64 - height = height - height % 64 - - #If seed is None, randomly select seed from 0 to 2^32-1 - if seed is None: seed = random.randrange(2**32 - 1) - generator = torch.cuda.manual_seed(seed) - - #Set inference timesteps to scheduler - scheduler_dict = {'ddim':DDIMScheduler, - 'lms':LMSDiscreteScheduler, - 'pndm':PNDMScheduler, - 'ddpm':DDPMScheduler} - scheduler_call = scheduler_dict[scheduler_str] - if scheduler_str == 'ddim': - scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, set_alpha_to_one=False) - else: - scheduler = scheduler_call(beta_schedule="scaled_linear", - num_train_timesteps=1000) - - scheduler.set_timesteps(steps) - if prev_image is not None: - prev_scheduler = LMSDiscreteScheduler(beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000) - prev_scheduler.set_timesteps(steps) - - #Preprocess image if it exists (img2img) - if init_image is not None: - init_image = init_image.resize((width, height), resample=Image.Resampling.LANCZOS) - init_image = np.array(init_image).astype(np_dtype) / 255.0 * 2.0 - 1.0 - init_image = 
torch.from_numpy(init_image[np.newaxis, ...].transpose(0, 3, 1, 2)) - - #If there is alpha channel, composite alpha for white, as the diffusion model does not support alpha channel - if init_image.shape[1] > 3: - init_image = init_image[:, :3] * init_image[:, 3:] + (1 - init_image[:, 3:]) - - #Move image to GPU - init_image = init_image.to(device) - - #Encode image - with autocast(device): - init_latent = vae.encode(init_image).latent_dist.sample(generator=generator) * 0.18215 - - t_start = steps - int(steps * init_image_strength) - - else: - init_latent = torch.zeros((1, unet.in_channels, height // 8, width // 8), device=device) - t_start = 0 - - #Generate random normal noise - if fixed_starting_latent is None: - noise = torch.randn(init_latent.shape, generator=generator, device=device, dtype=unet.dtype) - if scheduler_str == 'ddim': - if init_image is not None: - raise notImplementedError - latent = scheduler.add_noise(init_latent, noise, - 1000 - int(1000 * init_image_strength)).to(device) - else: - latent = noise - else: - latent = scheduler.add_noise(init_latent, noise, - t_start).to(device) - else: - latent = fixed_starting_latent - t_start = steps - int(steps * init_image_strength) - - if prev_image is not None: - #Resize and prev_image for numpy b h w c -> torch b c h w - prev_image = prev_image.resize((width, height), resample=Image.Resampling.LANCZOS) - prev_image = np.array(prev_image).astype(np_dtype) / 255.0 * 2.0 - 1.0 - prev_image = torch.from_numpy(prev_image[np.newaxis, ...].transpose(0, 3, 1, 2)) - - #If there is alpha channel, composite alpha for white, as the diffusion model does not support alpha channel - if prev_image.shape[1] > 3: - prev_image = prev_image[:, :3] * prev_image[:, 3:] + (1 - prev_image[:, 3:]) - - #Move image to GPU - prev_image = prev_image.to(device) - - #Encode image - with autocast(device): - prev_init_latent = vae.encode(prev_image).latent_dist.sample(generator=generator) * 0.18215 - - t_start = steps - int(steps * init_image_strength) - - prev_latent = prev_scheduler.add_noise(prev_init_latent, noise, t_start).to(device) - else: - prev_latent = None - - - #Process clip - with autocast(device): - tokens_unconditional = clip_tokenizer(null_prompt, padding="max_length", max_length=clip_tokenizer.model_max_length, truncation=True, return_tensors="pt", return_overflowing_tokens=True) - embedding_unconditional = clip(tokens_unconditional.input_ids.to(device)).last_hidden_state - - tokens_conditional = clip_tokenizer(prompt, padding="max_length", max_length=clip_tokenizer.model_max_length, truncation=True, return_tensors="pt", return_overflowing_tokens=True) - embedding_conditional = clip(tokens_conditional.input_ids.to(device)).last_hidden_state - - #Process prompt editing - assert not ((prompt_edit is not None) and (prev_image is not None)) - if prompt_edit is not None: - tokens_conditional_edit = clip_tokenizer(prompt_edit, padding="max_length", max_length=clip_tokenizer.model_max_length, truncation=True, return_tensors="pt", return_overflowing_tokens=True) - embedding_conditional_edit = clip(tokens_conditional_edit.input_ids.to(device)).last_hidden_state - init_attention_edit(tokens_conditional, tokens_conditional_edit) - elif prev_image is not None: - init_attention_edit(tokens_conditional, tokens_conditional) - - - init_attention_func() - init_attention_weights(prompt_edit_token_weights) - - timesteps = scheduler.timesteps[t_start:] - # print(timesteps) - - assert isinstance(guidance_scale, int) - num_cycles = 1 # guidance_scale + 1 - - 
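# Illustrative aside (not part of the deleted file): how init_image_strength maps to the
# number of denoising steps actually run. This mirrors the t_start computation above
# (t_start = steps - int(steps * init_image_strength)); the step count and strengths here
# are assumed example values, not taken from the source.
steps = 50
for strength in (0.5, 0.7, 0.9):
    t_start = steps - int(steps * strength)
    print(f"strength={strength}: skip {t_start} timesteps, run {steps - t_start}")
# strength=0.5: skip 25 timesteps, run 25
# strength=0.7: skip 15 timesteps, run 35
# strength=0.9: skip 5 timesteps, run 45   (more noise added -> stronger edit)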
last_noise_preds = None - for i, t in tqdm(enumerate(timesteps), total=len(timesteps)): - t_index = t_start + i - - latent_model_input = latent - if scheduler_str=='lms': - sigma = scheduler.sigmas[t_index] # last is first and first is last - latent_model_input = (latent_model_input / ((sigma**2 + 1) ** 0.5)).to(unet.dtype) - else: - assert scheduler_str in ['ddim', 'pndm', 'ddpm'] - - #Predict the unconditional noise residual - - if len(t.shape) == 0: - t = t[None].to(unet.device) - noise_pred_uncond = unet(latent_model_input, t, encoder_hidden_states=embedding_unconditional, - ).sample - - if prev_latent is not None: - prev_latent_model_input = prev_latent - prev_latent_model_input = (prev_latent_model_input / ((sigma**2 + 1) ** 0.5)).to(unet.dtype) - prev_noise_pred_uncond = unet(prev_latent_model_input, t, - encoder_hidden_states=embedding_unconditional, - ).sample - # noise_pred_uncond = unet(latent_model_input, t, - # encoder_hidden_states=embedding_unconditional)['sample'] - - #Prepare the Cross-Attention layers - if prompt_edit is not None or prev_latent is not None: - save_last_tokens_attention() - save_last_self_attention() - else: - #Use weights on non-edited prompt when edit is None - use_last_tokens_attention_weights() - - #Predict the conditional noise residual and save the cross-attention layer activations - if prev_latent is not None: - raise NotImplementedError # I totally lost track of what this is - prev_noise_pred_cond = unet(prev_latent_model_input, t, encoder_hidden_states=embedding_conditional, - ).sample - else: - noise_pred_cond = unet(latent_model_input, t, encoder_hidden_states=embedding_conditional, - ).sample - - #Edit the Cross-Attention layer activations - t_scale = t / scheduler.num_train_timesteps - if prompt_edit is not None or prev_latent is not None: - if t_scale >= prompt_edit_tokens_start and t_scale <= prompt_edit_tokens_end: - use_last_tokens_attention() - if t_scale >= prompt_edit_spatial_start and t_scale <= prompt_edit_spatial_end: - use_last_self_attention() - - #Use weights on edited prompt - use_last_tokens_attention_weights() - - #Predict the edited conditional noise residual using the cross-attention masks - if prompt_edit is not None: - noise_pred_cond = unet(latent_model_input, t, - encoder_hidden_states=embedding_conditional_edit).sample - - #Perform guidance - # if i%(num_cycles)==0: # cycle_i+1==num_cycles: - """ - if cycle_i+1==num_cycles: - noise_pred = noise_pred_uncond - else: - noise_pred = noise_pred_cond - noise_pred_uncond - - """ - if last_noise_preds is not None: - # print( (last_noise_preds[0]*noise_pred_uncond).sum(), (last_noise_preds[1]*noise_pred_cond).sum()) - # print(F.cosine_similarity(last_noise_preds[0].flatten(), noise_pred_uncond.flatten(), dim=0), - # F.cosine_similarity(last_noise_preds[1].flatten(), noise_pred_cond.flatten(), dim=0)) - last_grad= last_noise_preds[1] - last_noise_preds[0] - new_grad = noise_pred_cond - noise_pred_uncond - # print( F.cosine_similarity(last_grad.flatten(), new_grad.flatten(), dim=0)) - last_noise_preds = (noise_pred_uncond, noise_pred_cond) - - use_cond_guidance = True - if use_cond_guidance: - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - else: - noise_pred = noise_pred_uncond - if clip_guidance is not None and t_scale >= clip_start and t_scale <= clip_end: - noise_pred, latent = new_cond_fn(latent, t, t_index, - embedding_conditional, noise_pred,clip_guidance, - clip_guidance_scale, - num_cutouts, - scheduler, unet,use_cutouts=True, - 
cut_power=cut_power) - if normalize_noise_pred: - noise_pred = noise_pred * noise_pred_uncond.norm() / noise_pred.norm() - if scheduler_str == 'ddim': - latent = forward_step(scheduler, noise_pred, - t, - latent).prev_sample - else: - latent = scheduler.step(noise_pred, - t_index, - latent).prev_sample - - if prev_latent is not None: - prev_noise_pred = prev_noise_pred_uncond + guidance_scale * (prev_noise_pred_cond - prev_noise_pred_uncond) - prev_latent = prev_scheduler.step(prev_noise_pred, t_index, prev_latent).prev_sample - if one_pass: break - - #scale and decode the image latents with vae - if return_latent: return latent - latent = latent / 0.18215 - image = vae.decode(latent.to(vae.dtype)).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - image, _ = check_safety(image) - - image = (image[0] * 255).round().astype("uint8") - return Image.fromarray(image) -#################################### - -#### HELPER FUNCTIONS FOR OUR METHOD ##### - -def get_alpha_and_beta(t, scheduler): - # want to run this for both current and previous timnestep - if t.dtype==torch.long: - alpha = scheduler.alphas_cumprod[t] - return alpha, 1-alpha - - if t<0: - return scheduler.final_alpha_cumprod, 1 - scheduler.final_alpha_cumprod - - - low = t.floor().long() - high = t.ceil().long() - rem = t - low - - low_alpha = scheduler.alphas_cumprod[low] - high_alpha = scheduler.alphas_cumprod[high] - interpolated_alpha = low_alpha * rem + high_alpha * (1-rem) - interpolated_beta = 1 - interpolated_alpha - return interpolated_alpha, interpolated_beta - - -# A DDIM forward step function -def forward_step( - self, - model_output, - timestep: int, - sample, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - return_dict: bool = True, - use_double=False, -) : - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - prev_timestep = timestep - self.config.num_train_timesteps / self.num_inference_steps - - if timestep > self.timesteps.max(): - raise NotImplementedError("Need to double check what the overflow is") - - alpha_prod_t, beta_prod_t = get_alpha_and_beta(timestep, self) - alpha_prod_t_prev, _ = get_alpha_and_beta(prev_timestep, self) - - - alpha_quotient = ((alpha_prod_t / alpha_prod_t_prev)**0.5) - first_term = (1./alpha_quotient) * sample - second_term = (1./alpha_quotient) * (beta_prod_t ** 0.5) * model_output - third_term = ((1 - alpha_prod_t_prev)**0.5) * model_output - return first_term - second_term + third_term - -# A DDIM reverse step function, the inverse of above -def reverse_step( - self, - model_output, - timestep: int, - sample, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - return_dict: bool = True, - use_double=False, -) : - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - prev_timestep = timestep - self.config.num_train_timesteps / self.num_inference_steps - - if timestep > self.timesteps.max(): - raise NotImplementedError - else: - alpha_prod_t = self.alphas_cumprod[timestep] - - alpha_prod_t, beta_prod_t = get_alpha_and_beta(timestep, self) - alpha_prod_t_prev, _ = get_alpha_and_beta(prev_timestep, self) - - alpha_quotient = ((alpha_prod_t / alpha_prod_t_prev)**0.5) - - first_term = alpha_quotient * sample - second_term = ((beta_prod_t)**0.5) * 
model_output - third_term = alpha_quotient * ((1 - alpha_prod_t_prev)**0.5) * model_output - return first_term + second_term - third_term - - - - -@torch.no_grad() -def latent_to_image(latent): - image = vae.decode(latent.to(vae.dtype)/0.18215).sample - image = prep_image_for_return(image) - return image - -def prep_image_for_return(image): - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - image = (image[0] * 255).round().astype("uint8") - image = Image.fromarray(image) - return image - -############################# - -##### MAIN EDICT FUNCTION ####### -# Use EDICT_editing to perform calls - -@torch.no_grad() -def coupled_stablediffusion(prompt="", - prompt_edit=None, - null_prompt='', - prompt_edit_token_weights=[], - prompt_edit_tokens_start=0.0, - prompt_edit_tokens_end=1.0, - prompt_edit_spatial_start=0.0, - prompt_edit_spatial_end=1.0, - guidance_scale=7.0, steps=50, - seed=1, width=512, height=512, - init_image=None, init_image_strength=1.0, - run_baseline=False, - use_lms=False, - leapfrog_steps=True, - reverse=False, - return_latents=False, - fixed_starting_latent=None, - beta_schedule='scaled_linear', - mix_weight=0.93): - #If seed is None, randomly select seed from 0 to 2^32-1 - if seed is None: seed = random.randrange(2**32 - 1) - generator = torch.cuda.manual_seed(seed) - - def image_to_latent(im): - if isinstance(im, torch.Tensor): - # assume it's the latent - # used to avoid clipping new generation before inversion - init_latent = im.to(device) - else: - #Resize and transpose for numpy b h w c -> torch b c h w - im = im.resize((width, height), resample=Image.Resampling.LANCZOS) - im = np.array(im).astype(np_dtype) / 255.0 * 2.0 - 1.0 - # check if black and white - if len(im.shape) < 3: - im = np.stack([im for _ in range(3)], axis=2) # putting at end b/c channels - - im = torch.from_numpy(im[np.newaxis, ...].transpose(0, 3, 1, 2)) - - #If there is alpha channel, composite alpha for white, as the diffusion model does not support alpha channel - if im.shape[1] > 3: - im = im[:, :3] * im[:, 3:] + (1 - im[:, 3:]) - - #Move image to GPU - im = im.to(device) - #Encode image - if use_half_prec: - init_latent = vae.encode(im).latent_dist.sample(generator=generator) * 0.18215 - else: - with autocast(device): - init_latent = vae.encode(im).latent_dist.sample(generator=generator) * 0.18215 - return init_latent - assert not use_lms, "Can't invert LMS the same as DDIM" - if run_baseline: leapfrog_steps=False - #Change size to multiple of 64 to prevent size mismatches inside model - width = width - width % 64 - height = height - height % 64 - - - #Preprocess image if it exists (img2img) - if init_image is not None: - assert reverse # want to be performing deterministic noising - # can take either pair (output of generative process) or single image - if isinstance(init_image, list): - if isinstance(init_image[0], torch.Tensor): - init_latent = [t.clone() for t in init_image] - else: - init_latent = [image_to_latent(im) for im in init_image] - else: - init_latent = image_to_latent(init_image) - # this is t_start for forward, t_end for reverse - t_limit = steps - int(steps * init_image_strength) - else: - assert not reverse, 'Need image to reverse from' - init_latent = torch.zeros((1, unet.in_channels, height // 8, width // 8), device=device) - t_limit = 0 - - if reverse: - latent = init_latent - else: - #Generate random normal noise - noise = torch.randn(init_latent.shape, - generator=generator, - device=device, - dtype=torch_dtype) - if 
fixed_starting_latent is None: - latent = noise - else: - if isinstance(fixed_starting_latent, list): - latent = [l.clone() for l in fixed_starting_latent] - else: - latent = fixed_starting_latent.clone() - t_limit = steps - int(steps * init_image_strength) - if isinstance(latent, list): # initializing from pair of images - latent_pair = latent - else: # initializing from noise - latent_pair = [latent.clone(), latent.clone()] - - - if steps==0: - if init_image is not None: - return image_to_latent(init_image) - else: - image = vae.decode(latent.to(vae.dtype) / 0.18215).sample - return prep_image_for_return(image) - - #Set inference timesteps to scheduler - schedulers = [] - for i in range(2): - # num_raw_timesteps = max(1000, steps) - scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, - beta_schedule=beta_schedule, - num_train_timesteps=1000, - clip_sample=False, - set_alpha_to_one=False) - scheduler.set_timesteps(steps) - schedulers.append(scheduler) - - with autocast(device): - # CLIP Text Embeddings - tokens_unconditional = clip_tokenizer(null_prompt, padding="max_length", - max_length=clip_tokenizer.model_max_length, - truncation=True, return_tensors="pt", - return_overflowing_tokens=True) - embedding_unconditional = clip(tokens_unconditional.input_ids.to(device)).last_hidden_state - - tokens_conditional = clip_tokenizer(prompt, padding="max_length", - max_length=clip_tokenizer.model_max_length, - truncation=True, return_tensors="pt", - return_overflowing_tokens=True) - embedding_conditional = clip(tokens_conditional.input_ids.to(device)).last_hidden_state - - #Process prompt editing (if running Prompt-to-Prompt) - if prompt_edit is not None: - tokens_conditional_edit = clip_tokenizer(prompt_edit, padding="max_length", - max_length=clip_tokenizer.model_max_length, - truncation=True, return_tensors="pt", - return_overflowing_tokens=True) - embedding_conditional_edit = clip(tokens_conditional_edit.input_ids.to(device)).last_hidden_state - - init_attention_edit(tokens_conditional, tokens_conditional_edit) - - init_attention_func() - init_attention_weights(prompt_edit_token_weights) - - timesteps = schedulers[0].timesteps[t_limit:] - if reverse: timesteps = timesteps.flip(0) - - for i, t in tqdm(enumerate(timesteps), total=len(timesteps)): - t_scale = t / schedulers[0].num_train_timesteps - - if (reverse) and (not run_baseline): - # Reverse mixing layer - new_latents = [l.clone() for l in latent_pair] - new_latents[1] = (new_latents[1].clone() - (1-mix_weight)*new_latents[0].clone()) / mix_weight - new_latents[0] = (new_latents[0].clone() - (1-mix_weight)*new_latents[1].clone()) / mix_weight - latent_pair = new_latents - - # alternate EDICT steps - for latent_i in range(2): - if run_baseline and latent_i==1: continue # just have one sequence for baseline - # this modifies latent_pair[i] while using - # latent_pair[(i+1)%2] - if reverse and (not run_baseline): - if leapfrog_steps: - # what i would be from going other way - orig_i = len(timesteps) - (i+1) - offset = (orig_i+1) % 2 - latent_i = (latent_i + offset) % 2 - else: - # Do 1 then 0 - latent_i = (latent_i+1)%2 - else: - if leapfrog_steps: - offset = i%2 - latent_i = (latent_i + offset) % 2 - - latent_j = ((latent_i+1) % 2) if not run_baseline else latent_i - - latent_model_input = latent_pair[latent_j] - latent_base = latent_pair[latent_i] - - #Predict the unconditional noise residual - noise_pred_uncond = unet(latent_model_input, t, - encoder_hidden_states=embedding_unconditional).sample - - #Prepare the Cross-Attention 
layers - if prompt_edit is not None: - save_last_tokens_attention() - save_last_self_attention() - else: - #Use weights on non-edited prompt when edit is None - use_last_tokens_attention_weights() - - #Predict the conditional noise residual and save the cross-attention layer activations - noise_pred_cond = unet(latent_model_input, t, - encoder_hidden_states=embedding_conditional).sample - - #Edit the Cross-Attention layer activations - if prompt_edit is not None: - t_scale = t / schedulers[0].num_train_timesteps - if t_scale >= prompt_edit_tokens_start and t_scale <= prompt_edit_tokens_end: - use_last_tokens_attention() - if t_scale >= prompt_edit_spatial_start and t_scale <= prompt_edit_spatial_end: - use_last_self_attention() - - #Use weights on edited prompt - use_last_tokens_attention_weights() - - #Predict the edited conditional noise residual using the cross-attention masks - noise_pred_cond = unet(latent_model_input, - t, - encoder_hidden_states=embedding_conditional_edit).sample - - #Perform guidance - grad = (noise_pred_cond - noise_pred_uncond) - noise_pred = noise_pred_uncond + guidance_scale * grad - - - step_call = reverse_step if reverse else forward_step - new_latent = step_call(schedulers[latent_i], - noise_pred, - t, - latent_base)# .prev_sample - new_latent = new_latent.to(latent_base.dtype) - - latent_pair[latent_i] = new_latent - - if (not reverse) and (not run_baseline): - # Mixing layer (contraction) during generative process - new_latents = [l.clone() for l in latent_pair] - new_latents[0] = (mix_weight*new_latents[0] + (1-mix_weight)*new_latents[1]).clone() - new_latents[1] = ((1-mix_weight)*new_latents[0] + (mix_weight)*new_latents[1]).clone() - latent_pair = new_latents - - #scale and decode the image latents with vae, can return latents instead of images - if reverse or return_latents: - results = [latent_pair] - return results if len(results)>1 else results[0] - - # decode latents to iamges - images = [] - for latent_i in range(2): - latent = latent_pair[latent_i] / 0.18215 - image = vae.decode(latent.to(vae.dtype)).sample - images.append(image) - - # Return images - return_arr = [] - for image in images: - image = prep_image_for_return(image) - return_arr.append(image) - results = [return_arr] - return results if len(results)>1 else results[0] - - diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/models/layers/gca_module.py b/spaces/SankarSrin/image-matting-app/ppmatting/models/layers/gca_module.py deleted file mode 100644 index ba8654efc9bd24de2e127393ad8338d21964e4a5..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/models/layers/gca_module.py +++ /dev/null @@ -1,211 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
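# Illustrative aside (not part of the deleted files): the mixing layer applied during
# generation in coupled_stablediffusion above and the reverse unmixing applied during
# inversion are exact algebraic inverses, which is what lets the pair of latents be
# recovered. A minimal numeric sketch with plain Python floats, assuming the default
# mix_weight of 0.93 and two made-up scalar "latents":
w = 0.93
x0, x1 = 1.25, -0.40                     # stand-ins for the two coupled latents

# generation-time contraction (mixing layer), in the same order as the deleted code
m0 = w * x0 + (1 - w) * x1
m1 = (1 - w) * m0 + w * x1

# inversion-time expansion (reverse mixing layer), applied in the opposite order
r1 = (m1 - (1 - w) * m0) / w
r0 = (m0 - (1 - w) * r1) / w

print(r0, r1)                            # recovers 1.25, -0.40 up to float precision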
- -# The gca code was heavily based on https://github.com/Yaoyi-Li/GCA-Matting -# and https://github.com/open-mmlab/mmediting - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F - -from paddleseg.cvlibs import param_init - - -class GuidedCxtAtten(nn.Layer): - def __init__(self, - out_channels, - guidance_channels, - kernel_size=3, - stride=1, - rate=2): - super().__init__() - - self.kernel_size = kernel_size - self.rate = rate - self.stride = stride - self.guidance_conv = nn.Conv2D( - in_channels=guidance_channels, - out_channels=guidance_channels // 2, - kernel_size=1) - - self.out_conv = nn.Sequential( - nn.Conv2D( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - bias_attr=False), - nn.BatchNorm(out_channels)) - - self.init_weight() - - def init_weight(self): - param_init.xavier_uniform(self.guidance_conv.weight) - param_init.constant_init(self.guidance_conv.bias, value=0.0) - param_init.xavier_uniform(self.out_conv[0].weight) - param_init.constant_init(self.out_conv[1].weight, value=1e-3) - param_init.constant_init(self.out_conv[1].bias, value=0.0) - - def forward(self, img_feat, alpha_feat, unknown=None, softmax_scale=1.): - - img_feat = self.guidance_conv(img_feat) - img_feat = F.interpolate( - img_feat, scale_factor=1 / self.rate, mode='nearest') - - # process unknown mask - unknown, softmax_scale = self.process_unknown_mask(unknown, img_feat, - softmax_scale) - - img_ps, alpha_ps, unknown_ps = self.extract_feature_maps_patches( - img_feat, alpha_feat, unknown) - - self_mask = self.get_self_correlation_mask(img_feat) - - # split tensors by batch dimension; tuple is returned - img_groups = paddle.split(img_feat, 1, axis=0) - img_ps_groups = paddle.split(img_ps, 1, axis=0) - alpha_ps_groups = paddle.split(alpha_ps, 1, axis=0) - unknown_ps_groups = paddle.split(unknown_ps, 1, axis=0) - scale_groups = paddle.split(softmax_scale, 1, axis=0) - groups = (img_groups, img_ps_groups, alpha_ps_groups, unknown_ps_groups, - scale_groups) - - y = [] - - for img_i, img_ps_i, alpha_ps_i, unknown_ps_i, scale_i in zip(*groups): - # conv for compare - similarity_map = self.compute_similarity_map(img_i, img_ps_i) - - gca_score = self.compute_guided_attention_score( - similarity_map, unknown_ps_i, scale_i, self_mask) - - yi = self.propagate_alpha_feature(gca_score, alpha_ps_i) - - y.append(yi) - - y = paddle.concat(y, axis=0) # back to the mini-batch - y = paddle.reshape(y, alpha_feat.shape) - - y = self.out_conv(y) + alpha_feat - - return y - - def extract_feature_maps_patches(self, img_feat, alpha_feat, unknown): - - # extract image feature patches with shape: - # (N, img_h*img_w, img_c, img_ks, img_ks) - img_ks = self.kernel_size - img_ps = self.extract_patches(img_feat, img_ks, self.stride) - - # extract alpha feature patches with shape: - # (N, img_h*img_w, alpha_c, alpha_ks, alpha_ks) - alpha_ps = self.extract_patches(alpha_feat, self.rate * 2, self.rate) - - # extract unknown mask patches with shape: (N, img_h*img_w, 1, 1) - unknown_ps = self.extract_patches(unknown, img_ks, self.stride) - unknown_ps = unknown_ps.squeeze(axis=2) # squeeze channel dimension - unknown_ps = unknown_ps.mean(axis=[2, 3], keepdim=True) - - return img_ps, alpha_ps, unknown_ps - - def extract_patches(self, x, kernel_size, stride): - n, c, _, _ = x.shape - x = self.pad(x, kernel_size, stride) - x = F.unfold(x, [kernel_size, kernel_size], strides=[stride, stride]) - x = paddle.transpose(x, (0, 2, 1)) - x = paddle.reshape(x, (n, -1, c, kernel_size, kernel_size)) - - return x 
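# Illustrative aside (not part of the deleted file): the shapes produced by
# extract_patches above. The deleted module uses Paddle; this sketch uses the PyTorch
# equivalents purely to show the bookkeeping. With stride 1 and reflect padding, one
# k x k patch is gathered per spatial location, giving (N, H*W, C, k, k).
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, 16, 16)                      # (N, C, H, W) guidance features
k, stride = 3, 1
left, right = (k - stride + 1) // 2, (k - stride) // 2
x_pad = F.pad(x, (left, right, left, right), mode="reflect")
patches = F.unfold(x_pad, kernel_size=k, stride=stride)    # (N, C*k*k, H*W)
patches = patches.transpose(1, 2).reshape(2, -1, 8, k, k)
print(patches.shape)                               # torch.Size([2, 256, 8, 3, 3])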
- - def pad(self, x, kernel_size, stride): - left = (kernel_size - stride + 1) // 2 - right = (kernel_size - stride) // 2 - pad = (left, right, left, right) - return F.pad(x, pad, mode='reflect') - - def compute_guided_attention_score(self, similarity_map, unknown_ps, scale, - self_mask): - # scale the correlation with predicted scale factor for known and - # unknown area - unknown_scale, known_scale = scale[0] - out = similarity_map * ( - unknown_scale * paddle.greater_than(unknown_ps, - paddle.to_tensor([0.])) + - known_scale * paddle.less_equal(unknown_ps, paddle.to_tensor([0.]))) - # mask itself, self-mask only applied to unknown area - out = out + self_mask * unknown_ps - gca_score = F.softmax(out, axis=1) - - return gca_score - - def propagate_alpha_feature(self, gca_score, alpha_ps): - - alpha_ps = alpha_ps[0] # squeeze dim 0 - if self.rate == 1: - gca_score = self.pad(gca_score, kernel_size=2, stride=1) - alpha_ps = paddle.transpose(alpha_ps, (1, 0, 2, 3)) - out = F.conv2d(gca_score, alpha_ps) / 4. - else: - out = F.conv2d_transpose( - gca_score, alpha_ps, stride=self.rate, padding=1) / 4. - - return out - - def compute_similarity_map(self, img_feat, img_ps): - img_ps = img_ps[0] # squeeze dim 0 - # convolve the feature to get correlation (similarity) map - img_ps_normed = img_ps / paddle.clip(self.l2_norm(img_ps), 1e-4) - img_feat = F.pad(img_feat, (1, 1, 1, 1), mode='reflect') - similarity_map = F.conv2d(img_feat, img_ps_normed) - - return similarity_map - - def get_self_correlation_mask(self, img_feat): - _, _, h, w = img_feat.shape - self_mask = F.one_hot( - paddle.reshape(paddle.arange(h * w), (h, w)), - num_classes=int(h * w)) - - self_mask = paddle.transpose(self_mask, (2, 0, 1)) - self_mask = paddle.reshape(self_mask, (1, h * w, h, w)) - - return self_mask * (-1e4) - - def process_unknown_mask(self, unknown, img_feat, softmax_scale): - - n, _, h, w = img_feat.shape - - if unknown is not None: - unknown = unknown.clone() - unknown = F.interpolate( - unknown, scale_factor=1 / self.rate, mode='nearest') - unknown_mean = unknown.mean(axis=[2, 3]) - known_mean = 1 - unknown_mean - unknown_scale = paddle.clip( - paddle.sqrt(unknown_mean / known_mean), 0.1, 10) - known_scale = paddle.clip( - paddle.sqrt(known_mean / unknown_mean), 0.1, 10) - softmax_scale = paddle.concat([unknown_scale, known_scale], axis=1) - else: - unknown = paddle.ones([n, 1, h, w]) - softmax_scale = paddle.reshape( - paddle.to_tensor([softmax_scale, softmax_scale]), (1, 2)) - softmax_scale = paddle.expand(softmax_scale, (n, 2)) - - return unknown, softmax_scale - - @staticmethod - def l2_norm(x): - x = x**2 - x = x.sum(axis=[1, 2, 3], keepdim=True) - return paddle.sqrt(x) diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/losses.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss 
= torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Shawn37/UTR_LM/esm/model/esm2_only_secondarystructure.py b/spaces/Shawn37/UTR_LM/esm/model/esm2_only_secondarystructure.py deleted file mode 100644 index 2dae4912a55f6ee6491cbafb0fefbbf6a64f382f..0000000000000000000000000000000000000000 --- a/spaces/Shawn37/UTR_LM/esm/model/esm2_only_secondarystructure.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Union -import torch -import torch.nn as nn - -import esm -from esm.modules import ContactPredictionHead, ESM1bLayerNorm, RobertaLMHead, TransformerLayer - - -class ESM2(nn.Module): - def __init__( - self, - num_layers: int = 33, - embed_dim: int = 1280, - attention_heads: int = 20, - alphabet: Union[esm.data.Alphabet, str] = "ESM-1b", - token_dropout: bool = True, - ): - super().__init__() - self.num_layers = num_layers - self.embed_dim = embed_dim - self.attention_heads = attention_heads - if not isinstance(alphabet, esm.data.Alphabet): - alphabet = esm.data.Alphabet.from_architecture(alphabet) - self.alphabet = alphabet - self.alphabet_size = len(alphabet) - self.padding_idx = alphabet.padding_idx - self.mask_idx = alphabet.mask_idx - self.cls_idx = alphabet.cls_idx - self.eos_idx = alphabet.eos_idx - self.prepend_bos = alphabet.prepend_bos - self.append_eos = alphabet.append_eos - self.token_dropout = token_dropout - - self._init_submodules() - - def _init_submodules(self): - self.embed_scale = 1 - self.embed_tokens = nn.Embedding( - self.alphabet_size, - self.embed_dim, - padding_idx=self.padding_idx, - ) - - self.layers = nn.ModuleList( - [ - TransformerLayer( - self.embed_dim, - 4 * self.embed_dim, - self.attention_heads, - add_bias_kv=False, - use_esm1b_layer_norm=True, - use_rotary_embeddings=True, - ) - for _ in range(self.num_layers) - ] - ) - - self.contact_head = ContactPredictionHead( - self.num_layers * self.attention_heads, - self.prepend_bos, - self.append_eos, - eos_idx=self.eos_idx, - ) - self.emb_layer_norm_after = ESM1bLayerNorm(self.embed_dim) - - self.lm_head = RobertaLMHead( - embed_dim=self.embed_dim, - output_dim=self.alphabet_size, - weight=self.embed_tokens.weight, - ) -# self.supervised_linear = nn.Linear(self.embed_dim, 1) - self.structure_linear = nn.Linear(self.embed_dim, 3) - def forward(self, tokens, repr_layers=[], need_head_weights=True, return_contacts=True, return_representation=True, return_attentions_symm = False, return_attentions = False): - if return_contacts: - need_head_weights = True - - assert tokens.ndim == 2 - padding_mask = tokens.eq(self.padding_idx) # B, T - - x = self.embed_scale * self.embed_tokens(tokens) - - if 
self.token_dropout: - x.masked_fill_((tokens == self.mask_idx).unsqueeze(-1), 0.0) - #print(f'tokens = {tokens}') - #print(f'self.mask_idx = {self.mask_idx}') - #print('x.shape = ', x.shape) - # x: B x T x C - mask_ratio_train = 0.15 * 0.8 - src_lengths = (~padding_mask).sum(-1) - #print(f'mask_ratio_train = {mask_ratio_train}') - #print(f'padding_mask = {padding_mask}') - #print(f'src_lengths = {src_lengths}') - mask_ratio_observed = (tokens == self.mask_idx).sum(-1).to(x.dtype) / src_lengths - #print('mask_ratio_observed = ',mask_ratio_observed) - x = x * (1 - mask_ratio_train) / (1 - mask_ratio_observed)[:, None, None] - #print(f'x.shape = {x.shape}:\n', x) - if padding_mask is not None: - x = x * (1 - padding_mask.unsqueeze(-1).type_as(x)) - #print(f'x.shape = {x.shape}:\n', x) - repr_layers = set(repr_layers) - hidden_representations = {} - if 0 in repr_layers: - hidden_representations[0] = x - - if need_head_weights: - attn_weights = [] - - # (B, T, E) => (T, B, E) - x = x.transpose(0, 1) - - if not padding_mask.any(): - padding_mask = None - - for layer_idx, layer in enumerate(self.layers): - x, attn = layer( - x, - self_attn_padding_mask=padding_mask, - need_head_weights=need_head_weights, - ) - if (layer_idx + 1) in repr_layers: - hidden_representations[layer_idx + 1] = x.transpose(0, 1) - if need_head_weights: - # (H, B, T, T) => (B, H, T, T) - attn_weights.append(attn.transpose(1, 0)) -# print(x.shape) # 73, 2, 1280 - x = self.emb_layer_norm_after(x) - x = x.transpose(0, 1) # (T, B, E) => (B, T, E) - - # last hidden representation should have layer norm applied - if (layer_idx + 1) in repr_layers: - hidden_representations[layer_idx + 1] = x -# x_supervised = self.supervised_linear(x[:,0,:]) - x_structure = self.structure_linear(x) - x = self.lm_head(x) - - if return_representation: - result = {"logits": x, "logits_structure": x_structure, "representations": hidden_representations} - else: - result = {"logits": x, "logits_structure": x_structure} - if need_head_weights: - # attentions: B x L x H x T x T - attentions = torch.stack(attn_weights, 1) - if padding_mask is not None: - attention_mask = 1 - padding_mask.type_as(attentions) - attention_mask = attention_mask.unsqueeze(1) * attention_mask.unsqueeze(2) - attentions = attentions * attention_mask[:, None, None, :, :] - if return_attentions: result["attentions"] = attentions - if return_contacts: - attentions_symm, contacts = self.contact_head(tokens, attentions) - result["contacts"] = contacts - if return_attentions_symm: result["attentions_symm"] = attentions_symm - - return result - - def predict_contacts(self, tokens): - return self(tokens, return_contacts=True)["contacts"] - - def predict_symmetric_attentions(self, tokens): - return self(tokens, return_contacts=True)["attentions_symm"] - - def predict_attentions(self, tokens): - return self(tokens, need_head_weights=True)["attentions"] - - def predict_representations(self, tokens): - return self(tokens, return_representation=True)['representations'] - - def predict_logits(self, tokens): - return self(tokens)['logits'] - -# def predict_logits_supervised(self, tokens): -# return self(tokens)['logits_supervised'] - - def predict_logits_structure(self, tokens): - return self(tokens)['logits_structure'] diff --git a/spaces/SpacesExamples/fastapi_dummy/Dockerfile b/spaces/SpacesExamples/fastapi_dummy/Dockerfile deleted file mode 100644 index b742a1870b92ce033b776c0defec1a9996889d50..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/fastapi_dummy/Dockerfile +++ 
/dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/Sriharsha6902/Chat-Analyser/senti.py b/spaces/Sriharsha6902/Chat-Analyser/senti.py deleted file mode 100644 index 34a529b2e56d681de33f291b309df94149a7789b..0000000000000000000000000000000000000000 --- a/spaces/Sriharsha6902/Chat-Analyser/senti.py +++ /dev/null @@ -1,21 +0,0 @@ -import warnings -warnings.filterwarnings('ignore') -import numpy as np -from keras_preprocessing.text import Tokenizer -from keras_preprocessing.sequence import pad_sequences -import keras.models -max_words = 5000 -max_len = 200 -def sentiment_analysis(test): - data=np.load('data.npy') - tokenizer = Tokenizer(num_words=max_words) - tokenizer.fit_on_texts(data) - best_model = keras.models.load_model("Sentiment_analysis_BiLSTM.hdf5") - sentiment = [0,-1,1] - sent=[] - for i in test['message']: - sequence = tokenizer.texts_to_sequences([i]) - sentest = pad_sequences(sequence, maxlen=200) - sent.append(sentiment[np.around(best_model.predict(sentest), decimals=0).argmax(axis=1)[0]]) - test['value']=sent - return test \ No newline at end of file diff --git a/spaces/StatsByZach/app/on_ice_xgfp.py b/spaces/StatsByZach/app/on_ice_xgfp.py deleted file mode 100644 index 3e82e8c7633c7a6720a73c82d7bb86fd7e74318d..0000000000000000000000000000000000000000 --- a/spaces/StatsByZach/app/on_ice_xgfp.py +++ /dev/null @@ -1,265 +0,0 @@ -##### gsax_leaderboard.,py ##### - -# Import modules -from shiny import * -import shinyswatch -import plotly.express as px -from shinywidgets import output_widget, render_widget -import pandas as pd -from configure import base_url - -# Paths to data -onice = "data/on_ice_xg.csv" -df = pd.read_csv(onice) -def server(input,output,session): - @output - @render.table - def table(): - df = pd.read_csv(onice) - if input.z() == "T": - asc = True - else: - asc = False - - if input.strength()=="even": - df = df[(df['Team']==input.x())&(df['EV_TOI']>=input.toi())] - if input.y() == "xGF%": - df = df[['Player','EV_TOI','EV_xGF%']].sort_values(by='EV_xGF%',ascending=asc).round(3) - elif input.y() == 'TOI': - df = df[['Player','EV_TOI','EV_xGF%']].sort_values(by='EV_TOI',ascending=asc).round(3) - else: - df = df[['Player','EV_TOI','EV_xGF%']].sort_values(by=input.y(),ascending=asc).round(3) - elif input.strength()=="_5v5": - df = df[(df['Team']==input.x())&(df['5v5_TOI']>=input.toi())] - if input.y() == "xGF%": - df = df[['Player','5v5_TOI','5v5_xGF%']].sort_values(by='5v5_xGF%',ascending=asc).round(3) - elif input.y() == 'TOI': - df = df[['Player','5v5_TOI','5v5_xGF%']].sort_values(by='5v5_TOI',ascending=asc).round(3) - else: - df = df[['Player','5v5_TOI','5v5_xGF%']].sort_values(by=input.y(),ascending=asc).round(3) - else: - df = df[(df['Team']==input.x())&(df['ALL_TOI']>=input.toi())] - if input.y() == "xGF%": - df = df[['Player','ALL_TOI','ALL_xGF%']].sort_values(by='ALL_xGF%',ascending=asc).round(3) - elif input.y() == 'TOI': - df = df[['Player','ALL_TOI','ALL_xGF%']].sort_values(by='ALL_TOI',ascending=asc).round(3) - else: - df = df[['Player','ALL_TOI','ALL_xGF%']].sort_values(by=input.y(),ascending=asc).round(3) - return df - - @output - @render_widget - def my_widget(): - df = pd.read_csv(onice) - team = input.x() - if input.strength()=="even": - title_strength = "Even Strength" - title_toi = "EV" - 
x_col = "EV_xGF%" - x_title = "Even Strength xGF%" - color_for_chart = "EV_TOI" - data = df[(df['Team']==team)&(df['EV_TOI']>=input.toi())] - elif input.strength()=="_5v5": - title_strength="5v5" - title_toi="5v5" - x_col = "5v5_xGF%" - x_title = "5v5 xGF%" - color_for_chart="5v5_TOI" - data = df[(df['Team']==team)&(df['5v5_TOI']>=input.toi())] - else: - title_strength="All Situation" - title_toi="All" - x_col = "ALL_xGF%" - x_title = "All Situation xGF%" - color_for_chart="ALL_TOI" - data = df[(df['Team']==team)&(df['ALL_TOI']>=input.toi())] - data = data.sort_values(by=x_col,ascending=True) - data['str'] = data[x_col].round(4) - data['str'] = data['str'].map('{:,.2f}%'.format) - color_discrete_sequence = ['#617296']*len(data) - fig = px.bar(data, x=x_col, y="Player",text=('str'),height=1050,width=1050,template="plotly_dark",color=color_for_chart,color_continuous_scale=["#ffffff","#195293"]) - fig.update_layout(plot_bgcolor="#222222",paper_bgcolor="#222222") - fig.update_traces(marker_line_color='#FFFFFF', - marker_line_width=1.5) - fig.update_layout( - title=(input.x()+ " Skaters "+ title_strength +" On-Ice xGF%
          "+ - "2023-24 NHL Regular Season
          "+ - "Minimum " + str(input.toi()) + " " + title_toi + " TOI"), - margin=dict(r=20, l=40, b=100, t=90),) - fig.update_xaxes(range=[0, 100]) - fig.update_xaxes(tickvals=[0,25,50,75,100],ticktext=['0%','25%','50%','75%','100%']) - fig.add_annotation( - text = ("Data: @StatsByZach on Twitter") - , showarrow=False - , x = .80 - , y = -.045 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=11, color="white") - , align="left" - ) - fig.update_layout(xaxis_title=x_title) - return fig - - @reactive.Effect - def _(): - val = input.quant() - - if input.strength()=="even": - calc = "EV_TOI" - elif input.strength()=="_5v5": - calc = "5v5_TOI" - else: - calc = "ALL_TOI" - - if val == "_25": - q= round(df[calc].quantile(.25),1) - elif val == "_50": - q= round(df[calc].quantile(.5),1) - elif val == "_75": - q=round(df[calc].quantile(.75),1) - else: - q=0 - ui.update_slider( - "toi", value=q - ) - - @reactive.Effect - def _2(): - btn = input.btn() - if btn % 2 == 1: - tab = ui.output_table("table") - ui.insert_ui( - ui.div({"id": "inserted-slider"},ui.tags.h5("Sort Table by", class_="app-heading"),ui.input_select("y","",{"Player":"Player","TOI":"TOI","xGF%":"xGF%",}), - ui.input_radio_buttons( - "z", "", {"F": "High to Low", "T": "Low to High"} - ),ui.output_table("table")), - selector="#main-content", - where="beforeEnd", - ) - elif btn > 0: - ui.remove_ui("#inserted-slider") - -on_ice_xgfp = App(ui.page_fluid( - ui.tags.base(href=base_url), - ui.tags.div( - {"style": "width:75%;margin: 0 auto"}, - ui.tags.style( - """ - h4 { - margin-top: 1em;font-size:35px; - } - h2{ - font-size:25px; - } - """ - ), - shinyswatch.theme.darkly(), - ui.tags.h4("Stats By Zach"), - ui.tags.i("A website for hockey analytics"), - ui.navset_tab( - ui.nav_control( - ui.a( - "Home", - href="home/" - ), - ), - ui.nav_menu( - "Skater Charts", - ui.nav_control( - ui.a( - "On-Ice xG Rates", - href="skater-xg-rates/" - ), - ui.a( - "On-Ice xGF%", - href="skater-xg-percentages/" - ), - ), - ), - ui.nav_menu( - "Goalie Charts", - ui.nav_control( - ui.a( - "GSAx Timeline", - href="gsax-timeline/" - ), - ui.a( - "GSAx Leaderboard", - href="gsax-leaderboard/" - ), - ui.a( - "GSAx Comparison", - href="gsax-comparison/" - ) - ), - ), - ui.nav_menu( - "Team Charts", - ui.nav_control( - ui.a( - "Team xG Rates", - href="team-xg-rates/" - ), - ), - ),ui.nav_control( - ui.a( - "Games", - href="games/" - ), - ), - ui.nav_control( - ui.a( - "About", - href="about/" - ), - )),ui.row( - ui.column(3,ui.tags.br(),ui.tags.h2("On-Ice xGF%"),ui.tags.h5("Team", class_="app-heading"), - ui.input_select("x", "", {"ANA": "Anaheim Ducks", - "ARI": "Arizona Coyotes", - "BOS": "Boston Bruins", - "BUF": "Buffalo Sabres", - "CGY": "Calgary Flames", - "CAR": "Carolina Hurricanes", - "CHI": "Chicago Blackhawks", - "COL": "Colorado Avalanche", - "CBJ": "Columbus Blue Jackets", - "DAL": "Dallas Stars", - "DET": "Detroit Red Wings", - "EDM": "Edmonton Oilers", - "FLA": "Florida Panthers", - "L.A": "Los Angeles Kings", - "MIN": "Minnesota Wild", - "MTL": "Montreal Canadiens", - "NSH": "Nashville Predators", - "N.J": "New Jersey Devils", - "NYI": "New York Islanders", - "NYR": "New York Rangers", - "OTT": "Ottawa Senators", - "PHI": "Philadelphia Flyers", - "PIT": "Pittsburgh Penguins", - "S.J": "San Jose Sharks", - "SEA":"Seattle Kraken", - "STL": "St. 
Louis Blues", - "T.B": "Tampa Bay Lightning", - "TOR": "Toronto Maple Leafs", - "VAN": "Vancouver Canucks", - "VGK": "Vegas Golden Knights", - "WSH": "Washington Capitals", - "WPG": "Winnipeg Jets"}),ui.tags.h5("Strength", class_="app-heading"),ui.input_select("strength", "",{'even':"Even",'_5v5':"5v5",'All':"All Strengths"}), - ui.tags.h5("Minimum TOI", class_="app-heading"), - ui.input_slider("toi", "", min=0, max=round(df['ALL_TOI'].max(),0), value=round(df['EV_TOI'].quantile(.25),1)),ui.tags.h5("TOI Percentile (among all NHL skaters)", class_="app-heading"),ui.input_radio_buttons( - "quant", - "", - { - "_25": "Top 75%", - "_50": "Top 50%", - "_75": "Top 25%", - }, - ),ui.input_action_button("btn", "Toggle Table"),ui.div({"id":"main-content"}),), - ui.column(9,output_widget("my_widget") - )),)),server) \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export.py deleted file mode 100644 index 28b214017d9ac23934b67e8254a96131cefa6501..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf -import torch - -from audiocraft import __version__ - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]): - """Export only the best state from the given EnCodec checkpoint. This - should be used if you trained your own EnCodec model. - """ - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - 'version': __version__, - 'exported': True, - } - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(new_pkg, out_file) - return out_file - - -def export_pretrained_compression_model(pretrained_encodec: str, out_file: tp.Union[Path, str]): - """Export a compression model (potentially EnCodec) from a pretrained model. - This is required for packaging the audio tokenizer along a MusicGen or AudioGen model. - Do not include the //pretrained/ prefix. For instance if you trained a model - with `facebook/encodec_32khz`, just put that as a name. Same for `dac_44khz`. - - In that case, this will not actually include a copy of the model, simply the reference - to the model used. - """ - if Path(pretrained_encodec).exists(): - pkg = torch.load(pretrained_encodec) - assert 'best_state' in pkg - assert 'xp.cfg' in pkg - assert 'version' in pkg - assert 'exported' in pkg - else: - pkg = { - 'pretrained': pretrained_encodec, - 'exported': True, - 'version': __version__, - } - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(pkg, out_file) - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]): - """Export only the best state from the given MusicGen or AudioGen checkpoint. 
- """ - pkg = torch.load(checkpoint_path, 'cpu') - if pkg['fsdp_best_state']: - best_state = pkg['fsdp_best_state']['model'] - else: - assert pkg['best_state'] - best_state = pkg['best_state']['model'] - new_pkg = { - 'best_state': best_state, - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - 'version': __version__, - 'exported': True, - } - - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/BdfFontFile.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/BdfFontFile.py deleted file mode 100644 index 075d462907abcace9610a686052e643582602a8f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/BdfFontFile.py +++ /dev/null @@ -1,122 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# bitmap distribution font (bdf) file parser -# -# history: -# 1996-05-16 fl created (as bdf2pil) -# 1997-08-25 fl converted to FontFile driver -# 2001-05-25 fl removed bogus __init__ call -# 2002-11-20 fl robustification (from Kevin Cazabon, Dmitry Vasiliev) -# 2003-04-22 fl more robustification (from Graham Dumpleton) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -""" -Parse X Bitmap Distribution Format (BDF) -""" - - -from . import FontFile, Image - -bdf_slant = { - "R": "Roman", - "I": "Italic", - "O": "Oblique", - "RI": "Reverse Italic", - "RO": "Reverse Oblique", - "OT": "Other", -} - -bdf_spacing = {"P": "Proportional", "M": "Monospaced", "C": "Cell"} - - -def bdf_char(f): - # skip to STARTCHAR - while True: - s = f.readline() - if not s: - return None - if s[:9] == b"STARTCHAR": - break - id = s[9:].strip().decode("ascii") - - # load symbol properties - props = {} - while True: - s = f.readline() - if not s or s[:6] == b"BITMAP": - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - - # load bitmap - bitmap = [] - while True: - s = f.readline() - if not s or s[:7] == b"ENDCHAR": - break - bitmap.append(s[:-1]) - bitmap = b"".join(bitmap) - - # The word BBX - # followed by the width in x (BBw), height in y (BBh), - # and x and y displacement (BBxoff0, BByoff0) - # of the lower left corner from the origin of the character. - width, height, x_disp, y_disp = [int(p) for p in props["BBX"].split()] - - # The word DWIDTH - # followed by the width in x and y of the character in device pixels. 
- dwx, dwy = [int(p) for p in props["DWIDTH"].split()] - - bbox = ( - (dwx, dwy), - (x_disp, -y_disp - height, width + x_disp, -y_disp), - (0, 0, width, height), - ) - - try: - im = Image.frombytes("1", (width, height), bitmap, "hex", "1") - except ValueError: - # deal with zero-width characters - im = Image.new("1", (width, height)) - - return id, int(props["ENCODING"]), bbox, im - - -class BdfFontFile(FontFile.FontFile): - """Font file plugin for the X11 BDF format.""" - - def __init__(self, fp): - super().__init__() - - s = fp.readline() - if s[:13] != b"STARTFONT 2.1": - msg = "not a valid BDF file" - raise SyntaxError(msg) - - props = {} - comments = [] - - while True: - s = fp.readline() - if not s or s[:13] == b"ENDPROPERTIES": - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - if s[:i] in [b"COMMENT", b"COPYRIGHT"]: - if s.find(b"LogicalFontDescription") < 0: - comments.append(s[i + 1 : -1].decode("ascii")) - - while True: - c = bdf_char(fp) - if not c: - break - id, ch, (xy, dst, src), im = c - if 0 <= ch < len(self.glyph): - self.glyph[ch] = xy, dst, src, im diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/disasm.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/disasm.py deleted file mode 100644 index 230e3314ade2828e6d6abef213c7b2e8422dbd3c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/disasm.py +++ /dev/null @@ -1,722 +0,0 @@ -#!~/.wine/drive_c/Python25/python.exe -# -*- coding: utf-8 -*- - -# Copyright (c) 2009-2014, Mario Vilas -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice,this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the copyright holder nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -""" -Binary code disassembly. 
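A hedged usage sketch for the PIL BdfFontFile plugin deleted above: the font path is hypothetical, and `save()` comes from the `FontFile` base class, which writes the `.pil`/`.pbm` pair Pillow uses for bitmap fonts.

```python
from PIL import BdfFontFile

# Convert an X11 BDF bitmap font into PIL's .pil/.pbm font format.
with open("courier-12.bdf", "rb") as fp:   # hypothetical BDF file
    font = BdfFontFile.BdfFontFile(fp)
font.save("courier-12")                    # writes courier-12.pil and courier-12.pbm
```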
- -@group Disassembler loader: - Disassembler, Engine - -@group Disassembler engines: - BeaEngine, CapstoneEngine, DistormEngine, - LibdisassembleEngine, PyDasmEngine -""" - -from __future__ import with_statement - -__revision__ = "$Id$" - -__all__ = [ - 'Disassembler', - 'Engine', - 'BeaEngine', - 'CapstoneEngine', - 'DistormEngine', - 'LibdisassembleEngine', - 'PyDasmEngine', -] - -from winappdbg.textio import HexDump -from winappdbg import win32 - -import ctypes -import warnings - -# lazy imports -BeaEnginePython = None -distorm3 = None -pydasm = None -libdisassemble = None -capstone = None - -#============================================================================== - -class Engine (object): - """ - Base class for disassembly engine adaptors. - - @type name: str - @cvar name: Engine name to use with the L{Disassembler} class. - - @type desc: str - @cvar desc: User friendly name of the disassembler engine. - - @type url: str - @cvar url: Download URL. - - @type supported: set(str) - @cvar supported: Set of supported processor architectures. - For more details see L{win32.version._get_arch}. - - @type arch: str - @ivar arch: Name of the processor architecture. - """ - - name = "" - desc = "" - url = "" - supported = set() - - def __init__(self, arch = None): - """ - @type arch: str - @param arch: Name of the processor architecture. - If not provided the current processor architecture is assumed. - For more details see L{win32.version._get_arch}. - - @raise NotImplementedError: This disassembler doesn't support the - requested processor architecture. - """ - self.arch = self._validate_arch(arch) - try: - self._import_dependencies() - except ImportError: - msg = "%s is not installed or can't be found. Download it from: %s" - msg = msg % (self.name, self.url) - raise NotImplementedError(msg) - - def _validate_arch(self, arch = None): - """ - @type arch: str - @param arch: Name of the processor architecture. - If not provided the current processor architecture is assumed. - For more details see L{win32.version._get_arch}. - - @rtype: str - @return: Name of the processor architecture. - If not provided the current processor architecture is assumed. - For more details see L{win32.version._get_arch}. - - @raise NotImplementedError: This disassembler doesn't support the - requested processor architecture. - """ - - # Use the default architecture if none specified. - if not arch: - arch = win32.arch - - # Validate the architecture. - if arch not in self.supported: - msg = "The %s engine cannot decode %s code." - msg = msg % (self.name, arch) - raise NotImplementedError(msg) - - # Return the architecture. - return arch - - def _import_dependencies(self): - """ - Loads the dependencies for this disassembler. - - @raise ImportError: This disassembler cannot find or load the - necessary dependencies to make it work. - """ - raise SyntaxError("Subclasses MUST implement this method!") - - def decode(self, address, code): - """ - @type address: int - @param address: Memory address where the code was read from. - - @type code: str - @param code: Machine code to disassemble. - - @rtype: list of tuple( long, int, str, str ) - @return: List of tuples. Each tuple represents an assembly instruction - and contains: - - Memory address of instruction. - - Size of instruction in bytes. - - Disassembly line of instruction. - - Hexadecimal dump of instruction. - - @raise NotImplementedError: This disassembler could not be loaded. - This may be due to missing dependencies. 
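As an illustration of the `decode()` contract documented above, a self-contained sketch of the tuple layout an engine implementation returns; the values are made up.

```python
# Each entry is (address, size in bytes, disassembly text, hex dump),
# as described in the Engine.decode() docstring. Values are illustrative only.
decoded = [
    (0x401000, 1, "nop",          "90"),
    (0x401001, 2, "xor eax, eax", "31C0"),
]
for address, size, disasm, hexdump in decoded:
    print("%08X  %d  %-16s %s" % (address, size, disasm, hexdump))
```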
- """ - raise NotImplementedError() - -#============================================================================== - -class BeaEngine (Engine): - """ - Integration with the BeaEngine disassembler by Beatrix. - - @see: U{https://sourceforge.net/projects/winappdbg/files/additional%20packages/BeaEngine/} - """ - - name = "BeaEngine" - desc = "BeaEngine disassembler by Beatrix" - url = "https://sourceforge.net/projects/winappdbg/files/additional%20packages/BeaEngine/" - - supported = set(( - win32.ARCH_I386, - win32.ARCH_AMD64, - )) - - def _import_dependencies(self): - - # Load the BeaEngine ctypes wrapper. - global BeaEnginePython - if BeaEnginePython is None: - import BeaEnginePython - - def decode(self, address, code): - addressof = ctypes.addressof - - # Instance the code buffer. - buffer = ctypes.create_string_buffer(code) - buffer_ptr = addressof(buffer) - - # Instance the disassembler structure. - Instruction = BeaEnginePython.DISASM() - Instruction.VirtualAddr = address - Instruction.EIP = buffer_ptr - Instruction.SecurityBlock = buffer_ptr + len(code) - if self.arch == win32.ARCH_I386: - Instruction.Archi = 0 - else: - Instruction.Archi = 0x40 - Instruction.Options = ( BeaEnginePython.Tabulation + - BeaEnginePython.NasmSyntax + - BeaEnginePython.SuffixedNumeral + - BeaEnginePython.ShowSegmentRegs ) - - # Prepare for looping over each instruction. - result = [] - Disasm = BeaEnginePython.Disasm - InstructionPtr = addressof(Instruction) - hexdump = HexDump.hexadecimal - append = result.append - OUT_OF_BLOCK = BeaEnginePython.OUT_OF_BLOCK - UNKNOWN_OPCODE = BeaEnginePython.UNKNOWN_OPCODE - - # For each decoded instruction... - while True: - - # Calculate the current offset into the buffer. - offset = Instruction.EIP - buffer_ptr - - # If we've gone past the buffer, break the loop. - if offset >= len(code): - break - - # Decode the current instruction. - InstrLength = Disasm(InstructionPtr) - - # If BeaEngine detects we've gone past the buffer, break the loop. - if InstrLength == OUT_OF_BLOCK: - break - - # The instruction could not be decoded. - if InstrLength == UNKNOWN_OPCODE: - - # Output a single byte as a "db" instruction. - char = "%.2X" % ord(buffer[offset]) - result.append(( - Instruction.VirtualAddr, - 1, - "db %sh" % char, - char, - )) - Instruction.VirtualAddr += 1 - Instruction.EIP += 1 - - # The instruction was decoded but reading past the buffer's end. - # This can happen when the last instruction is a prefix without an - # opcode. For example: decode(0, '\x66') - elif offset + InstrLength > len(code): - - # Output each byte as a "db" instruction. - for char in buffer[ offset : offset + len(code) ]: - char = "%.2X" % ord(char) - result.append(( - Instruction.VirtualAddr, - 1, - "db %sh" % char, - char, - )) - Instruction.VirtualAddr += 1 - Instruction.EIP += 1 - - # The instruction was decoded correctly. - else: - - # Output the decoded instruction. - append(( - Instruction.VirtualAddr, - InstrLength, - Instruction.CompleteInstr.strip(), - hexdump(buffer.raw[offset:offset+InstrLength]), - )) - Instruction.VirtualAddr += InstrLength - Instruction.EIP += InstrLength - - # Return the list of decoded instructions. - return result - -#============================================================================== - -class DistormEngine (Engine): - """ - Integration with the diStorm disassembler by Gil Dabah. 
- - @see: U{https://code.google.com/p/distorm3} - """ - - name = "diStorm" - desc = "diStorm disassembler by Gil Dabah" - url = "https://code.google.com/p/distorm3" - - supported = set(( - win32.ARCH_I386, - win32.ARCH_AMD64, - )) - - def _import_dependencies(self): - - # Load the distorm bindings. - global distorm3 - if distorm3 is None: - try: - import distorm3 - except ImportError: - import distorm as distorm3 - - # Load the decoder function. - self.__decode = distorm3.Decode - - # Load the bits flag. - self.__flag = { - win32.ARCH_I386: distorm3.Decode32Bits, - win32.ARCH_AMD64: distorm3.Decode64Bits, - }[self.arch] - - def decode(self, address, code): - return self.__decode(address, code, self.__flag) - -#============================================================================== - -class PyDasmEngine (Engine): - """ - Integration with PyDasm: Python bindings to libdasm. - - @see: U{https://code.google.com/p/libdasm/} - """ - - name = "PyDasm" - desc = "PyDasm: Python bindings to libdasm" - url = "https://code.google.com/p/libdasm/" - - supported = set(( - win32.ARCH_I386, - )) - - def _import_dependencies(self): - - # Load the libdasm bindings. - global pydasm - if pydasm is None: - import pydasm - - def decode(self, address, code): - - # Decode each instruction in the buffer. - result = [] - offset = 0 - while offset < len(code): - - # Try to decode the current instruction. - instruction = pydasm.get_instruction(code[offset:offset+32], - pydasm.MODE_32) - - # Get the memory address of the current instruction. - current = address + offset - - # Illegal opcode or opcode longer than remaining buffer. - if not instruction or instruction.length + offset > len(code): - hexdump = '%.2X' % ord(code[offset]) - disasm = 'db 0x%s' % hexdump - ilen = 1 - - # Correctly decoded instruction. - else: - disasm = pydasm.get_instruction_string(instruction, - pydasm.FORMAT_INTEL, - current) - ilen = instruction.length - hexdump = HexDump.hexadecimal(code[offset:offset+ilen]) - - # Add the decoded instruction to the list. - result.append(( - current, - ilen, - disasm, - hexdump, - )) - - # Move to the next instruction. - offset += ilen - - # Return the list of decoded instructions. - return result - -#============================================================================== - -class LibdisassembleEngine (Engine): - """ - Integration with Immunity libdisassemble. - - @see: U{http://www.immunitysec.com/resources-freesoftware.shtml} - """ - - name = "Libdisassemble" - desc = "Immunity libdisassemble" - url = "http://www.immunitysec.com/resources-freesoftware.shtml" - - supported = set(( - win32.ARCH_I386, - )) - - def _import_dependencies(self): - - # Load the libdisassemble module. - # Since it doesn't come with an installer or an __init__.py file - # users can only install it manually however they feel like it, - # so we'll have to do a bit of guessing to find it. - - global libdisassemble - if libdisassemble is None: - try: - - # If installed properly with __init__.py - import libdisassemble.disassemble as libdisassemble - - except ImportError: - - # If installed by just copying and pasting the files - import disassemble as libdisassemble - - def decode(self, address, code): - - # Decode each instruction in the buffer. - result = [] - offset = 0 - while offset < len(code): - - # Decode the current instruction. 
- opcode = libdisassemble.Opcode( code[offset:offset+32] ) - length = opcode.getSize() - disasm = opcode.printOpcode('INTEL') - hexdump = HexDump.hexadecimal( code[offset:offset+length] ) - - # Add the decoded instruction to the list. - result.append(( - address + offset, - length, - disasm, - hexdump, - )) - - # Move to the next instruction. - offset += length - - # Return the list of decoded instructions. - return result - -#============================================================================== - -class CapstoneEngine (Engine): - """ - Integration with the Capstone disassembler by Nguyen Anh Quynh. - - @see: U{http://www.capstone-engine.org/} - """ - - name = "Capstone" - desc = "Capstone disassembler by Nguyen Anh Quynh" - url = "http://www.capstone-engine.org/" - - supported = set(( - win32.ARCH_I386, - win32.ARCH_AMD64, - win32.ARCH_THUMB, - win32.ARCH_ARM, - win32.ARCH_ARM64, - )) - - def _import_dependencies(self): - - # Load the Capstone bindings. - global capstone - if capstone is None: - import capstone - - # Load the constants for the requested architecture. - self.__constants = { - win32.ARCH_I386: - (capstone.CS_ARCH_X86, capstone.CS_MODE_32), - win32.ARCH_AMD64: - (capstone.CS_ARCH_X86, capstone.CS_MODE_64), - win32.ARCH_THUMB: - (capstone.CS_ARCH_ARM, capstone.CS_MODE_THUMB), - win32.ARCH_ARM: - (capstone.CS_ARCH_ARM, capstone.CS_MODE_ARM), - win32.ARCH_ARM64: - (capstone.CS_ARCH_ARM64, capstone.CS_MODE_ARM), - } - - # Test for the bug in early versions of Capstone. - # If found, warn the user about it. - try: - self.__bug = not isinstance( - capstone.cs_disasm_quick( - capstone.CS_ARCH_X86, capstone.CS_MODE_32, "\x90", 1)[0], - capstone.capstone.CsInsn) - except AttributeError: - self.__bug = False - if self.__bug: - warnings.warn( - "This version of the Capstone bindings is unstable," - " please upgrade to a newer one!", - RuntimeWarning, stacklevel=4) - - - def decode(self, address, code): - - # Get the constants for the requested architecture. - arch, mode = self.__constants[self.arch] - - # Get the decoder function outside the loop. - decoder = capstone.cs_disasm_quick - - # If the buggy version of the bindings are being used, we need to catch - # all exceptions broadly. If not, we only need to catch CsError. - if self.__bug: - CsError = Exception - else: - CsError = capstone.CsError - - # Create the variables for the instruction length, mnemonic and - # operands. That way they won't be created within the loop, - # minimizing the chances data might be overwritten. - # This only makes sense for the buggy vesion of the bindings, normally - # memory accesses are safe). - length = mnemonic = op_str = None - - # For each instruction... - result = [] - offset = 0 - while offset < len(code): - - # Disassemble a single instruction, because disassembling multiple - # instructions may cause excessive memory usage (Capstone allocates - # approximately 1K of metadata per each decoded instruction). - instr = None - try: - instr = decoder( - arch, mode, code[offset:offset+16], address+offset, 1)[0] - except IndexError: - pass # No instructions decoded. - except CsError: - pass # Any other error. - - # On success add the decoded instruction. - if instr is not None: - - # Get the instruction length, mnemonic and operands. - # Copy the values quickly before someone overwrites them, - # if using the buggy version of the bindings (otherwise it's - # irrelevant in which order we access the properties). 
- length = instr.size - mnemonic = instr.mnemonic - op_str = instr.op_str - - # Concatenate the mnemonic and the operands. - if op_str: - disasm = "%s %s" % (mnemonic, op_str) - else: - disasm = mnemonic - - # Get the instruction bytes as a hexadecimal dump. - hexdump = HexDump.hexadecimal( code[offset:offset+length] ) - - # On error add a "define constant" instruction. - # The exact instruction depends on the architecture. - else: - - # The number of bytes to skip depends on the architecture. - # On Intel processors we'll skip one byte, since we can't - # really know the instruction length. On the rest of the - # architectures we always know the instruction length. - if self.arch in (win32.ARCH_I386, win32.ARCH_AMD64): - length = 1 - else: - length = 4 - - # Get the skipped bytes as a hexadecimal dump. - skipped = code[offset:offset+length] - hexdump = HexDump.hexadecimal(skipped) - - # Build the "define constant" instruction. - # On Intel processors it's "db". - # On ARM processors it's "dcb". - if self.arch in (win32.ARCH_I386, win32.ARCH_AMD64): - mnemonic = "db " - else: - mnemonic = "dcb " - bytes = [] - for b in skipped: - if b.isalpha(): - bytes.append("'%s'" % b) - else: - bytes.append("0x%x" % ord(b)) - op_str = ", ".join(bytes) - disasm = mnemonic + op_str - - # Add the decoded instruction to the list. - result.append(( - address + offset, - length, - disasm, - hexdump, - )) - - # Update the offset. - offset += length - - # Return the list of decoded instructions. - return result - -#============================================================================== - -# TODO: use a lock to access __decoder -# TODO: look in sys.modules for whichever disassembler is already loaded - -class Disassembler (object): - """ - Generic disassembler. Uses a set of adapters to decide which library to - load for which supported platform. - - @type engines: tuple( L{Engine} ) - @cvar engines: Set of supported engines. If you implement your own adapter - you can add its class here to make it available to L{Disassembler}. - Supported disassemblers are: - """ - - engines = ( - DistormEngine, # diStorm engine goes first for backwards compatibility - BeaEngine, - CapstoneEngine, - LibdisassembleEngine, - PyDasmEngine, - ) - - # Add the list of supported disassemblers to the docstring. - __doc__ += "\n" - for e in engines: - __doc__ += " - %s - %s (U{%s})\n" % (e.name, e.desc, e.url) - del e - - # Cache of already loaded disassemblers. - __decoder = {} - - def __new__(cls, arch = None, engine = None): - """ - Factory class. You can't really instance a L{Disassembler} object, - instead one of the adapter L{Engine} subclasses is returned. - - @type arch: str - @param arch: (Optional) Name of the processor architecture. - If not provided the current processor architecture is assumed. - For more details see L{win32.version._get_arch}. - - @type engine: str - @param engine: (Optional) Name of the disassembler engine. - If not provided a compatible one is loaded automatically. - See: L{Engine.name} - - @raise NotImplementedError: No compatible disassembler was found that - could decode machine code for the requested architecture. This may - be due to missing dependencies. - - @raise ValueError: An unknown engine name was supplied. - """ - - # Use the default architecture if none specified. - if not arch: - arch = win32.arch - - # Return a compatible engine if none specified. 
- if not engine: - found = False - for clazz in cls.engines: - try: - if arch in clazz.supported: - selected = (clazz.name, arch) - try: - decoder = cls.__decoder[selected] - except KeyError: - decoder = clazz(arch) - cls.__decoder[selected] = decoder - return decoder - except NotImplementedError: - pass - msg = "No disassembler engine available for %s code." % arch - raise NotImplementedError(msg) - - # Return the specified engine. - selected = (engine, arch) - try: - decoder = cls.__decoder[selected] - except KeyError: - found = False - engineLower = engine.lower() - for clazz in cls.engines: - if clazz.name.lower() == engineLower: - found = True - break - if not found: - msg = "Unsupported disassembler engine: %s" % engine - raise ValueError(msg) - if arch not in clazz.supported: - msg = "The %s engine cannot decode %s code." % selected - raise NotImplementedError(msg) - decoder = clazz(arch) - cls.__decoder[selected] = decoder - return decoder diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/audio_tensorflow_tensor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/audio_tensorflow_tensor.py deleted file mode 100644 index 034cc0faba9640e0a3837755d5b8ab91957c87a1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/audio_tensorflow_tensor.py +++ /dev/null @@ -1,56 +0,0 @@ -from typing import TypeVar - -from docarray.typing.proto_register import _register_proto -from docarray.typing.tensor.audio.abstract_audio_tensor import AbstractAudioTensor -from docarray.typing.tensor.tensorflow_tensor import TensorFlowTensor, metaTensorFlow - -T = TypeVar('T', bound='AudioTensorFlowTensor') - - -@_register_proto(proto_type_name='audio_tensorflow_tensor') -class AudioTensorFlowTensor( - AbstractAudioTensor, TensorFlowTensor, metaclass=metaTensorFlow -): - """ - Subclass of [`TensorFlowTensor`][docarray.typing.TensorFlowTensor], - to represent an audio tensor. Adds audio-specific features to the tensor. - - --- - - ```python - from typing import Optional - - import tensorflow as tf - - from docarray import BaseDoc - from docarray.typing import AudioBytes, AudioTensorFlowTensor, AudioUrl - - - class MyAudioDoc(BaseDoc): - title: str - audio_tensor: Optional[AudioTensorFlowTensor] - url: Optional[AudioUrl] - bytes_: Optional[AudioBytes] - - - doc_1 = MyAudioDoc( - title='my_first_audio_doc', - audio_tensor=tf.random.normal((1000, 2)), - ) - - # doc_1.audio_tensor.save(file_path='file_1.wav') - doc_1.bytes_ = doc_1.audio_tensor.to_bytes() - - doc_2 = MyAudioDoc( - title='my_second_audio_doc', - url='https://www.kozco.com/tech/piano2.wav', - ) - - doc_2.audio_tensor, _ = doc_2.url.load() - doc_2.bytes_ = doc_1.audio_tensor.to_bytes() - ``` - - --- - """ - - ... diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/fast_eval_api.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/fast_eval_api.py deleted file mode 100644 index ad1a8f82350098bafe56f6d9481626e812717052..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/fast_eval_api.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
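Returning to the winappdbg disasm.py module deleted above, a hedged usage sketch of the `Disassembler` factory: it assumes winappdbg plus at least one backend (for example distorm3) is installed, and that machine code is passed as a byte string, as in this Python 2-era API.

```python
from winappdbg import win32
from winappdbg.disasm import Disassembler

code = "\x90\x31\xc0\xc3"                    # nop; xor eax, eax; ret
dasm = Disassembler(win32.ARCH_I386)         # picks the first compatible engine
for address, size, disasm, hexdump in dasm.decode(0x00401000, code):
    print("%08X  %d  %-16s %s" % (address, size, disasm, hexdump))
```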
-import copy -import logging -import numpy as np -import time -from annotator.oneformer.pycocotools.cocoeval import COCOeval - -from annotator.oneformer.detectron2 import _C - -logger = logging.getLogger(__name__) - - -class COCOeval_opt(COCOeval): - """ - This is a slightly modified version of the original COCO API, where the functions evaluateImg() - and accumulate() are implemented in C++ to speedup evaluation - """ - - def evaluate(self): - """ - Run per image evaluation on given images and store results in self.evalImgs_cpp, a - datastructure that isn't readable from Python but is used by a c++ implementation of - accumulate(). Unlike the original COCO PythonAPI, we don't populate the datastructure - self.evalImgs because this datastructure is a computational bottleneck. - :return: None - """ - tic = time.time() - - p = self.params - # add backward compatibility if useSegm is specified in params - if p.useSegm is not None: - p.iouType = "segm" if p.useSegm == 1 else "bbox" - logger.info("Evaluate annotation type *{}*".format(p.iouType)) - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params = p - - self._prepare() # bottleneck - - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType == "segm" or p.iouType == "bbox": - computeIoU = self.computeIoU - elif p.iouType == "keypoints": - computeIoU = self.computeOks - self.ious = { - (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds - } # bottleneck - - maxDet = p.maxDets[-1] - - # <<<< Beginning of code differences with original COCO API - def convert_instances_to_cpp(instances, is_det=False): - # Convert annotations for a list of instances in an image to a format that's fast - # to access in C++ - instances_cpp = [] - for instance in instances: - instance_cpp = _C.InstanceAnnotation( - int(instance["id"]), - instance["score"] if is_det else instance.get("score", 0.0), - instance["area"], - bool(instance.get("iscrowd", 0)), - bool(instance.get("ignore", 0)), - ) - instances_cpp.append(instance_cpp) - return instances_cpp - - # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++ - ground_truth_instances = [ - [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds] - for imgId in p.imgIds - ] - detected_instances = [ - [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds] - for imgId in p.imgIds - ] - ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds] - - if not p.useCats: - # For each image, flatten per-category lists into a single list - ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances] - detected_instances = [[[o for c in i for o in c]] for i in detected_instances] - - # Call C++ implementation of self.evaluateImgs() - self._evalImgs_cpp = _C.COCOevalEvaluateImages( - p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances - ) - self._evalImgs = None - - self._paramsEval = copy.deepcopy(self.params) - toc = time.time() - logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic)) - # >>>> End of code differences with original COCO API - - def accumulate(self): - """ - Accumulate per image evaluation results and store the result in self.eval. 
Does not - support changing parameter settings from those used by self.evaluate() - """ - logger.info("Accumulating evaluation results...") - tic = time.time() - assert hasattr( - self, "_evalImgs_cpp" - ), "evaluate() must be called before accmulate() is called." - - self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp) - - # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections - self.eval["recall"] = np.array(self.eval["recall"]).reshape( - self.eval["counts"][:1] + self.eval["counts"][2:] - ) - - # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X - # num_area_ranges X num_max_detections - self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"]) - self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"]) - toc = time.time() - logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic)) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/distributed.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/distributed.py deleted file mode 100644 index 1e4c27903db58a54d37ea1ed9ec0104098b486f2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/distributed.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel.distributed import (DistributedDataParallel, - _find_tensors) - -from annotator.uniformer.mmcv import print_log -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version -from .scatter_gather import scatter_kwargs - - -class MMDistributedDataParallel(DistributedDataParallel): - """The DDP module that supports DataContainer. - - MMDDP has two main differences with PyTorch DDP: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data. - - It implement two APIs ``train_step()`` and ``val_step()``. - """ - - def to_kwargs(self, inputs, kwargs, device_id): - # Use `self.to_kwargs` instead of `self.scatter` in pytorch1.8 - # to move all tensors to device_id - return scatter_kwargs(inputs, kwargs, [device_id], dim=self.dim) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - """train_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.train_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. 
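A hedged usage sketch for the `COCOeval_opt` class deleted above, written against the upstream detectron2 import path rather than the vendored one; the JSON file names are hypothetical, and detectron2's C extension (`_C`) must be built for the accelerated paths to work.

```python
from pycocotools.coco import COCO
from detectron2.evaluation.fast_eval_api import COCOeval_opt

coco_gt = COCO("instances_val.json")           # ground-truth annotations (hypothetical path)
coco_dt = coco_gt.loadRes("detections.json")   # detection results (hypothetical path)

ev = COCOeval_opt(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()     # per-image evaluation, implemented in C++
ev.accumulate()   # accumulation, implemented in C++
ev.summarize()    # inherited COCO AP summary
```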
- if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.train_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.train_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output - - def val_step(self, *inputs, **kwargs): - """val_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.val_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.val_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.val_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/__init__.py deleted file mode 100644 index 34e3a9950cc557879af8d797f9382b18a870fb56..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -"""Read resources contained within a package.""" - -from ._common import ( - as_file, - files, - Package, -) - -from ._legacy import ( - contents, - open_binary, - read_binary, - open_text, - read_text, - is_resource, - path, - Resource, -) - -from .abc import ResourceReader - - -__all__ = [ - 'Package', - 'Resource', - 'ResourceReader', - 'as_file', - 'contents', - 'files', - 'is_resource', - 
'open_binary', - 'open_text', - 'path', - 'read_binary', - 'read_text', -] diff --git a/spaces/TangibleAI/mathtext/LICENSE.md b/spaces/TangibleAI/mathtext/LICENSE.md deleted file mode 100644 index cba6f6a15a4cc3ba212e9e9059f7243e2d171090..0000000000000000000000000000000000000000 --- a/spaces/TangibleAI/mathtext/LICENSE.md +++ /dev/null @@ -1,660 +0,0 @@ -### GNU AFFERO GENERAL PUBLIC LICENSE - -Version 3, 19 November 2007 - -Copyright (C) 2007 Free Software Foundation, Inc. - - -Everyone is permitted to copy and distribute verbatim copies of this -license document, but changing it is not allowed. - -### Preamble - -The GNU Affero General Public License is a free, copyleft license for -software and other kinds of works, specifically designed to ensure -cooperation with the community in the case of network server software. - -The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. By contrast, -our General Public Licenses are intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains -free software for all its users. - -When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - -Developers that use our General Public Licenses protect your rights -with two steps: (1) assert copyright on the software, and (2) offer -you this License which gives you legal permission to copy, distribute -and/or modify the software. - -A secondary benefit of defending all users' freedom is that -improvements made in alternate versions of the program, if they -receive widespread use, become available for other developers to -incorporate. Many developers of free software are heartened and -encouraged by the resulting cooperation. However, in the case of -software used on network servers, this result may fail to come about. -The GNU General Public License permits making a modified version and -letting the public access it on a server without ever releasing its -source code to the public. - -The GNU Affero General Public License is designed specifically to -ensure that, in such cases, the modified source code becomes available -to the community. It requires the operator of a network server to -provide the source code of the modified version running there to the -users of that server. Therefore, public use of a modified version, on -a publicly accessible server, gives the public access to the source -code of the modified version. - -An older license, called the Affero General Public License and -published by Affero, was designed to accomplish similar goals. This is -a different license, not a version of the Affero GPL, but Affero has -released a new version of the Affero GPL which permits relicensing -under this license. - -The precise terms and conditions for copying, distribution and -modification follow. - -### TERMS AND CONDITIONS - -#### 0. Definitions. - -"This License" refers to version 3 of the GNU Affero General Public -License. - -"Copyright" also means copyright-like laws that apply to other kinds -of works, such as semiconductor masks. - -"The Program" refers to any copyrightable work licensed under this -License. 
Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. - -To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of -an exact copy. The resulting work is called a "modified version" of -the earlier work or a work "based on" the earlier work. - -A "covered work" means either the unmodified Program or a work based -on the Program. - -To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - -To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user -through a computer network, with no transfer of a copy, is not -conveying. - -An interactive user interface displays "Appropriate Legal Notices" to -the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - -#### 1. Source Code. - -The "source code" for a work means the preferred form of the work for -making modifications to it. "Object code" means any non-source form of -a work. - -A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. - -The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. - -The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. 
For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. - -The Corresponding Source need not include anything that users can -regenerate automatically from other parts of the Corresponding Source. - -The Corresponding Source for a work in source code form is that same -work. - -#### 2. Basic Permissions. - -All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - -You may make, run and propagate covered works that you do not convey, -without conditions so long as your license otherwise remains in force. -You may convey covered works to others for the sole purpose of having -them make modifications exclusively for you, or provide you with -facilities for running those works, provided that you comply with the -terms of this License in conveying all material for which you do not -control copyright. Those thus making or running the covered works for -you must do so exclusively on your behalf, under your direction and -control, on terms that prohibit them from making any copies of your -copyrighted material outside their relationship with you. - -Conveying under any other circumstances is permitted solely under the -conditions stated below. Sublicensing is not allowed; section 10 makes -it unnecessary. - -#### 3. Protecting Users' Legal Rights From Anti-Circumvention Law. - -No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - -When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such -circumvention is effected by exercising rights under this License with -respect to the covered work, and you disclaim any intention to limit -operation or modification of the work as a means of enforcing, against -the work's users, your or third parties' legal rights to forbid -circumvention of technological measures. - -#### 4. Conveying Verbatim Copies. - -You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - -You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - -#### 5. Conveying Modified Source Versions. 
- -You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these -conditions: - -- a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. -- b) The work must carry prominent notices stating that it is - released under this License and any conditions added under - section 7. This requirement modifies the requirement in section 4 - to "keep intact all notices". -- c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. -- d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. - -A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - -#### 6. Conveying Non-Source Forms. - -You may convey a covered work in object code form under the terms of -sections 4 and 5, provided that you also convey the machine-readable -Corresponding Source under the terms of this License, in one of these -ways: - -- a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. -- b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the Corresponding - Source from a network server at no charge. -- c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. -- d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. 
You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. -- e) Convey the object code using peer-to-peer transmission, - provided you inform other peers where the object code and - Corresponding Source of the work are being offered to the general - public at no charge under subsection 6d. - -A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - -A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, -family, or household purposes, or (2) anything designed or sold for -incorporation into a dwelling. In determining whether a product is a -consumer product, doubtful cases shall be resolved in favor of -coverage. For a particular product received by a particular user, -"normally used" refers to a typical or common use of that class of -product, regardless of the status of the particular user or of the way -in which the particular user actually uses, or expects or is expected -to use, the product. A product is a consumer product regardless of -whether the product has substantial commercial, industrial or -non-consumer uses, unless such uses represent the only significant -mode of use of the product. - -"Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to -install and execute modified versions of a covered work in that User -Product from a modified version of its Corresponding Source. The -information must suffice to ensure that the continued functioning of -the modified object code is in no case prevented or interfered with -solely because modification has been made. - -If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - -The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or -updates for a work that has been modified or installed by the -recipient, or for the User Product in which it has been modified or -installed. Access to a network may be denied when the modification -itself materially and adversely affects the operation of the network -or violates the rules and protocols for communication across the -network. 
- -Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - -#### 7. Additional Terms. - -"Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. -Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - -When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. - -Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders -of that material) supplement the terms of this License with terms: - -- a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or -- b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or -- c) Prohibiting misrepresentation of the origin of that material, - or requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or -- d) Limiting the use for publicity purposes of names of licensors - or authors of the material; or -- e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or -- f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions - of it) with contractual assumptions of liability to the recipient, - for any liability that these contractual assumptions directly - impose on those licensors and authors. - -All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. - -If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - -Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; the -above requirements apply either way. 
- -#### 8. Termination. - -You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). - -However, if you cease all violation of this License, then your license -from a particular copyright holder is reinstated (a) provisionally, -unless and until the copyright holder explicitly and finally -terminates your license, and (b) permanently, if the copyright holder -fails to notify you of the violation by some reasonable means prior to -60 days after the cessation. - -Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. - -Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - -#### 9. Acceptance Not Required for Having Copies. - -You are not required to accept this License in order to receive or run -a copy of the Program. Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - -#### 10. Automatic Licensing of Downstream Recipients. - -Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - -An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - -You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - -#### 11. Patents. 
- -A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". - -A contributor's "essential patent claims" are all patent claims owned -or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - -Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - -In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. - -If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - -If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - -A patent license is "discriminatory" if it does not include within the -scope of its coverage, prohibits the exercise of, or is conditioned on -the non-exercise of one or more of the rights that are specifically -granted under this License. 
You may not convey a covered work if you -are a party to an arrangement with a third party that is in the -business of distributing software, under which you make payment to the -third party based on the extent of your activity of conveying the -work, and under which the third party grants, to any of the parties -who would receive the covered work from you, a discriminatory patent -license (a) in connection with copies of the covered work conveyed by -you (or copies made from those copies), or (b) primarily for and in -connection with specific products or compilations that contain the -covered work, unless you entered into that arrangement, or that patent -license was granted, prior to 28 March 2007. - -Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - -#### 12. No Surrender of Others' Freedom. - -If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under -this License and any other pertinent obligations, then as a -consequence you may not convey it at all. For example, if you agree to -terms that obligate you to collect a royalty for further conveying -from those to whom you convey the Program, the only way you could -satisfy both those terms and this License would be to refrain entirely -from conveying the Program. - -#### 13. Remote Network Interaction; Use with the GNU General Public License. - -Notwithstanding any other provision of this License, if you modify the -Program, your modified version must prominently offer all users -interacting with it remotely through a computer network (if your -version supports such interaction) an opportunity to receive the -Corresponding Source of your version by providing access to the -Corresponding Source from a network server at no charge, through some -standard or customary means of facilitating copying of software. This -Corresponding Source shall include the Corresponding Source for any -work covered by version 3 of the GNU General Public License that is -incorporated pursuant to the following paragraph. - -Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU General Public License into a single -combined work, and to convey the resulting work. The terms of this -License will continue to apply to the part which is the covered work, -but the work with which it is combined will remain governed by version -3 of the GNU General Public License. - -#### 14. Revised Versions of this License. - -The Free Software Foundation may publish revised and/or new versions -of the GNU Affero General Public License from time to time. Such new -versions will be similar in spirit to the present version, but may -differ in detail to address new problems or concerns. - -Each version is given a distinguishing version number. If the Program -specifies that a certain numbered version of the GNU Affero General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. 
If the Program does not specify a version number of the -GNU Affero General Public License, you may choose any version ever -published by the Free Software Foundation. - -If the Program specifies that a proxy can decide which future versions -of the GNU Affero General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - -Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - -#### 15. Disclaimer of Warranty. - -THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT -WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND -PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE -DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR -CORRECTION. - -#### 16. Limitation of Liability. - -IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR -CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, -INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES -ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT -NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR -LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM -TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER -PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. - -#### 17. Interpretation of Sections 15 and 16. - -If the disclaimer of warranty and limitation of liability provided -above cannot be given local legal effect according to their terms, -reviewing courts shall apply local law that most closely approximates -an absolute waiver of all civil liability in connection with the -Program, unless a warranty or assumption of liability accompanies a -copy of the Program in return for a fee. - -END OF TERMS AND CONDITIONS - -### How to Apply These Terms to Your New Programs - -If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these -terms. - -To do so, attach the following notices to the program. It is safest to -attach them to the start of each source file to most effectively state -the exclusion of warranty; and each file should have at least the -"copyright" line and a pointer to where the full notice is found. - - - Copyright (C) - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU Affero General Public License as - published by the Free Software Foundation, either version 3 of the - License, or (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU Affero General Public License for more details. - - You should have received a copy of the GNU Affero General Public License - along with this program. If not, see . 
- -Also add information on how to contact you by electronic and paper -mail. - -If your software can interact with users remotely through a computer -network, you should also make sure that it provides a way for users to -get its source. For example, if your program is a web application, its -interface could display a "Source" link that leads users to an archive -of the code. There are many ways you could offer source, and different -solutions will be better for different programs; see section 13 for -the specific requirements. - -You should also get your employer (if you work as a programmer) or -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. For more information on this, and how to apply and follow -the GNU AGPL, see . diff --git a/spaces/Theivaprakasham/wildreceipt/app.py b/spaces/Theivaprakasham/wildreceipt/app.py deleted file mode 100644 index c98ed84a29bf8402e9f30023e73d5e232348baaf..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/wildreceipt/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import os -os.system('pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu') - -import gradio as gr -import numpy as np -from transformers import AutoModelForTokenClassification -from datasets.features import ClassLabel -from transformers import AutoProcessor -from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D -import torch -from datasets import load_metric -from transformers import LayoutLMv3ForTokenClassification -from transformers.data.data_collator import default_data_collator - - -from transformers import AutoModelForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - - -processor = AutoProcessor.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-wildreceipt", apply_ocr=True) -model = AutoModelForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-wildreceipt") - - - -# load image example -dataset = load_dataset("Theivaprakasham/wildreceipt", split="test") -Image.open(dataset[20]["image_path"]).convert("RGB").save("example1.png") -Image.open(dataset[13]["image_path"]).convert("RGB").save("example2.png") -Image.open(dataset[15]["image_path"]).convert("RGB").save("example3.png") - -# define id2label, label2color -labels = dataset.features['ner_tags'].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "Date_key": 'red', - "Date_value": 'green', - "Ignore": 'orange', - "Others": 'orange', - "Prod_item_key": 'red', - "Prod_item_value": 'green', - "Prod_price_key": 'red', - "Prod_price_value": 'green', - "Prod_quantity_key": 'red', - "Prod_quantity_value": 'green', - "Store_addr_key": 'red', - "Store_addr_value": 'green', - "Store_name_key": 'red', - "Store_name_value": 'green', - "Subtotal_key": 'red', - "Subtotal_value": 'green', - "Tax_key": 'red', - "Tax_value": 'green', - "Tel_key": 'red', - "Tel_value": 'green', - "Time_key": 'red', - "Time_value": 'green', - "Tips_key": 'red', - "Tips_value": 'green', - "Total_key": 'red', - "Total_value": 'blue' - } - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - return label - - - -def process_image(image): - - print(type(image)) - width, height = image.size - - # encode - encoding = processor(image, truncation=True, return_offsets_mapping=True, return_tensors="pt") - offset_mapping 
= encoding.pop('offset_mapping') - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 - true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] - true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction) - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) - - return image - - -title = "Restaurant/ Hotel Bill information extraction using LayoutLMv3 model" -description = "Restaurant/ Hotel Bill information extraction - We use Microsoft's LayoutLMv3 trained on WildReceipt Dataset to predict the Store_name_value, Store_name_key, Store_addr_value, Store_addr_key, Tel_value, Tel_key, Date_value, Date_key, Time_value, Time_key, Prod_item_value, Prod_item_key, Prod_quantity_value, Prod_quantity_key, Prod_price_value, Prod_price_key, Subtotal_value, Subtotal_key, Tax_value, Tax_key, Tips_value, Tips_key, Total_value, Total_key. To use it, simply upload an image or use the example image below. Results will show up in a few seconds." - -article="References
          [1] Y. Xu et al., “LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking.” 2022. Paper Link
          [2] LayoutLMv3 training and inference
          [3] Hongbin Sun, Zhanghui Kuang, Xiaoyu Yue, Chenhao Lin, and Wayne Zhang. 2021. Spatial Dual-Modality Graph Reasoning for Key Information Extraction. arXiv. DOI:https://doi.org/10.48550/ARXIV.2103.14470 Paper Link" - -examples =[['example1.png'],['example2.png'],['example3.png']] - -css = """.output_image, .input_image {height: 600px !important}""" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - analytics_enabled = True, enable_queue=True) - -iface.launch(inline=False, share=False, debug=False) \ No newline at end of file diff --git a/spaces/Tiju1996/resume-parser/ResumeReader.py b/spaces/Tiju1996/resume-parser/ResumeReader.py deleted file mode 100644 index 7f8808b1a559624394fc43907031abb5fc6e1fc2..0000000000000000000000000000000000000000 --- a/spaces/Tiju1996/resume-parser/ResumeReader.py +++ /dev/null @@ -1,103 +0,0 @@ -import re -import os -import logging -import pdfplumber -import fitz - -class ResumeReader: - - def convert_docx_to_txt(self, docx_file,docx_parser): - """ - A utility function to convert a Microsoft docx files to raw text. - - This code is largely borrowed from existing solutions, and does not match the style of the rest of this repo. - :param docx_file: docx file with gets uploaded by the user - :type docx_file: InMemoryUploadedFile - :return: The text contents of the docx file - :rtype: str - """ - - # doc = docx.Document(docx_file) - # allText = [] - # for docpara in doc.paragraphs: - # allText.append(docpara.text) - # text = ' '.join(allText) - text = "" - try: - clean_text = re.sub(r'\n+', '\n', text) - clean_text = clean_text.replace("\r", "\n").replace("\t", " ") # Normalize text blob - resume_lines = clean_text.splitlines() # Split text blob into individual lines - resume_lines = [re.sub('\s+', ' ', line.strip()) for line in resume_lines if - line.strip()] # Remove empty strings and whitespaces - return resume_lines, text - except Exception as e: - logging.error('Error in docx file:: ' + str(e)) - return [], " " - - def convert_pdf_to_txt(self, pdf_file): - """ - A utility function to convert a machine-readable PDF to raw text. - - This code is largely borrowed from existing solutions, and does not match the style of the rest of this repo. 
- :param input_pdf_path: Path to the .pdf file which should be converted - :type input_pdf_path: str - :return: The text contents of the pdf - :rtype: str - """ - - pdf = pdfplumber.open(pdf_file) - raw_text= "" - with fitz.open(pdf_file) as doc: - for page in doc: - raw_text += page.get_text() - print(raw_text) - # for page in pdf.pages: - # raw_text += page.extract_text() + "\n" - - pdf.close() - - try: - full_string = re.sub(r'\n+', '\n', raw_text) - full_string = full_string.replace("\r", "\n") - full_string = full_string.replace("\t", " ") - - # Remove awkward LaTeX bullet characters - full_string = re.sub(r"\uf0b7", " ", full_string) - full_string = re.sub(r"\(cid:\d{0,3}\)", " ", full_string) - full_string = re.sub(r'• ', " ", full_string) - - # Split text blob into individual lines - resume_lines = full_string.splitlines(True) - - # Remove empty strings and whitespaces - resume_lines = [re.sub('\s+', ' ', line.strip()) for line in resume_lines if line.strip()] - - return resume_lines, raw_text - except Exception as e: - logging.error('Error in docx file:: ' + str(e)) - return [], " " - - def read_file(self, file,docx_parser = "tika"): - """ - file : Give path of resume file - docx_parser : Enter docx2txt or tika, by default is tika - """ - print("Reading the Resume...") - # file = "/content/Asst Manager Trust Administration.docx" - file = os.path.join(file) - if file.endswith('docx') or file.endswith('doc'): - # if file.endswith('doc') and docx_parser == "docx2txt": - # docx_parser = "tika" - # logging.error("doc format not supported by the docx2txt changing back to tika") - resume_lines, raw_text = self.convert_docx_to_txt(file,docx_parser) - elif file.endswith('pdf'): - resume_lines, raw_text = self.convert_pdf_to_txt(file) - elif file.endswith('txt'): - with open(file, 'r', encoding='utf-8') as f: - resume_lines = f.readlines() - - else: - resume_lines = None - - - return resume_lines \ No newline at end of file diff --git a/spaces/Truepic/ai-content-credentials/README.md b/spaces/Truepic/ai-content-credentials/README.md deleted file mode 100644 index 7b6fafda50423e5393b01b43f3136f25c5cc171d..0000000000000000000000000000000000000000 --- a/spaces/Truepic/ai-content-credentials/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: GenAI with Content Credentials -emoji: 🚀 -colorFrom: pink -colorTo: blue -sdk: docker -pinned: false -duplicated_from: jclyo1/docker-sandbox ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TuringAgency/anic_gui/assets/index.9b6a13d4.css b/spaces/TuringAgency/anic_gui/assets/index.9b6a13d4.css deleted file mode 100644 index f9ce278159935b26fc31d2fe5708c4e14095ffca..0000000000000000000000000000000000000000 --- a/spaces/TuringAgency/anic_gui/assets/index.9b6a13d4.css +++ /dev/null @@ -1 +0,0 @@ -#root{max-width:1280px;margin:0 auto;padding:2rem;text-align:center}.slider{position:relative;display:flex;align-items:center;background-color:transparent;width:100%}.slider__progress-bar{position:absolute;height:4px;border-radius:2px;width:100%;background:linear-gradient(to right,rgb(136,133,121) var(--progress),rgb(202,200,191) var(--progress))}.slider__input{position:relative;appearance:none;margin:0;background-color:transparent;height:100%;width:100%;cursor:pointer}.slider__input::-webkit-slider-thumb{appearance:none;width:16px;height:16px;border-radius:50%;background:rgb(78,77,71);border:1px solid 
transparent;box-shadow:unset}.slider__input::-moz-range-thumb{appearance:none;width:16px;height:16px;border-radius:50%;background:rgb(78,77,71);border:1px solid transparent;box-shadow:unset}:root{font-family:Inter,Avenir,Helvetica,Arial,sans-serif;font-size:16px;line-height:24px;font-weight:400;color-scheme:light dark;color:#ffffffde;background-color:#242424;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}a{font-weight:500;color:#646cff;text-decoration:inherit}a:hover{color:#535bf2}body{margin:0;display:flex;place-items:center;min-width:320px;min-height:100vh}h1{font-size:3.2em;line-height:1.1}button{border-radius:8px;border:1px solid transparent;padding:.6em 1.2em;font-size:1em;font-weight:500;font-family:inherit;background-color:#1a1a1a;cursor:pointer;transition:border-color .25s}button:hover{border-color:#646cff}button:focus,button:focus-visible{outline:4px auto -webkit-focus-ring-color}@media (prefers-color-scheme: light){:root{color:#213547;background-color:#fff}a:hover{color:#747bff}button{background-color:#f9f9f9}} diff --git a/spaces/ValarMorghulis/BudgetAllocation/README.md b/spaces/ValarMorghulis/BudgetAllocation/README.md deleted file mode 100644 index 07d7cde1c8695e2a48144b2865d173a51de88b68..0000000000000000000000000000000000000000 --- a/spaces/ValarMorghulis/BudgetAllocation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BudgetAllocation -emoji: 🌍 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vladimirktan/find-my-pic-app/app.py b/spaces/Vladimirktan/find-my-pic-app/app.py deleted file mode 100644 index ba119950fa7a2a99bc6f75de080320d4278bb00e..0000000000000000000000000000000000000000 --- a/spaces/Vladimirktan/find-my-pic-app/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import streamlit as st -# print(st.__version__) -from PIL import Image -import pandas as pd -import torch -from transformers import CLIPProcessor, CLIPModel -from sklearn.metrics.pairwise import cosine_similarity -import os -import zipfile - - -# Пути (господни) -zip_path = 'flickr30k_images.zip' -capturings_path = 'results.csv' -model_weights_path = 'text_features.pt' -images_path = 'flickr30k_images/' # в случае архива его надо распаковать, делаю это далее по коду - -# Кэширование загрузки модели и других дорогостоящих операций -@st.cache_resource -def load_model(): - model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") - return model, processor - -@st.cache_data -def load_data(capturings_path, grouped_path): - df = pd.read_csv(capturings_path, sep='|') - grouped_df = pd.read_csv(grouped_path) - return df, grouped_df - -@st.cache_data -def load_text_features(text_features_path): - return torch.load(text_features_path) - -def unpack_images(zip_path): - if not os.path.exists(images_path): - with zipfile.ZipFile(zip_path, 'r') as zip_ref: - zip_ref.extractall('.') - - -# Инкапсулируем логику в функции -def find_images(query, top, text_features, df, grouped_df): - # Векторизация текстового запроса - model, processor = load_model() - query_input = processor(query, return_tensors="pt") - query_features = model.get_text_features(**query_input) - - # Поиск самых похожих изображений - similarity_scores = 
cosine_similarity(query_features.detach().numpy(), text_features.detach().numpy()) - top_indices = similarity_scores.argsort()[0][-top:][::-1] - top_images = df.loc[top_indices, 'image_name'].tolist() - top_similarity_scores = similarity_scores[0][top_indices] - - top_images_df = pd.DataFrame({'image_name': top_images}) - top_info = pd.merge(top_images_df, grouped_df, on='image_name') - - # Поиск наименее похожих изображений - bottom_indices = similarity_scores.argsort()[0][:2] # здесь "2" - это количество наименее похожих изображений - bottom_images = df.loc[bottom_indices, 'image_name'].tolist() - bottom_similarity_scores = similarity_scores[0][bottom_indices] - - bottom_images_df = pd.DataFrame({'image_name': bottom_images}) - bottom_info = pd.merge(bottom_images_df, grouped_df, on='image_name') - - return top_images, top_similarity_scores, top_info, bottom_images, bottom_similarity_scores, bottom_info - - -# Основная программа -if __name__ == '__main__': - st.title("Find my pic!") - - images_path = 'flickr30k_images/' # в случае архива его надо распоковать, делаю это далее по коду - - # Загрузка модели и данных - model, processor = load_model() - df, grouped_df = load_data('results.csv', 'grouped_df.csv') - text_features = load_text_features('text_features.pt') - - # Ввод данных пользователем - user_input = st.text_input("Введите текстовый запрос:", "") - num_images = st.number_input("Выберите количество изображений", min_value=1, max_value=10, value=5, step=1) - - # Объявляем эти переменные заранее, чтобы избежать NameError - top_images = [] - top_similarity_scores = [] - bottom_images = [] - bottom_similarity_scores = [] - - unpack_images('flickr30k_images.zip') - - - if st.button("Поиск изображений"): - # top_images, top_similarity_scores, top_info = find_images(user_input, num_images, text_features, df, grouped_df) - top_images, top_similarity_scores, top_info, bottom_images, bottom_similarity_scores, bottom_info = find_images(user_input, num_images, text_features, df, grouped_df) - - - # Вывод найденных изображений и подписей - st.write("Most Relevant Images:") - for index, (img_name, score) in enumerate(zip(top_images, top_similarity_scores)): - comment = top_info.loc[top_info['image_name'] == img_name, ' comment'].values[0] - - col1, col2 = st.columns(2) - - with col1: - st.write(f"Image filename: {img_name}") - st.write(f"Image capture: {comment}") - st.write(f"Model confidence of pic relevance: {score:.4f}") - - with col2: - # Загружаем только нужные изображения - # img_path = os.path.join(images_path, 'path_within_zip', img_name) # уточните путь внутри zip-архива - img_path = os.path.join(images_path, img_name) # уточните путь внутри zip-архива - st.image(Image.open(img_path), use_column_width=True) - - # Вывод наименее релевантных изображений - st.write("Least Relevant Images:") - for index, (img_name, score) in enumerate(zip(bottom_images, bottom_similarity_scores)): - comment = bottom_info.loc[bottom_info['image_name'] == img_name, ' comment'].values[0] - - col1, col2 = st.columns(2) - - with col1: - st.write(f"Image filename: {img_name}") - st.write(f"Image capture: {comment}") - st.write(f"Model confidence of pic relevance: {score:.4f}") - - with col2: - img_path = os.path.join(images_path, img_name) - st.image(Image.open(img_path), use_column_width=True) \ No newline at end of file diff --git a/spaces/Westwing/Seasonal_classifier/app.py b/spaces/Westwing/Seasonal_classifier/app.py deleted file mode 100644 index 
bd235167428fdc4eb433192ea855861d793ea008..0000000000000000000000000000000000000000 --- a/spaces/Westwing/Seasonal_classifier/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import pandas as pd -import numpy as np -from PIL import Image -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import datasets, transforms, models -import joblib - -# SEASONAL MODEL -class Seasonal_Model(nn.Module): - def __init__(self): - super().__init__() - self.conv1 = nn.Conv2d(3, 6, 5) - self.pool = nn.MaxPool2d(2, 2) - self.conv2 = nn.Conv2d(6, 16, 5) - self.fc1 = nn.Linear(16*29*29, 120) - self.fc2 = nn.Linear(120, 84) - self.fc3 = nn.Linear(84, 3) - - def forward(self, x): - x = self.pool(F.relu(self.conv1(x))) - x = self.pool(F.relu(self.conv2(x))) - x = x.view(x.size(0), -1) # flatten all dimensions except batch - x = F.relu(self.fc1(x)) - x = F.relu(self.fc2(x)) - x = self.fc3(x) - return x - - -net = Seasonal_Model() - -model=joblib.load('Image_seasonal_clf.sav') - -def classify_image(inp): - result =[] - im_tensor1 = transforms.ToTensor() # Tensor obj - tensor1 = im_tensor1(inp) - resize1 = transforms.Resize((128,128)) # Resizing image - tensor_resize1= resize1(tensor1) - tensor_resize1= torch.unsqueeze(tensor_resize1, dim=0) # Make image batch like - pred1 = model(tensor_resize1) - probabilities=torch.nn.functional.relu(pred1) - prob=torch.nn.functional.softmax(probabilities) - prob=prob.tolist()[0] - lbls={'ALL_year_sku':prob[0],'Summer_sku':prob[1],'Winter_sku':prob[2]} - return lbls - - # if - #if torch.argmax(pred1)==0: - # result.append("All_year_sku") - #elif torch.argmax(pred1)==1: - # result.append("Summer_sku") - #else: - # result.append("Winter_sku") - - #return result[0] - -import gradio as gr - -gr.Interface(fn=classify_image, - inputs=gr.Image(type='pil'), - outputs=gr.Label(num_top_classes=3), - examples=[["bed.jpg"],["bed2.jpg"],["sock.jpg"],["winter-home-slippers.jpg"],["cushion.jpg"]] - ).launch() \ No newline at end of file diff --git a/spaces/Wings77/ChatGPT4/app.py b/spaces/Wings77/ChatGPT4/app.py deleted file mode 100644 index 7e09e57ef928fd2451fd0ed1295d0994ca75d026..0000000000000000000000000000000000000000 --- a/spaces/Wings77/ChatGPT4/app.py +++ /dev/null @@ -1,193 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" - -#Huggingface provided GPT4 OpenAI API Key -OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") - -#Inferenec function -def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - print(f"system message is ^^ {system_msg}") - if system_msg.strip() == '': - initial_message = [{"role": "user", "content": f"{inputs}"},] - multi_turn_message = [] - else: - initial_message= [{"role": "system", "content": system_msg}, - {"role": "user", "content": f"{inputs}"},] - multi_turn_message = [{"role": "system", "content": system_msg},] - - if chat_counter == 0 : - payload = { - "model": "gpt-4", - "messages": initial_message , - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - print(f"chat_counter - {chat_counter}") - else: #if chat_counter != 0 : - messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},] - for data in chatbot: - user = {} - user["role"] = "user" - user["content"] 
= data[0] - assistant = {} - assistant["role"] = "assistant" - assistant["content"] = data[1] - messages.append(user) - messages.append(assistant) - temp = {} - temp["role"] = "user" - temp["content"] = inputs - messages.append(temp) - #messages - payload = { - "model": "gpt-4", - "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0,} - - chat_counter+=1 - - history.append(inputs) - print(f"Logging : payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - print(f"Logging : response code - {response}") - token_counter = 0 - partial_words = "" - - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history} - -#Resetting to blank -def reset_textbox(): - return gr.update(value='') - -#to set a component as visible=False -def set_visible_false(): - return gr.update(visible=False) - -#to set a component as visible=True -def set_visible_true(): - return gr.update(visible=True) - -title = """

          🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming

          """ - -#display message for themes feature -theme_addon_msg = """
          🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub(). -
          🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆
          -""" - -#Using info to add additional information about System message in GPT4 -system_msg_info = """A conversation could begin with a system message to gently instruct the assistant. -System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'""" - -#Modifying existing Gradio Theme -theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green", - text_size=gr.themes.sizes.text_lg) - -with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""", - theme=theme) as demo: - gr.HTML(title) - gr.HTML("""

          🔥This Hugging Face Gradio demo provides full access to the GPT4 API (4096-token limit). 🎉🥳🎉 You don't need an OpenAI API key 🙌

          """) - gr.HTML(theme_addon_msg) - gr.HTML('''
          Duplicate the Space and run securely with your OpenAI API Key
          ''') - - with gr.Column(elem_id = "col_container"): - #GPT4 API Key is provided by Huggingface - with gr.Accordion(label="System message:", open=False): - system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", info = system_msg_info, value="") - accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False) - chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot") - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") - state = gr.State([]) - with gr.Row(): - with gr.Column(scale=7): - b1 = gr.Button().style(full_width=True) - with gr.Column(scale=3): - server_status_code = gr.Textbox(label="Status code from OpenAI server", ) - - #top_p, temperature - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - #Event handling - inputs.submit( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - b1.click( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - - inputs.submit(set_visible_false, [], [system_msg]) - b1.click(set_visible_false, [], [system_msg]) - inputs.submit(set_visible_true, [], [accordion_msg]) - b1.click(set_visible_true, [], [accordion_msg]) - - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - #Examples - with gr.Accordion(label="Examples for System message:", open=False): - gr.Examples( - examples = [["""You are an AI programming assistant. - - - Follow the user's requirements carefully and to the letter. - - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail. - - Then output the code in a single code block. - - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. 
You answer everything with a joke and witty replies."""], - ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."], - ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."], - ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."], - ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."], - ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."], - ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."], - ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."], - ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."], - ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."], - ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."], - ["You are a helpful assistant that provides detailed and accurate information."], - ["You are an assistant that speaks like Shakespeare."], - ["You are a friendly assistant who uses casual language and humor."], - ["You are a financial advisor who gives expert advice on investments and budgeting."], - ["You are a health and fitness expert who provides advice on nutrition and exercise."], - ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."], - ["You are a movie critic who shares insightful opinions on films and their themes."], - ["You are a history enthusiast who loves to discuss historical events and figures."], - ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."], - ["You are an AI poet who can compose creative and evocative poems on any given topic."],], - inputs = system_msg,) - -demo.queue(max_size=99, concurrency_count=20).launch(debug=True) \ No newline at end of file diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/server.py b/spaces/XzJosh/yoyo-Bert-VITS2/server.py deleted file mode 100644 index c736ca4f95fec853950eef6654ef79856beffc0a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/server.py +++ /dev/null @@ -1,123 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config['JSON_AS_ASCII'] = False -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = 
torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w,length_scale,sid): - bert, phones, tones, lang_ids = get_text(text,"ZH", hps,) - with torch.no_grad(): - x_tst=phones.to(dev).unsqueeze(0) - tones=tones.to(dev).unsqueeze(0) - lang_ids=lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids,bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return audio - -def replace_punctuation(text, i=2): - punctuation = ",。?!" - for char in punctuation: - text = text.replace(char, char * i) - return text - -def wav2(i, o, format): - inp = avopen(i, 'rb') - out = avopen(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev='cuda' -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None,skip_optimizer=True) - -@app.route("/",methods=['GET','POST']) -def main(): - if request.method == 'GET': - try: - speaker = request.args.get('speaker') - text = request.args.get('text').replace("/n","") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - if length >= 2: - return "Too big length" - if len(text) >=200: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise, noise_scale_w=noisew, length_scale=length, sid=speaker) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), - mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference_main.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference_main.py deleted file mode 100644 index db6f9634bb276097eae82cac1776a76150003660..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference_main.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/32k/G_174000-Copy1.pth" 
-config_path = "configs/config.json" -svc_model = Svc(model_path, config_path) -infer_tool.mkdir(["raw", "results"]) - -# 支持多个wav文件,放在raw文件夹下 -clean_names = ["君の知らない物語-src"] -trans = [-5] # 音高调整,支持正负(半音) -spk_list = ['yunhao'] # 每次同时合成多语者音色 -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -wav_format = 'flac' # 音频输出格式 - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/Misc/torchvision_imagenet_R_50.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/Misc/torchvision_imagenet_R_50.py deleted file mode 100644 index 0d75305bcf7445b98db84b3d489a1505d2fce5af..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/Misc/torchvision_imagenet_R_50.py +++ /dev/null @@ -1,150 +0,0 @@ -""" -An example config file to train a ImageNet classifier with detectron2. -Model and dataloader both come from torchvision. -This shows how to use detectron2 as a general engine for any new models and tasks. - -To run, use the following command: - -python tools/lazyconfig_train_net.py --config-file configs/Misc/torchvision_imagenet_R_50.py \ - --num-gpus 8 dataloader.train.dataset.root=/path/to/imagenet/ - -""" - - -import torch -from torch import nn -from torch.nn import functional as F -from omegaconf import OmegaConf -import torchvision -from torchvision.transforms import transforms as T -from torchvision.models.resnet import ResNet, Bottleneck -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2.solver import WarmupParamScheduler -from detectron2.solver.build import get_default_optimizer_params -from detectron2.config import LazyCall as L -from detectron2.model_zoo import get_config -from detectron2.data.samplers import TrainingSampler, InferenceSampler -from detectron2.evaluation import DatasetEvaluator -from detectron2.utils import comm - - -""" -Note: Here we put reusable code (models, evaluation, data) together with configs just as a -proof-of-concept, to easily demonstrate what's needed to train a ImageNet classifier in detectron2. -Writing code in configs offers extreme flexibility but is often not a good engineering practice. -In practice, you might want to put code in your project and import them instead. 
-""" - - -def build_data_loader(dataset, batch_size, num_workers, training=True): - return torch.utils.data.DataLoader( - dataset, - sampler=(TrainingSampler if training else InferenceSampler)(len(dataset)), - batch_size=batch_size, - num_workers=num_workers, - pin_memory=True, - ) - - -class ClassificationNet(nn.Module): - def __init__(self, model: nn.Module): - super().__init__() - self.model = model - - @property - def device(self): - return list(self.model.parameters())[0].device - - def forward(self, inputs): - image, label = inputs - pred = self.model(image.to(self.device)) - if self.training: - label = label.to(self.device) - return F.cross_entropy(pred, label) - else: - return pred - - -class ClassificationAcc(DatasetEvaluator): - def reset(self): - self.corr = self.total = 0 - - def process(self, inputs, outputs): - image, label = inputs - self.corr += (outputs.argmax(dim=1).cpu() == label.cpu()).sum().item() - self.total += len(label) - - def evaluate(self): - all_corr_total = comm.all_gather([self.corr, self.total]) - corr = sum(x[0] for x in all_corr_total) - total = sum(x[1] for x in all_corr_total) - return {"accuracy": corr / total} - - -# --- End of code that could be in a project and be imported - - -dataloader = OmegaConf.create() -dataloader.train = L(build_data_loader)( - dataset=L(torchvision.datasets.ImageNet)( - root="/path/to/imagenet", - split="train", - transform=L(T.Compose)( - transforms=[ - L(T.RandomResizedCrop)(size=224), - L(T.RandomHorizontalFlip)(), - T.ToTensor(), - L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), - ] - ), - ), - batch_size=256 // 8, - num_workers=4, - training=True, -) - -dataloader.test = L(build_data_loader)( - dataset=L(torchvision.datasets.ImageNet)( - root="${...train.dataset.root}", - split="val", - transform=L(T.Compose)( - transforms=[ - L(T.Resize)(size=256), - L(T.CenterCrop)(size=224), - T.ToTensor(), - L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), - ] - ), - ), - batch_size=256 // 8, - num_workers=4, - training=False, -) - -dataloader.evaluator = L(ClassificationAcc)() - -model = L(ClassificationNet)( - model=(ResNet)(block=Bottleneck, layers=[3, 4, 6, 3], zero_init_residual=True) -) - - -optimizer = L(torch.optim.SGD)( - params=L(get_default_optimizer_params)(), - lr=0.1, - momentum=0.9, - weight_decay=1e-4, -) - -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01, 0.001], milestones=[30, 60, 90, 100] - ), - warmup_length=1 / 100, - warmup_factor=0.1, -) - - -train = get_config("common/train.py").train -train.init_checkpoint = None -train.max_iter = 100 * 1281167 // 256 diff --git a/spaces/YouLiXiya/Mobile-SAM/sam_extension/utils/__init__.py b/spaces/YouLiXiya/Mobile-SAM/sam_extension/utils/__init__.py deleted file mode 100644 index e1c65408d3489004412ea201d8c128c073f695e9..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/sam_extension/utils/__init__.py +++ /dev/null @@ -1,175 +0,0 @@ -import math -import cv2 -import PIL -import torch -from PIL.Image import Image -from typing import Union, Tuple, List, Optional -import numpy as np -import supervision as sv -from sklearn.decomposition import PCA - -# def add_points_tag(img: Union[Image, np.ndarray], -# point_labels: Union[List[int], np.ndarray] = None, -# point_coords: Union[List[List[int]], np.ndarray] = None, -# pil: bool = False): -# if point_labels is None or point_coords is None or \ -# not isinstance(point_labels, (List, np.ndarray)) or \ -# 
not isinstance(point_coords, (List, np.ndarray)): -# return img -# if len(point_labels) != len(point_coords): -# print('length of point_label and point_coordinate must be same!') -# return img -# if isinstance(img, Image): -# img = np.uint8(img) -# start_angle = 40 -# x = 8 -# y = 2 -# def get_point(angle, d, base): -# angle = angle / 180.0 * math.pi -# _x, _y = math.cos(angle) * d, math.sin(angle) * d -# return [base[0] + _x, base[1] - _y] -# # assert len(point_labels) == len(point_coords), '' -# for i in range(len(point_labels)): -# points = [] -# for j in range(5): -# _x, _y = math.cos(start_angle), math.sin(start_angle) -# points.append(get_point(start_angle, x, point_coords[i])) -# start_angle -= 36 -# points.append(get_point(start_angle, y, point_coords[i])) -# start_angle -= 36 -# points = np.array([points], np.int32) -# color = (255, 0, 0) if point_labels[i] == 0 else (0, 255, 0) -# cv2.fillPoly(img, points, color, cv2.LINE_AA) -# if pil: -# img = PIL.Image.fromarray(img) -# return img -def add_points_tag(img: Union[Image, np.ndarray], - point_labels: Union[List[int], np.ndarray] = None, - point_coords: Union[List[List[int]], np.ndarray] = None, - pil: bool = False): - if point_labels is None or point_coords is None or \ - not isinstance(point_labels, (List, np.ndarray)) or \ - not isinstance(point_coords, (List, np.ndarray)): - return img - if len(point_labels) != len(point_coords): - print('length of point_label and point_coordinate must be same!') - return img - if isinstance(img, Image): - img = np.array(img) - # img.flags.writeable = True - h, w = img.shape[:2] - x_start_list, x_end_list = np.where((point_coords[:, 0] - 4) > 0, point_coords[:, 0] - 4, 0), np.where((point_coords[:, 0] + 4) < w, point_coords[:, 0] + 4, w) - y_start_list, y_end_list = np.where((point_coords[:, 1] - 4) > 0, point_coords[:, 1] - 4, 0), np.where((point_coords[:, 1] + 4) < h, point_coords[:, 1] + 4, h) - for i in range(len(point_labels)): - x_start, x_end = x_start_list[i], x_end_list[i] - y_start, y_end = y_start_list[i], y_end_list[i] - label = point_labels[i] - color = [0, 255, 0] if int(label) == 1 else [255, 0, 0] - for x in range(x_start, x_end): - for y in range(y_start, y_end): - img[y, x, :] = color - if pil: - img = PIL.Image.fromarray(img) - return img -def add_boxes_tag(img: Union[Image, np.ndarray], - boxes: Union[List[List[int]], np.ndarray] = None, - pil: bool = False): - if boxes is None or not isinstance(boxes, (List, np.ndarray)): - return img - # if isinstance(boxes, np.ndarray): - # if not boxes.all(): - # return img - # else: - # if not boxes: - # return img - if isinstance(img, Image): - img = np.uint8(img) - thickness = 2 - for i in range(len(boxes)): - color = (0, 255, 0) - img = cv2.rectangle(img, (boxes[i][0], boxes[i][1]), (boxes[i][2], boxes[i][3]), color, thickness) - if pil: - img = PIL.Image.fromarray(img) - return img - -def add_prompts_tag(img: Union[Image, np.ndarray], - point_labels: Union[List[int], np.ndarray] = None, - point_coords: Union[List[List[int]], np.ndarray] = None, - boxes: Union[List[List[int]], np.ndarray] = None, - pil: bool = False): - img = add_points_tag(img, point_labels, point_coords, pil=pil) - img = add_boxes_tag(img, boxes, pil=pil) - return img - - -def get_empty_detections(): - detections = sv.Detections(xyxy=np.array([0, 0, 0, 0]).reshape(1, 4)) - detections.xyxy = None - return detections - - -def pca_feature(feature: torch.Tensor, dim: int = 3, return_np: bool = True): - pca = PCA(n_components=dim) - H, W, C = feature.shape - feature = 
feature.view(-1, C).cpu().numpy() - feature = pca.fit_transform(feature) - feature = torch.tensor(feature.reshape(H, W, dim)) - if return_np: - return feature.numpy() - else: - return feature - -def visual_feature_rgb(feature: torch.Tensor, pil:bool = True): - assert feature.ndim >= 3, 'the dim of feature must >= 3!' - if feature.ndim == 4: - feature = feature.squeeze(0) - if feature.shape[-1] != 3: - feature = pca_feature(feature, 3, False) - max_f, _ = feature.max(-1) - min_f, _ = feature.min(-1) - feature = (feature - min_f[..., None]) / (max_f[..., None] - min_f[..., None]) - feature = np.uint8((feature*255).cpu().numpy()) - if pil: - return PIL.Image.fromarray(feature) - else: - return feature - -def transform_coords(src_shape, des_shape, points = None, boxes = None): - assert points is not None or boxes is not None, 'one of points and boxes must be given!' - scale_h = des_shape[0] / src_shape[0] - scale_w = des_shape[1] / src_shape[1] - if points is not None: - new_points = np.full_like(points, 0) - new_points[:, 0] = points[:, 0] * scale_w - new_points[:, 1] = points[:, 1] * scale_h - new_points.astype(np.int64) - else: - new_points = None - if boxes is not None: - new_boxes = np.full_like(boxes, 0) - new_boxes[:, 0] = boxes[:, 0] * scale_w - new_boxes[:, 1] = boxes[:, 1] * scale_h - new_boxes[:, 2] = boxes[:, 2] * scale_w - new_boxes[:, 3] = boxes[:, 3] * scale_h - new_boxes.astype(np.int64) - else: - new_boxes = None - return new_points, new_boxes - - -def mask2greyimg(mask_list, pil=True): - grey_img_list = [] - for mask in mask_list: - if pil: - grey_img_list.append(PIL.Image.fromarray(np.uint8(mask*255))) - else: - grey_img_list.append(np.uint8(mask * 255)) - return grey_img_list -if __name__ == '__main__': - src_shape = (100,100) - des_shape = (200,200) - points = np.array([[20,20],[40,40]]) - boxes = np.array([[10,10,20,20]]) - new_points, new_boxes = transform_coords(src_shape, des_shape, points, boxes) - print(new_points, new_boxes) - diff --git a/spaces/Yuliang/ECON/lib/dataset/TestDataset.py b/spaces/Yuliang/ECON/lib/dataset/TestDataset.py deleted file mode 100644 index dc4ca3152c0ed7f38d5b64599f5d8e35e5ed65ba..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/dataset/TestDataset.py +++ /dev/null @@ -1,211 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. 
-# -# Contact: ps-license@tuebingen.mpg.de - -import logging -import warnings - -warnings.filterwarnings("ignore") -logging.getLogger("lightning").setLevel(logging.ERROR) -logging.getLogger("trimesh").setLevel(logging.ERROR) - -import glob -import os.path as osp - -import numpy as np -import torch -import torch.nn.functional as F -from PIL import ImageFile -from termcolor import colored -from torchvision import transforms -from torchvision.models import detection - -from lib.common.config import cfg -from lib.common.imutils import process_image -from lib.common.render import Render -from lib.common.train_util import Format -from lib.dataset.mesh_util import SMPLX, get_visibility -from lib.pixielib.models.SMPLX import SMPLX as PIXIE_SMPLX -from lib.pixielib.pixie import PIXIE -from lib.pixielib.utils.config import cfg as pixie_cfg -from lib.pymafx.core import path_config -from lib.pymafx.models import pymaf_net - -ImageFile.LOAD_TRUNCATED_IMAGES = True - - -class TestDataset: - def __init__(self, cfg, device): - - self.image_path = cfg["image_path"] - self.use_seg = cfg["use_seg"] - self.hps_type = cfg["hps_type"] - self.smpl_type = "smplx" - self.smpl_gender = "neutral" - self.vol_res = cfg["vol_res"] - self.single = cfg["single"] - - self.device = device - - self.subject_list = [self.image_path] - - # smpl related - self.smpl_data = SMPLX() - - if self.hps_type == "pymafx": - self.hps = pymaf_net(path_config.SMPL_MEAN_PARAMS, pretrained=True).to(self.device) - self.hps.load_state_dict(torch.load(path_config.CHECKPOINT_FILE)["model"], strict=True) - self.hps.eval() - pixie_cfg.merge_from_list(["model.n_shape", 10, "model.n_exp", 10]) - elif self.hps_type == "pixie": - self.hps = PIXIE(config=pixie_cfg, device=self.device) - - self.smpl_model = PIXIE_SMPLX(pixie_cfg.model).to(self.device) - - self.detector = detection.maskrcnn_resnet50_fpn( - weights=detection.MaskRCNN_ResNet50_FPN_V2_Weights - ) - self.detector.eval() - - print( - colored( - f"SMPL-X estimate with {Format.start} {self.hps_type.upper()} {Format.end}", "green" - ) - ) - - self.render = Render(size=512, device=self.device) - - def __len__(self): - return len(self.subject_list) - - def compute_vis_cmap(self, smpl_verts, smpl_faces): - - (xy, z) = torch.as_tensor(smpl_verts).split([2, 1], dim=-1) - smpl_vis = get_visibility(xy, z, - torch.as_tensor(smpl_faces).long()[:, :, - [0, 2, 1]]).unsqueeze(-1) - smpl_cmap = self.smpl_data.cmap_smpl_vids(self.smpl_type).unsqueeze(0) - - return { - "smpl_vis": smpl_vis.to(self.device), - "smpl_cmap": smpl_cmap.to(self.device), - "smpl_verts": smpl_verts, - } - - def depth_to_voxel(self, data_dict): - - data_dict["depth_F"] = transforms.Resize(self.vol_res)(data_dict["depth_F"]) - data_dict["depth_B"] = transforms.Resize(self.vol_res)(data_dict["depth_B"]) - - depth_mask = (~torch.isnan(data_dict['depth_F'])) - depth_FB = torch.cat([data_dict['depth_F'], data_dict['depth_B']], dim=0) - depth_FB[:, ~depth_mask[0]] = 0. - - # Important: index_long = depth_value - 1 - index_z = (((depth_FB + 1.) 
* 0.5 * self.vol_res) - 1).clip(0, self.vol_res - - 1).permute(1, 2, 0) - index_z_ceil = torch.ceil(index_z).long() - index_z_floor = torch.floor(index_z).long() - index_z_frac = torch.frac(index_z) - - index_mask = index_z[..., 0] == torch.tensor(self.vol_res * 0.5 - 1).long() - voxels = F.one_hot(index_z_ceil[..., 0], self.vol_res) * index_z_frac[..., 0] + \ - F.one_hot(index_z_floor[..., 0], self.vol_res) * (1.0-index_z_frac[..., 0]) + \ - F.one_hot(index_z_ceil[..., 1], self.vol_res) * index_z_frac[..., 1]+ \ - F.one_hot(index_z_floor[..., 1], self.vol_res) * (1.0 - index_z_frac[..., 1]) - - voxels[index_mask] *= 0 - voxels = torch.flip(voxels, [2]).permute(2, 0, 1).float() #[x-2, y-0, z-1] - - return { - "depth_voxels": voxels.flip([ - 0, - ]).unsqueeze(0).to(self.device), - } - - def __getitem__(self, index): - - img_path = self.subject_list[index] - img_name = img_path.split("/")[-1].rsplit(".", 1)[0] - - arr_dict = process_image(img_path, self.hps_type, self.single, 512, self.detector) - arr_dict.update({"name": img_name}) - - with torch.no_grad(): - if self.hps_type == "pixie": - preds_dict = self.hps.forward(arr_dict["img_hps"].to(self.device)) - elif self.hps_type == 'pymafx': - batch = {k: v.to(self.device) for k, v in arr_dict["img_pymafx"].items()} - preds_dict, _ = self.hps.forward(batch) - - arr_dict["smpl_faces"] = ( - torch.as_tensor(self.smpl_data.smplx_faces.astype(np.int64)).unsqueeze(0).long().to( - self.device - ) - ) - arr_dict["type"] = self.smpl_type - - if self.hps_type == "pymafx": - output = preds_dict["mesh_out"][-1] - scale, tranX, tranY = output["theta"][:, :3].split(1, dim=1) - arr_dict["betas"] = output["pred_shape"] - arr_dict["body_pose"] = output["rotmat"][:, 1:22] - arr_dict["global_orient"] = output["rotmat"][:, 0:1] - arr_dict["smpl_verts"] = output["smplx_verts"] - arr_dict["left_hand_pose"] = output["pred_lhand_rotmat"] - arr_dict["right_hand_pose"] = output["pred_rhand_rotmat"] - arr_dict['jaw_pose'] = output['pred_face_rotmat'][:, 0:1] - arr_dict["exp"] = output["pred_exp"] - # 1.2009, 0.0013, 0.3954 - - elif self.hps_type == "pixie": - arr_dict.update(preds_dict) - arr_dict["global_orient"] = preds_dict["global_pose"] - arr_dict["betas"] = preds_dict["shape"] #200 - arr_dict["smpl_verts"] = preds_dict["vertices"] - scale, tranX, tranY = preds_dict["cam"].split(1, dim=1) - # 1.1435, 0.0128, 0.3520 - - arr_dict["scale"] = scale.unsqueeze(1) - arr_dict["trans"] = ( - torch.cat([tranX, tranY, torch.zeros_like(tranX)], - dim=1).unsqueeze(1).to(self.device).float() - ) - - # data_dict info (key-shape): - # scale, tranX, tranY - tensor.float - # betas - [1,10] / [1, 200] - # body_pose - [1, 23, 3, 3] / [1, 21, 3, 3] - # global_orient - [1, 1, 3, 3] - # smpl_verts - [1, 6890, 3] / [1, 10475, 3] - - # from rot_mat to rot_6d for better optimization - N_body, N_pose = arr_dict["body_pose"].shape[:2] - arr_dict["body_pose"] = arr_dict["body_pose"][:, :, :, :2].reshape(N_body, N_pose, -1) - arr_dict["global_orient"] = arr_dict["global_orient"][:, :, :, :2].reshape(N_body, 1, -1) - - return arr_dict - - def render_normal(self, verts, faces): - - # render optimized mesh (normal, T_normal, image [-1,1]) - self.render.load_meshes(verts, faces) - return self.render.get_image(type="rgb") - - def render_depth(self, verts, faces): - - # render optimized mesh (normal, T_normal, image [-1,1]) - self.render.load_meshes(verts, faces) - return self.render.get_image(type="depth") diff --git a/spaces/abhi1nandy2/AI_Music_Team/README.md 
b/spaces/abhi1nandy2/AI_Music_Team/README.md deleted file mode 100644 index dfc9c23bc30c77aae01297cd045cc32cbb2416f0..0000000000000000000000000000000000000000 --- a/spaces/abhi1nandy2/AI_Music_Team/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI Music Team -emoji: 🔥 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/mask_rcnn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/mask_rcnn.py deleted file mode 100644 index c15a7733170e059d2825138b3812319915b7cad6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/mask_rcnn.py +++ /dev/null @@ -1,24 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class MaskRCNN(TwoStageDetector): - """Implementation of `Mask R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(MaskRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/htc_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/htc_roi_head.py deleted file mode 100644 index 5b5c2ec3bc9d579061fbd89f8b320e6e59909143..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/htc_roi_head.py +++ /dev/null @@ -1,589 +0,0 @@ -import torch -import torch.nn.functional as F - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class HybridTaskCascadeRoIHead(CascadeRoIHead): - """Hybrid task cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1901.07518 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - semantic_roi_extractor=None, - semantic_head=None, - semantic_fusion=('bbox', 'mask'), - interleaved=True, - mask_info_flow=True, - **kwargs): - super(HybridTaskCascadeRoIHead, - self).__init__(num_stages, stage_loss_weights, **kwargs) - assert self.with_bbox and self.with_mask - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = build_roi_extractor( - semantic_roi_extractor) - self.semantic_head = build_head(semantic_head) - - self.semantic_fusion = semantic_fusion - self.interleaved = interleaved - self.mask_info_flow = mask_info_flow - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - super(HybridTaskCascadeRoIHead, self).init_weights(pretrained) - if self.with_semantic: - self.semantic_head.init_weights() - - @property - def with_semantic(self): - """bool: whether the head has semantic head""" - if hasattr(self, 'semantic_head') and self.semantic_head is not None: - return True - else: - return False - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - outs = () - # semantic head - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - # bbox heads - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - outs = outs + (mask_pred, ) - return outs - - def _bbox_forward_train(self, - stage, - x, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for box head in training.""" - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, x, rois, semantic_feat=semantic_feat) - - bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg) - loss_bbox = bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, - rois=rois, - bbox_targets=bbox_targets, - ) - return bbox_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for mask head in - training.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - pos_rois) - - # semantic feature fusion - # element-wise sum for original features and pooled semantic features - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - pos_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - - # mask information flow - # forward all previous mask heads to obtain last_feat, and fuse it - # with the normal mask feature - if self.mask_info_flow: - last_feat = None - for i in range(stage): - last_feat = self.mask_head[i]( - mask_feats, last_feat, return_logits=False) - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - else: - mask_pred = mask_head(mask_feats, return_feat=False) - - mask_targets = mask_head.get_targets(sampling_results, gt_masks, - rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in 
sampling_results]) - loss_mask = mask_head.loss(mask_pred, mask_targets, pos_labels) - - mask_results = dict(loss_mask=loss_mask) - return mask_results - - def _bbox_forward(self, stage, x, rois, semantic_feat=None): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor( - x[:len(bbox_roi_extractor.featmap_strides)], rois) - if self.with_semantic and 'bbox' in self.semantic_fusion: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = F.adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def _mask_forward_test(self, stage, x, bboxes, semantic_feat=None): - """Mask head forward function for testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_rois = bbox2roi([bboxes]) - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.mask_info_flow: - last_feat = None - last_pred = None - for i in range(stage): - mask_pred, last_feat = self.mask_head[i](mask_feats, last_feat) - if last_pred is not None: - mask_pred = mask_pred + last_pred - last_pred = mask_pred - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - if last_pred is not None: - mask_pred = mask_pred + last_pred - else: - mask_pred = mask_head(mask_feats) - return mask_pred - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - proposal_list (list[Tensors]): list of region proposals. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None, list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None, Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - gt_semantic_seg (None, list[Tensor]): semantic segmentation masks - used if the architecture supports semantic segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # semantic segmentation part - # 2 outputs: segmentation prediction and embedded features - losses = dict() - if self.with_semantic: - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign(proposal_list[j], - gt_bboxes[j], - gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = \ - self._bbox_forward_train( - i, x, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg, semantic_feat) - roi_labels = bbox_results['bbox_targets'][0] - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - # interleaved execution: use regressed bboxes by the box branch - # to train the mask branch - if self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - # re-assign and sample 512 RoIs from 512 RoIs - sampling_results = [] - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], - gt_bboxes_ignore[j], gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - semantic_feat) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes (same as Cascade R-CNN) - if i < self.num_stages - 1 and not self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation.""" - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = 
self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score] - rois = torch.cat([ - bbox_head.regress_by_class(rois[i], bbox_label[i], - bbox_pred[i], img_metas[i]) - for i in range(num_imgs) - ]) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - bbox_result = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_result - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - aug_masks = [] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - - num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes) - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_bbox_per_img, 0) - aug_masks.append( - [mask.sigmoid().cpu().numpy() for mask in mask_pred]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_mask, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, img_feats, proposal_list, img_metas, rescale=False): - 
"""Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - if self.with_semantic: - semantic_feats = [ - self.semantic_head(feat)[1] for feat in img_feats - ] - else: - semantic_feats = [None] * len(img_metas) - - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta, semantic in zip(img_feats, img_metas, semantic_feats): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'].argmax(dim=1) - rois = bbox_head.regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[[] - for _ in range(self.mask_head[-1].num_classes)] - ] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta, semantic in zip(img_feats, img_metas, - semantic_feats): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor[-1]( - x[:len(self.mask_roi_extractor[-1].featmap_strides)], - mask_rois) - if self.with_semantic: - semantic_feat = semantic - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[ - -2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head( - mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - aug_masks.append(mask_pred.sigmoid().cpu().numpy()) - aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] diff --git 
a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/transformer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/transformer.py deleted file mode 100644 index 83870eead42f4b0bf73c9e19248d5512d3d044c5..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/transformer.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (Linear, build_activation_layer, build_norm_layer, - xavier_init) - -from .builder import TRANSFORMER - - -class MultiheadAttention(nn.Module): - """A warpper for torch.nn.MultiheadAttention. - - This module implements MultiheadAttention with residual connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - dropout (float): A Dropout layer on attn_output_weights. Default 0.0. - """ - - def __init__(self, embed_dims, num_heads, dropout=0.0): - super(MultiheadAttention, self).__init__() - assert embed_dims % num_heads == 0, 'embed_dims must be ' \ - f'divisible by num_heads. got {embed_dims} and {num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - self.dropout = dropout - self.attn = nn.MultiheadAttention(embed_dims, num_heads, dropout) - self.dropout = nn.Dropout(dropout) - - def forward(self, - x, - key=None, - value=None, - residual=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None): - """Forward function for `MultiheadAttention`. - - Args: - x (Tensor): The input query with shape [num_query, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_key, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - Default None. If None, the `query` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Default None. - If None, the `key` will be used. - residual (Tensor): The tensor used for addition, with the - same shape as `x`. Default None. If None, `x` will be used. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. Default None. If not None, it will - be added to `x` before forward function. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Default None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. - attn_mask (Tensor): ByteTensor mask with shape [num_query, - num_key]. Same in `nn.MultiheadAttention.forward`. - Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `nn.MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. 
- """ - query = x - if key is None: - key = query - if value is None: - value = key - if residual is None: - residual = x - if key_pos is None: - if query_pos is not None and key is not None: - if query_pos.shape == key.shape: - key_pos = query_pos - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - out = self.attn( - query, - key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'dropout={self.dropout})' - return repr_str - - -class FFN(nn.Module): - """Implements feed-forward networks (FFNs) with residual connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Defaults to 2. - act_cfg (dict, optional): The activation config for FFNs. - dropout (float, optional): Probability of an element to be - zeroed. Default 0.0. - add_residual (bool, optional): Add resudual connection. - Defaults to True. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - dropout=0.0, - add_residual=True): - super(FFN, self).__init__() - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.dropout = dropout - self.activate = build_activation_layer(act_cfg) - - layers = nn.ModuleList() - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - nn.Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(dropout))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - self.layers = nn.Sequential(*layers) - self.dropout = nn.Dropout(dropout) - self.add_residual = add_residual - - def forward(self, x, residual=None): - """Forward function for `FFN`.""" - out = self.layers(x) - if not self.add_residual: - return out - if residual is None: - residual = x - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'add_residual={self.add_residual})' - return repr_str - - -class TransformerEncoderLayer(nn.Module): - """Implements one encoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as `FFN`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - dropout (float): Probability of an element to be zeroed. Default 0.0. - order (tuple[str]): The order for encoder layer. Valid examples are - ('selfattn', 'norm', 'ffn', 'norm') and ('norm', 'selfattn', - 'norm', 'ffn'). Default ('selfattn', 'norm', 'ffn', 'norm'). - act_cfg (dict): The activation config for FFNs. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. 
- num_fcs (int): The number of fully-connected layers for FFNs. - Default 2. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoderLayer`. - - Args: - x (Tensor): The input query with shape [num_key, bs, - embed_dims]. Same in `MultiheadAttention.forward`. - pos (Tensor): The positional encoding for query. Default None. - Same as `query_pos` in `MultiheadAttention.forward`. - attn_mask (Tensor): ByteTensor mask with shape [num_key, - num_key]. Same in `MultiheadAttention.forward`. Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_key, bs, embed_dims]. - """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - # self attention - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos=pos, - key_pos=pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoderLayer(nn.Module): - """Implements one decoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as - `TransformerEncoderLayer`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): The order for decoder layer. Valid examples are - ('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', 'norm') and - ('norm', 'selfattn', 'norm', 'multiheadattn', 'norm', 'ffn'). - Default the former. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs. 
- """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerDecoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.multihead_attn = MultiheadAttention(embed_dims, num_heads, - dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - # 3 norm layers in official DETR's TransformerDecoderLayer - for _ in range(3): - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoderLayer`. - - Args: - x (Tensor): Input query with shape [num_query, bs, embed_dims]. - memory (Tensor): Tensor got from `TransformerEncoder`, with shape - [num_key, bs, embed_dims]. - memory_pos (Tensor): The positional encoding for `memory`. Default - None. Same as `key_pos` in `MultiheadAttention.forward`. - query_pos (Tensor): The positional encoding for `query`. Default - None. Same as `query_pos` in `MultiheadAttention.forward`. - memory_attn_mask (Tensor): ByteTensor mask for `memory`, with - shape [num_key, num_key]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - target_attn_mask (Tensor): ByteTensor mask for `x`, with shape - [num_query, num_query]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - memory_key_padding_mask (Tensor): ByteTensor for `memory`, with - shape [bs, num_key]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - target_key_padding_mask (Tensor): ByteTensor for `x`, with shape - [bs, num_query]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. 
- """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=query_pos, - attn_mask=target_attn_mask, - key_padding_mask=target_key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'multiheadattn': - query = x - key = value = memory - x = self.multihead_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=memory_pos, - attn_mask=memory_attn_mask, - key_padding_mask=memory_key_padding_mask) - inp_residual = x - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerEncoder(nn.Module): - """Implements the encoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerEncoderLayer`. - embed_dims (int): Same as `TransformerEncoderLayer`. - num_heads (int): Same as `TransformerEncoderLayer`. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerEncoderLayer`. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerEncoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerEncoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerEncoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, - embed_dims)[1] if self.pre_norm else None - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoder`. - - Args: - x (Tensor): Input query. Same in `TransformerEncoderLayer.forward`. - pos (Tensor): Positional encoding for query. Default None. - Same in `TransformerEncoderLayer.forward`. - attn_mask (Tensor): ByteTensor attention mask. Default None. - Same in `TransformerEncoderLayer.forward`. - key_padding_mask (Tensor): Same in - `TransformerEncoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_key, bs, embed_dims]. 
- """ - for layer in self.layers: - x = layer(x, pos, attn_mask, key_padding_mask) - if self.norm is not None: - x = self.norm(x) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoder(nn.Module): - """Implements the decoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerDecoderLayer`. - embed_dims (int): Same as `TransformerDecoderLayer`. - num_heads (int): Same as `TransformerDecoderLayer`. - feedforward_channels (int): Same as `TransformerDecoderLayer`. - dropout (float): Same as `TransformerDecoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerDecoderLayer`. - act_cfg (dict): Same as `TransformerDecoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerDecoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerDecoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - return_intermediate=False): - super(TransformerDecoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.return_intermediate = return_intermediate - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerDecoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoder`. - - Args: - x (Tensor): Input query. Same in `TransformerDecoderLayer.forward`. - memory (Tensor): Same in `TransformerDecoderLayer.forward`. - memory_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - query_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - memory_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - memory_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_query, bs, embed_dims]. 
- """ - intermediate = [] - for layer in self.layers: - x = layer(x, memory, memory_pos, query_pos, memory_attn_mask, - target_attn_mask, memory_key_padding_mask, - target_key_padding_mask) - if self.return_intermediate: - intermediate.append(self.norm(x)) - if self.norm is not None: - x = self.norm(x) - if self.return_intermediate: - intermediate.pop() - intermediate.append(x) - if self.return_intermediate: - return torch.stack(intermediate) - return x.unsqueeze(0) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'return_intermediate={self.return_intermediate})' - return repr_str - - -@TRANSFORMER.register_module() -class Transformer(nn.Module): - """Implements the DETR transformer. - - Following the official DETR implementation, this module copy-paste - from torch.nn.Transformer with modifications: - - * positional encodings are passed in MultiheadAttention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - num_encoder_layers (int): Number of `TransformerEncoderLayer`. - num_decoder_layers (int): Number of `TransformerDecoderLayer`. - feedforward_channels (int): The hidden dimension for FFNs used in both - encoder and decoder. - dropout (float): Probability of an element to be zeroed. Default 0.0. - act_cfg (dict): Activation config for FFNs used in both encoder - and decoder. Default ReLU. - norm_cfg (dict): Config dict for normalization used in both encoder - and decoder. Default layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs, which is - used for both encoder and decoder. - pre_norm (bool): Whether the normalization layer is ordered - first in the encoder and decoder. Default False. - return_intermediate_dec (bool): Whether to return the intermediate - output from each TransformerDecoderLayer or only the last - TransformerDecoderLayer. Default False. If False, the returned - `hs` has shape [num_decoder_layers, bs, num_query, embed_dims]. - If True, the returned `hs` will have shape [1, bs, num_query, - embed_dims]. 
- """ - - def __init__(self, - embed_dims=512, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.0, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=False): - super(Transformer, self).__init__() - self.embed_dims = embed_dims - self.num_heads = num_heads - self.num_encoder_layers = num_encoder_layers - self.num_decoder_layers = num_decoder_layers - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = pre_norm - self.return_intermediate_dec = return_intermediate_dec - if self.pre_norm: - encoder_order = ('norm', 'selfattn', 'norm', 'ffn') - decoder_order = ('norm', 'selfattn', 'norm', 'multiheadattn', - 'norm', 'ffn') - else: - encoder_order = ('selfattn', 'norm', 'ffn', 'norm') - decoder_order = ('selfattn', 'norm', 'multiheadattn', 'norm', - 'ffn', 'norm') - self.encoder = TransformerEncoder(num_encoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, encoder_order, act_cfg, - norm_cfg, num_fcs) - self.decoder = TransformerDecoder(num_decoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, decoder_order, act_cfg, - norm_cfg, num_fcs, - return_intermediate_dec) - - def init_weights(self, distribution='uniform'): - """Initialize the transformer weights.""" - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution=distribution) - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. 
- """ - bs, c, h, w = x.shape - x = x.flatten(2).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.flatten(1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - x, pos=pos_embed, attn_mask=None, key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - target, - memory, - memory_pos=pos_embed, - query_pos=query_embed, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=mask, - target_key_padding_mask=None) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'num_encoder_layers={self.num_encoder_layers}, ' - repr_str += f'num_decoder_layers={self.num_decoder_layers}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'pre_norm={self.pre_norm}, ' - repr_str += f'return_intermediate_dec={self.return_intermediate_dec})' - return repr_str - - -@TRANSFORMER.register_module() -class DynamicConv(nn.Module): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')): - super(DynamicConv, self).__init__() - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. 
- - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - num_proposals = param_feature.size(0) - input_feature = input_feature.view(num_proposals, self.in_channels, - -1).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(in_channels={self.in_channels}, ' - repr_str += f'feat_channels={self.feat_channels}, ' - repr_str += f'out_channels={self.out_channels_raw}, ' - repr_str += f'input_feat_shape={self.input_feat_shape}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg})' - return repr_str diff --git a/spaces/abhisheky127/Fold_TransactionClassification/app.py b/spaces/abhisheky127/Fold_TransactionClassification/app.py deleted file mode 100644 index c08c9b55c0a8ed6a9c5a1989bb78b31736a0c86d..0000000000000000000000000000000000000000 --- a/spaces/abhisheky127/Fold_TransactionClassification/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import gradio as gr -import re -from transformers import pipeline -# from googlesearch import search -import requests -from bs4 import BeautifulSoup - -def get_google_description(keyword): - query = keyword - results = search(query, num_results=1, lang='en') - - for result in results: - description = get_description_from_url(result) - if description: - return description - - return keyword - -def get_description_from_url(url): - response = requests.get(url, timeout=10) - soup = BeautifulSoup(response.text, 'html.parser') - description_tag = soup.find('meta', {'name': 'description'}) - - if description_tag: - return description_tag.get('content') - - return None - - -title = "Fold: Contextual Tag Recommendation System" -description = "powered by bart-large-mnli, made by @abhisheky127" - -classifier = pipeline("zero-shot-classification", - model="facebook/bart-large-mnli") - - - - - -#define a function to process your input and output -def zero_shot(doc, candidates): - given_labels = candidates.split(",") - given_labels = list(map(str.strip, given_labels)) - doc = preprocess(doc) - print(doc) - dictionary = classifier(doc, given_labels) - labels = dictionary['labels'] - scores = dictionary['scores'] - return dict(zip(labels, scores)) - -def preprocess(text): - # Remove digits - cleaned_text = 
re.sub(r'\d', '', text) - - # Remove special characters except spaces and letters - cleaned_text = re.sub(r'[^a-zA-Z\s]', ' ', cleaned_text) - - # Remove extra spaces - cleaned_text = re.sub(r'\s+', ' ', cleaned_text).strip() - - # Convert to uppercase - cleaned_text = cleaned_text.upper() - - # Remove unwanted words - words_to_remove = ["MPS", "POS", "BIL", "ONL", "BANGALORE", "PVT", "LTD", "INDIA", "LT", "XXXXXXXXXXXX"] - cleaned_text = " ".join([word for word in cleaned_text.split() if word not in words_to_remove]) - - # Convert to lowercase - cleaned_text = cleaned_text.lower() - - # cleaned_text = get_google_description(cleaned_text) - - return cleaned_text - - -#create input and output objects -#input object1 -input1 = gr.Textbox(label="Text") - -#input object 2 -input2 = gr.Textbox(label="Labels") - -#output object -output = gr.Label(label="Output") - -#example object -transactions_and_tags = [ - ["MPS/TRUFFLES/202303261700/034587/Bangalore", "Medical, Food, Shopping, Subscription, Travel"], - ["MPS/TACO BELL/202304012247/108300/BANGALORE", "Medical, Food, Shopping, Subscription, Travel"], - ["POS XXXXXXXXXXXX0001 APOLLO PHARMACY", "Medical, Food, Shopping, Subscription, Travel"], - ["BIL/ONL/000471093694/1MG Techno/X7ZRUSVLURFQZO", "Medical, Food, Shopping, Subscription, Travel"], - ["POS XXXXXXXXXXXX1111 DECATHLON SPORTS", "Medical, Food, Shopping, Subscription, Travel"], - ["POS XXXXXXXXXXXX1111 WWW AMAZON IN", "Medical, Food, Shopping, Subscription, Travel"], - ["ME DC SI XXXXXXXXXXXX1111 SPOTIFY SI", "Medical, Food, Shopping, Subscription, Travel"], - ["POS/NETFLIX/1140920002/100623/17:25", "Medical, Food, Shopping, Subscription, Travel"], - ["POS XXXXXXXXXXXX1110 MAKEMYTRIP INDIA", "Medical, Food, Shopping, Subscription, Travel"], - ["BIL/ONL/000691178015/IRCTC Serv/XZZBX91LTCY1AZ", "Medical, Food, Shopping, Subscription, Travel"] -] - -#create interface -gui = gr.Interface(title=title, - description=description, - fn=zero_shot, - inputs=[input1, input2], - outputs=[output], - examples=transactions_and_tags - ) - -#display the interface -gui.launch(debug=True) \ No newline at end of file diff --git a/spaces/adirik/maskformer-demo/README.md b/spaces/adirik/maskformer-demo/README.md deleted file mode 100644 index 334b2938846cfac8dda9c779b45105e74d1fc933..0000000000000000000000000000000000000000 --- a/spaces/adirik/maskformer-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MaskFormer Demo -emoji: 🔥 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/aiEDUcurriculum/introtoAI-clubs-project/app.py b/spaces/aiEDUcurriculum/introtoAI-clubs-project/app.py deleted file mode 100644 index bb77c577295bc94f812f8c8ab5c73cdd5da6d601..0000000000000000000000000000000000000000 --- a/spaces/aiEDUcurriculum/introtoAI-clubs-project/app.py +++ /dev/null @@ -1,173 +0,0 @@ -### ----------------------------- ### -### libraries ### -### ----------------------------- ### - -import gradio as gr - -import pandas as pd -import numpy as np -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - - -### ------------------------------ ### -### data transformation ### -### ------------------------------ ### - -# load dataset -uncleaned_data = pd.read_csv('data.csv') - -# remove timestamp from dataset 
(always first column) -uncleaned_data = uncleaned_data.iloc[: , 1:] -data = pd.DataFrame() - -# keep track of which columns are categorical and what -# those columns' value mappings are -# structure: {colname1: {...}, colname2: {...} } -cat_value_dicts = {} -final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1] - -# for each column... -for (colname, colval) in uncleaned_data.iteritems(): - - # check if col is already a number; if so, add col directly - # to new dataframe and skip to next column - if isinstance(colval.values[0], (np.integer, float)): - data[colname] = uncleaned_data[colname].copy() - continue - - # structure: {0: "lilac", 1: "blue", ...} - new_dict = {} - val = 0 # first index per column - transformed_col_vals = [] # new numeric datapoints - - # if not, for each item in that column... - for (row, item) in enumerate(colval.values): - - # if item is not in this col's dict... - if item not in new_dict: - new_dict[item] = val - val += 1 - - # then add numerical value to transformed dataframe - transformed_col_vals.append(new_dict[item]) - - # reverse dictionary only for final col (0, 1) => (vals) - if colname == final_colname: - new_dict = {value : key for (key, value) in new_dict.items()} - - cat_value_dicts[colname] = new_dict - data[colname] = transformed_col_vals - - -### -------------------------------- ### -### model training ### -### -------------------------------- ### - -# select features and predicton; automatically selects last column as prediction -cols = len(data.columns) -num_features = cols - 1 -x = data.iloc[: , :num_features] -y = data.iloc[: , num_features:] - -# split data into training and testing sets -x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25) - -# instantiate the model (using default parameters) -model = LogisticRegression() -model.fit(x_train, y_train.values.ravel()) -y_pred = model.predict(x_test) - - -### -------------------------------- ### -### article generation ### -### -------------------------------- ### -# borrow file reading function from reader.py - -def get_feat(): - feats = [abs(x) for x in model.coef_[0]] - max_val = max(feats) - idx = feats.index(max_val) - return data.columns[idx] - -acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%" -most_imp_feat = get_feat() -# info = get_article(acc, most_imp_feat) - - - -### ------------------------------- ### -### interface creation ### -### ------------------------------- ### - - -# predictor for generic number of features -def general_predictor(*args): - features = [] - - # transform categorical input - for colname, arg in zip(data.columns, args): - if (colname in cat_value_dicts): - features.append(cat_value_dicts[colname][arg]) - else: - features.append(arg) - - # predict single datapoint - new_input = [features] - result = model.predict(new_input) - return cat_value_dicts[final_colname][result[0]] - -# add data labels to replace those lost via star-args - - -block = gr.Blocks() - -with open('info.md') as f: - with block: - gr.Markdown(f.readline()) - gr.Markdown('Take the quiz to get a personalized recommendation using AI.') - - with gr.Row(): - with gr.Box(): - inputls = [] - for colname in data.columns: - # skip last column - if colname == final_colname: - continue - - # access categories dict if data is categorical - # otherwise, just use a number input - if colname in cat_value_dicts: - radio_options = list(cat_value_dicts[colname].keys()) - inputls.append(gr.Dropdown(choices=radio_options, type="value", label=colname)) - 
else: - # add numerical input - inputls.append(gr.Number(label=colname)) - gr.Markdown(" ") -
- submit = gr.Button("Click to see your personalized result!", variant="primary") - gr.Markdown(" ") - output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here") -
- submit.click(fn=general_predictor, inputs=inputls, outputs=output) - gr.Markdown(" ") -
- with gr.Row(): - with gr.Box(): - gr.Markdown(f"Accuracy: {acc}") - with gr.Box(): - gr.Markdown(f"Most important feature: {most_imp_feat}") - - gr.Markdown("
          ") - - with gr.Box(): - gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''') - - with gr.Box(): - with open('info.md') as f: - f.readline() - gr.Markdown(f.read()) - -# show the interface -block.launch() \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/tools/evaluate_coco_boundary_ap.py b/spaces/akhaliq/Mask2Former/tools/evaluate_coco_boundary_ap.py deleted file mode 100644 index 1e96b5d2f7d3f0987098904e8f5d97854906d58a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/tools/evaluate_coco_boundary_ap.py +++ /dev/null @@ -1,49 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Modified by Bowen Cheng from: https://github.com/bowenc0221/boundary-iou-api/blob/master/tools/coco_instance_evaluation.py - -""" -Evaluation for COCO val2017: -python ./tools/coco_instance_evaluation.py \ - --gt-json-file COCO_GT_JSON \ - --dt-json-file COCO_DT_JSON -""" -import argparse -import json - -from boundary_iou.coco_instance_api.coco import COCO -from boundary_iou.coco_instance_api.cocoeval import COCOeval - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--gt-json-file", default="") - parser.add_argument("--dt-json-file", default="") - parser.add_argument("--iou-type", default="boundary") - parser.add_argument("--dilation-ratio", default="0.020", type=float) - args = parser.parse_args() - print(args) - - annFile = args.gt_json_file - resFile = args.dt_json_file - dilation_ratio = args.dilation_ratio - if args.iou_type == "boundary": - get_boundary = True - else: - get_boundary = False - cocoGt = COCO(annFile, get_boundary=get_boundary, dilation_ratio=dilation_ratio) - - # remove box predictions - resFile = json.load(open(resFile)) - for c in resFile: - c.pop("bbox", None) - - cocoDt = cocoGt.loadRes(resFile) - cocoEval = COCOeval(cocoGt, cocoDt, iouType=args.iou_type, dilation_ratio=dilation_ratio) - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - - -if __name__ == '__main__': - main() diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/musdb18.py b/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/musdb18.py deleted file mode 100644 index 37a8a65b6005efa5671d05044593d3805c289897..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/musdb18.py +++ /dev/null @@ -1,485 +0,0 @@ -import logging -import os -import time -from typing import Dict, List, NoReturn - -import librosa -import musdb -import museval -import numpy as np -import pytorch_lightning as pl -import torch.nn as nn -from pytorch_lightning.utilities import rank_zero_only - -from bytesep.callbacks.base_callbacks import SaveCheckpointsCallback -from bytesep.dataset_creation.pack_audios_to_hdf5s.musdb18 import preprocess_audio -from bytesep.inference import Separator -from bytesep.utils import StatisticsContainer, read_yaml - - -def get_musdb18_callbacks( - config_yaml: str, - workspace: str, - checkpoints_dir: str, - statistics_path: str, - logger: pl.loggers.TensorBoardLogger, - model: nn.Module, - evaluate_device: str, -) -> List[pl.Callback]: - r"""Get MUSDB18 callbacks of a config yaml. 
- - Args: - config_yaml: str - workspace: str - checkpoints_dir: str, directory to save checkpoints - statistics_dir: str, directory to save statistics - logger: pl.loggers.TensorBoardLogger - model: nn.Module - evaluate_device: str - - Return: - callbacks: List[pl.Callback] - """ - configs = read_yaml(config_yaml) - task_name = configs['task_name'] - evaluation_callback = configs['train']['evaluation_callback'] - target_source_types = configs['train']['target_source_types'] - input_channels = configs['train']['channels'] - evaluation_audios_dir = os.path.join(workspace, "evaluation_audios", task_name) - test_segment_seconds = configs['evaluate']['segment_seconds'] - sample_rate = configs['train']['sample_rate'] - test_segment_samples = int(test_segment_seconds * sample_rate) - test_batch_size = configs['evaluate']['batch_size'] - - evaluate_step_frequency = configs['train']['evaluate_step_frequency'] - save_step_frequency = configs['train']['save_step_frequency'] - - # save checkpoint callback - save_checkpoints_callback = SaveCheckpointsCallback( - model=model, - checkpoints_dir=checkpoints_dir, - save_step_frequency=save_step_frequency, - ) - - # evaluation callback - EvaluationCallback = _get_evaluation_callback_class(evaluation_callback) - - # statistics container - statistics_container = StatisticsContainer(statistics_path) - - # evaluation callback - evaluate_train_callback = EvaluationCallback( - dataset_dir=evaluation_audios_dir, - model=model, - target_source_types=target_source_types, - input_channels=input_channels, - sample_rate=sample_rate, - split='train', - segment_samples=test_segment_samples, - batch_size=test_batch_size, - device=evaluate_device, - evaluate_step_frequency=evaluate_step_frequency, - logger=logger, - statistics_container=statistics_container, - ) - - evaluate_test_callback = EvaluationCallback( - dataset_dir=evaluation_audios_dir, - model=model, - target_source_types=target_source_types, - input_channels=input_channels, - sample_rate=sample_rate, - split='test', - segment_samples=test_segment_samples, - batch_size=test_batch_size, - device=evaluate_device, - evaluate_step_frequency=evaluate_step_frequency, - logger=logger, - statistics_container=statistics_container, - ) - - # callbacks = [save_checkpoints_callback, evaluate_train_callback, evaluate_test_callback] - callbacks = [save_checkpoints_callback, evaluate_test_callback] - - return callbacks - - -def _get_evaluation_callback_class(evaluation_callback) -> pl.Callback: - r"""Get evaluation callback class.""" - if evaluation_callback == "Musdb18EvaluationCallback": - return Musdb18EvaluationCallback - - if evaluation_callback == 'Musdb18ConditionalEvaluationCallback': - return Musdb18ConditionalEvaluationCallback - - else: - raise NotImplementedError - - -class Musdb18EvaluationCallback(pl.Callback): - def __init__( - self, - dataset_dir: str, - model: nn.Module, - target_source_types: str, - input_channels: int, - split: str, - sample_rate: int, - segment_samples: int, - batch_size: int, - device: str, - evaluate_step_frequency: int, - logger: pl.loggers.TensorBoardLogger, - statistics_container: StatisticsContainer, - ): - r"""Callback to evaluate every #save_step_frequency steps. - - Args: - dataset_dir: str - model: nn.Module - target_source_types: List[str], e.g., ['vocals', 'bass', ...] 
- input_channels: int - split: 'train' | 'test' - sample_rate: int - segment_samples: int, length of segments to be input to a model, e.g., 44100*30 - batch_size, int, e.g., 12 - device: str, e.g., 'cuda' - evaluate_step_frequency: int, evaluate every #save_step_frequency steps - logger: object - statistics_container: StatisticsContainer - """ - self.model = model - self.target_source_types = target_source_types - self.input_channels = input_channels - self.sample_rate = sample_rate - self.split = split - self.segment_samples = segment_samples - self.evaluate_step_frequency = evaluate_step_frequency - self.logger = logger - self.statistics_container = statistics_container - self.mono = input_channels == 1 - self.resample_type = "kaiser_fast" - - self.mus = musdb.DB(root=dataset_dir, subsets=[split]) - - error_msg = "The directory {} is empty!".format(dataset_dir) - assert len(self.mus) > 0, error_msg - - # separator - self.separator = Separator(model, self.segment_samples, batch_size, device) - - @rank_zero_only - def on_batch_end(self, trainer: pl.Trainer, _) -> NoReturn: - r"""Evaluate separation SDRs of audio recordings.""" - global_step = trainer.global_step - - if global_step % self.evaluate_step_frequency == 0: - - sdr_dict = {} - - logging.info("--- Step {} ---".format(global_step)) - logging.info("Total {} pieces for evaluation:".format(len(self.mus.tracks))) - - eval_time = time.time() - - for track in self.mus.tracks: - - audio_name = track.name - - # Get waveform of mixture. - mixture = track.audio.T - # (channels_num, audio_samples) - - mixture = preprocess_audio( - audio=mixture, - mono=self.mono, - origin_sr=track.rate, - sr=self.sample_rate, - resample_type=self.resample_type, - ) - # (channels_num, audio_samples) - - target_dict = {} - sdr_dict[audio_name] = {} - - # Get waveform of all target source types. - for j, source_type in enumerate(self.target_source_types): - # E.g., ['vocals', 'bass', ...] - - audio = track.targets[source_type].audio.T - - audio = preprocess_audio( - audio=audio, - mono=self.mono, - origin_sr=track.rate, - sr=self.sample_rate, - resample_type=self.resample_type, - ) - # (channels_num, audio_samples) - - target_dict[source_type] = audio - # (channels_num, audio_samples) - - # Separate. - input_dict = {'waveform': mixture} - - sep_wavs = self.separator.separate(input_dict) - # sep_wavs: (target_sources_num * channels_num, audio_samples) - - # Post process separation results. - sep_wavs = preprocess_audio( - audio=sep_wavs, - mono=self.mono, - origin_sr=self.sample_rate, - sr=track.rate, - resample_type=self.resample_type, - ) - # sep_wavs: (target_sources_num * channels_num, audio_samples) - - sep_wavs = librosa.util.fix_length( - sep_wavs, size=mixture.shape[1], axis=1 - ) - # sep_wavs: (target_sources_num * channels_num, audio_samples) - - sep_wav_dict = get_separated_wavs_from_simo_output( - sep_wavs, self.input_channels, self.target_source_types - ) - # output_dict: dict, e.g., { - # 'vocals': (channels_num, audio_samples), - # 'bass': (channels_num, audio_samples), - # ..., - # } - - # Evaluate for all target source types. - for source_type in self.target_source_types: - # E.g., ['vocals', 'bass', ...] - - # Calculate SDR using museval, input shape should be: (nsrc, nsampl, nchan). 
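# Shape note: target_dict[source_type] and sep_wav_dict[source_type] are both
# (channels_num, audio_samples); the .T below therefore yields (nsampl, nchan),
# and wrapping each array in a one-element list supplies nsrc=1 to museval.evaluate.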
- (sdrs, _, _, _) = museval.evaluate( - [target_dict[source_type].T], [sep_wav_dict[source_type].T] - ) - - sdr = np.nanmedian(sdrs) - sdr_dict[audio_name][source_type] = sdr - - logging.info( - "{}, {}, sdr: {:.3f}".format(audio_name, source_type, sdr) - ) - - logging.info("-----------------------------") - median_sdr_dict = {} - - # Calculate median SDRs of all songs. - for source_type in self.target_source_types: - # E.g., ['vocals', 'bass', ...] - - median_sdr = np.median( - [ - sdr_dict[audio_name][source_type] - for audio_name in sdr_dict.keys() - ] - ) - - median_sdr_dict[source_type] = median_sdr - - logging.info( - "Step: {}, {}, Median SDR: {:.3f}".format( - global_step, source_type, median_sdr - ) - ) - - logging.info("Evlauation time: {:.3f}".format(time.time() - eval_time)) - - statistics = {"sdr_dict": sdr_dict, "median_sdr_dict": median_sdr_dict} - self.statistics_container.append(global_step, statistics, self.split) - self.statistics_container.dump() - - -def get_separated_wavs_from_simo_output(x, input_channels, target_source_types) -> Dict: - r"""Get separated waveforms of target sources from a single input multiple - output (SIMO) system. - - Args: - x: (target_sources_num * channels_num, audio_samples) - input_channels: int - target_source_types: List[str], e.g., ['vocals', 'bass', ...] - - Returns: - output_dict: dict, e.g., { - 'vocals': (channels_num, audio_samples), - 'bass': (channels_num, audio_samples), - ..., - } - """ - output_dict = {} - - for j, source_type in enumerate(target_source_types): - output_dict[source_type] = x[j * input_channels : (j + 1) * input_channels] - - return output_dict - - -class Musdb18ConditionalEvaluationCallback(pl.Callback): - def __init__( - self, - dataset_dir: str, - model: nn.Module, - target_source_types: str, - input_channels: int, - split: str, - sample_rate: int, - segment_samples: int, - batch_size: int, - device: str, - evaluate_step_frequency: int, - logger: pl.loggers.TensorBoardLogger, - statistics_container: StatisticsContainer, - ): - r"""Callback to evaluate every #save_step_frequency steps. - - Args: - dataset_dir: str - model: nn.Module - target_source_types: List[str], e.g., ['vocals', 'bass', ...] 
- input_channels: int - split: 'train' | 'test' - sample_rate: int - segment_samples: int, length of segments to be input to a model, e.g., 44100*30 - batch_size, int, e.g., 12 - device: str, e.g., 'cuda' - evaluate_step_frequency: int, evaluate every #save_step_frequency steps - logger: object - statistics_container: StatisticsContainer - """ - self.model = model - self.target_source_types = target_source_types - self.input_channels = input_channels - self.sample_rate = sample_rate - self.split = split - self.segment_samples = segment_samples - self.evaluate_step_frequency = evaluate_step_frequency - self.logger = logger - self.statistics_container = statistics_container - self.mono = input_channels == 1 - self.resample_type = "kaiser_fast" - - self.mus = musdb.DB(root=dataset_dir, subsets=[split]) - - error_msg = "The directory {} is empty!".format(dataset_dir) - assert len(self.mus) > 0, error_msg - - # separator - self.separator = Separator(model, self.segment_samples, batch_size, device) - - @rank_zero_only - def on_batch_end(self, trainer: pl.Trainer, _) -> NoReturn: - r"""Evaluate separation SDRs of audio recordings.""" - global_step = trainer.global_step - - if global_step % self.evaluate_step_frequency == 0: - - sdr_dict = {} - - logging.info("--- Step {} ---".format(global_step)) - logging.info("Total {} pieces for evaluation:".format(len(self.mus.tracks))) - - eval_time = time.time() - - for track in self.mus.tracks: - - audio_name = track.name - - # Get waveform of mixture. - mixture = track.audio.T - # (channels_num, audio_samples) - - mixture = preprocess_audio( - audio=mixture, - mono=self.mono, - origin_sr=track.rate, - sr=self.sample_rate, - resample_type=self.resample_type, - ) - # (channels_num, audio_samples) - - target_dict = {} - sdr_dict[audio_name] = {} - - # Get waveform of all target source types. - for j, source_type in enumerate(self.target_source_types): - # E.g., ['vocals', 'bass', ...] - - audio = track.targets[source_type].audio.T - - audio = preprocess_audio( - audio=audio, - mono=self.mono, - origin_sr=track.rate, - sr=self.sample_rate, - resample_type=self.resample_type, - ) - # (channels_num, audio_samples) - - target_dict[source_type] = audio - # (channels_num, audio_samples) - - condition = np.zeros(len(self.target_source_types)) - condition[j] = 1 - - input_dict = {'waveform': mixture, 'condition': condition} - - sep_wav = self.separator.separate(input_dict) - # sep_wav: (channels_num, audio_samples) - - sep_wav = preprocess_audio( - audio=sep_wav, - mono=self.mono, - origin_sr=self.sample_rate, - sr=track.rate, - resample_type=self.resample_type, - ) - # sep_wav: (channels_num, audio_samples) - - sep_wav = librosa.util.fix_length( - sep_wav, size=mixture.shape[1], axis=1 - ) - # sep_wav: (target_sources_num * channels_num, audio_samples) - - # Calculate SDR using museval, input shape should be: (nsrc, nsampl, nchan) - (sdrs, _, _, _) = museval.evaluate( - [target_dict[source_type].T], [sep_wav.T] - ) - - sdr = np.nanmedian(sdrs) - sdr_dict[audio_name][source_type] = sdr - - logging.info( - "{}, {}, sdr: {:.3f}".format(audio_name, source_type, sdr) - ) - - logging.info("-----------------------------") - median_sdr_dict = {} - - # Calculate median SDRs of all songs. 
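# Illustrative example (made-up numbers): with sdr_dict == {'songA': {'vocals': 5.1},
# 'songB': {'vocals': 6.3}}, the loop below gives median_sdr_dict['vocals'] == 5.7,
# i.e. the per-source median taken across all evaluated songs.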
- for source_type in self.target_source_types: - - median_sdr = np.median( - [ - sdr_dict[audio_name][source_type] - for audio_name in sdr_dict.keys() - ] - ) - - median_sdr_dict[source_type] = median_sdr - - logging.info( - "Step: {}, {}, Median SDR: {:.3f}".format( - global_step, source_type, median_sdr - ) - ) - - logging.info("Evlauation time: {:.3f}".format(time.time() - eval_time)) - - statistics = {"sdr_dict": sdr_dict, "median_sdr_dict": median_sdr_dict} - self.statistics_container.append(global_step, statistics, self.split) - self.statistics_container.dump() diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/dataset_creation/create_evaluation_audios/vctk-musdb18.py b/spaces/akhaliq/Music_Source_Separation/bytesep/dataset_creation/create_evaluation_audios/vctk-musdb18.py deleted file mode 100644 index 8e337feaa304f09b21fc400dfffd9c77a9961074..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/bytesep/dataset_creation/create_evaluation_audios/vctk-musdb18.py +++ /dev/null @@ -1,164 +0,0 @@ -import argparse -import os -import soundfile -from typing import NoReturn - -import musdb -import numpy as np - -from bytesep.utils import load_audio - - -def create_evaluation(args) -> NoReturn: - r"""Random mix and write out audios for evaluation. - - Args: - vctk_dataset_dir: str, the directory of the VCTK dataset - symphony_dataset_dir: str, the directory of the symphony dataset - evaluation_audios_dir: str, the directory to write out randomly selected and mixed audio segments - sample_rate: int - channels: int, e.g., 1 | 2 - evaluation_segments_num: int - mono: bool - - Returns: - NoReturn - """ - - # arguments & parameters - vctk_dataset_dir = args.vctk_dataset_dir - musdb18_dataset_dir = args.musdb18_dataset_dir - evaluation_audios_dir = args.evaluation_audios_dir - sample_rate = args.sample_rate - channels = args.channels - evaluation_segments_num = args.evaluation_segments_num - mono = True if channels == 1 else False - - split = 'test' - random_state = np.random.RandomState(1234) - - # paths - audios_dir = os.path.join(vctk_dataset_dir, "wav48", split) - - for source_type in ['speech', 'music', 'mixture']: - output_dir = os.path.join(evaluation_audios_dir, split, source_type) - os.makedirs(output_dir, exist_ok=True) - - # Get VCTK audio paths. - speech_audio_paths = [] - speaker_ids = sorted(os.listdir(audios_dir)) - - for speaker_id in speaker_ids: - speaker_audios_dir = os.path.join(audios_dir, speaker_id) - - audio_names = sorted(os.listdir(speaker_audios_dir)) - - for audio_name in audio_names: - speaker_audio_path = os.path.join(speaker_audios_dir, audio_name) - speech_audio_paths.append(speaker_audio_path) - - # Get Musdb18 audio paths. - mus = musdb.DB(root=musdb18_dataset_dir, subsets=[split]) - track_indexes = np.arange(len(mus.tracks)) - - for n in range(evaluation_segments_num): - - print('{} / {}'.format(n, evaluation_segments_num)) - - # Randomly select and write out a clean speech segment. 
- speech_audio_path = random_state.choice(speech_audio_paths) - - speech_audio = load_audio( - audio_path=speech_audio_path, mono=mono, sample_rate=sample_rate - ) - # (channels_num, audio_samples) - - if channels == 2: - speech_audio = np.tile(speech_audio, (2, 1)) - # (channels_num, audio_samples) - - output_speech_path = os.path.join( - evaluation_audios_dir, split, 'speech', '{:04d}.wav'.format(n) - ) - soundfile.write( - file=output_speech_path, data=speech_audio.T, samplerate=sample_rate - ) - print("Write out to {}".format(output_speech_path)) - - # Randomly select and write out a clean music segment. - track_index = random_state.choice(track_indexes) - track = mus[track_index] - - segment_samples = speech_audio.shape[1] - start_sample = int( - random_state.uniform(0.0, segment_samples - speech_audio.shape[1]) - ) - - music_audio = track.audio[start_sample : start_sample + segment_samples, :].T - # (channels_num, audio_samples) - - output_music_path = os.path.join( - evaluation_audios_dir, split, 'music', '{:04d}.wav'.format(n) - ) - soundfile.write( - file=output_music_path, data=music_audio.T, samplerate=sample_rate - ) - print("Write out to {}".format(output_music_path)) - - # Mix speech and music segments and write out a mixture segment. - mixture_audio = speech_audio + music_audio - # (channels_num, audio_samples) - - output_mixture_path = os.path.join( - evaluation_audios_dir, split, 'mixture', '{:04d}.wav'.format(n) - ) - soundfile.write( - file=output_mixture_path, data=mixture_audio.T, samplerate=sample_rate - ) - print("Write out to {}".format(output_mixture_path)) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--vctk_dataset_dir", - type=str, - required=True, - help="The directory of the VCTK dataset.", - ) - parser.add_argument( - "--musdb18_dataset_dir", - type=str, - required=True, - help="The directory of the MUSDB18 dataset.", - ) - parser.add_argument( - "--evaluation_audios_dir", - type=str, - required=True, - help="The directory to write out randomly selected and mixed audio segments.", - ) - parser.add_argument( - "--sample_rate", - type=int, - required=True, - help="Sample rate", - ) - parser.add_argument( - "--channels", - type=int, - required=True, - help="Audio channels, e.g, 1 or 2.", - ) - parser.add_argument( - "--evaluation_segments_num", - type=int, - required=True, - help="The number of segments to create for evaluation.", - ) - - # Parse arguments. 
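# Example invocation; the directory paths and numeric values below are placeholders,
# only the flag names come from the parser defined above:
#   python vctk-musdb18.py \
#       --vctk_dataset_dir=/path/to/VCTK \
#       --musdb18_dataset_dir=/path/to/MUSDB18 \
#       --evaluation_audios_dir=/path/to/evaluation_audios \
#       --sample_rate=44100 --channels=2 --evaluation_segments_num=100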
- args = parser.parse_args() - - create_evaluation(args) diff --git a/spaces/akhaliq/animeganv2-onnx/app.py b/spaces/akhaliq/animeganv2-onnx/app.py deleted file mode 100644 index 76df55cc7b05cf6980aa2247a7fcff182a63e24d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/animeganv2-onnx/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import onnxruntime -print(onnxruntime.get_device()) -import os -import gradio as gr -os.system("pip install gdown") -os.system("gdown https://drive.google.com/uc?id=1riNxV1BWMAXfmWZ3LrQbEkvzV8f7lOCp") - -opts = onnxruntime.SessionOptions() -opts.intra_op_num_threads = 16 -onnx_session = onnxruntime.InferenceSession("/home/user/app/face_paint_512_v2_0.onnx",sess_options=opts) - -input_name = onnx_session.get_inputs()[0].name -output_name = onnx_session.get_outputs()[0].name - -side_length = 512 - -import cv2 as cv -import numpy as np -from PIL import Image - -def inference(img): - image = np.array(img) - image = image[:, :, ::-1].copy() - image = cv.resize(image, dsize=(side_length, side_length)) - x = cv.cvtColor(image, cv.COLOR_BGR2RGB) - - x = np.array(x, dtype=np.float32) - x = x.transpose(2, 0, 1) - x = x * 2 - 1 - x = x.reshape(-1, 3, side_length, side_length) - - onnx_result = onnx_session.run([output_name], {input_name: x}) - - onnx_result = np.array(onnx_result).squeeze() - onnx_result = (onnx_result * 0.5 + 0.5).clip(0, 1) - onnx_result = onnx_result * 255 - - onnx_result = onnx_result.transpose(1, 2, 0).astype('uint8') - onnx_result = cv.cvtColor(onnx_result, cv.COLOR_RGB2BGR) - - - img = cv.cvtColor(onnx_result, cv.COLOR_BGR2RGB) - im_pil = Image.fromarray(img) - return im_pil - - -title = "Animeganv2" -description = "Gradio demo for AnimeGanv2 Face Portrait v2. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

Github Repo Pytorch | Github Repo ONNX
samples from repo: animation animation
          " - - -gr.Interface(inference, gr.inputs.Image(type="pil", source="webcam"), gr.outputs.Image(type="pil"),title=title,description=description,article=article,enable_queue=True,live=True).launch() \ No newline at end of file diff --git a/spaces/akhaliq/yolov7/utils/wandb_logging/wandb_utils.py b/spaces/akhaliq/yolov7/utils/wandb_logging/wandb_utils.py deleted file mode 100644 index aec7c5f486f962b7b59198f40a1edb5a79824afe..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/yolov7/utils/wandb_logging/wandb_utils.py +++ /dev/null @@ -1,306 +0,0 @@ -import json -import sys -from pathlib import Path - -import torch -import yaml -from tqdm import tqdm - -sys.path.append(str(Path(__file__).parent.parent.parent)) # add utils/ to path -from utils.datasets import LoadImagesAndLabels -from utils.datasets import img2label_paths -from utils.general import colorstr, xywh2xyxy, check_dataset - -try: - import wandb - from wandb import init, finish -except ImportError: - wandb = None - -WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' - - -def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX): - return from_string[len(prefix):] - - -def check_wandb_config_file(data_config_file): - wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path - if Path(wandb_config).is_file(): - return wandb_config - return data_config_file - - -def get_run_info(run_path): - run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX)) - run_id = run_path.stem - project = run_path.parent.stem - model_artifact_name = 'run_' + run_id + '_model' - return run_id, project, model_artifact_name - - -def check_wandb_resume(opt): - process_wandb_config_ddp_mode(opt) if opt.global_rank not in [-1, 0] else None - if isinstance(opt.resume, str): - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - if opt.global_rank not in [-1, 0]: # For resuming DDP runs - run_id, project, model_artifact_name = get_run_info(opt.resume) - api = wandb.Api() - artifact = api.artifact(project + '/' + model_artifact_name + ':latest') - modeldir = artifact.download() - opt.weights = str(Path(modeldir) / "last.pt") - return True - return None - - -def process_wandb_config_ddp_mode(opt): - with open(opt.data) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict - train_dir, val_dir = None, None - if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias) - train_dir = train_artifact.download() - train_path = Path(train_dir) / 'data/images/' - data_dict['train'] = str(train_path) - - if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias) - val_dir = val_artifact.download() - val_path = Path(val_dir) / 'data/images/' - data_dict['val'] = str(val_path) - if train_dir or val_dir: - ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml') - with open(ddp_data_path, 'w') as f: - yaml.dump(data_dict, f) - opt.data = ddp_data_path - - -class WandbLogger(): - def __init__(self, opt, name, run_id, data_dict, job_type='Training'): - # Pre-training routine -- - self.job_type = job_type - self.wandb, self.wandb_run, self.data_dict = wandb, None if not wandb else wandb.run, data_dict - # It's more elegant to stick to 1 wandb.init call, but useful config data is overwritten in the WandbLogger's wandb.init call 
- if isinstance(opt.resume, str): # checks resume from artifact - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - run_id, project, model_artifact_name = get_run_info(opt.resume) - model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name - assert wandb, 'install wandb to resume wandb runs' - # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config - self.wandb_run = wandb.init(id=run_id, project=project, resume='allow') - opt.resume = model_artifact_name - elif self.wandb: - self.wandb_run = wandb.init(config=opt, - resume="allow", - project='YOLOR' if opt.project == 'runs/train' else Path(opt.project).stem, - name=name, - job_type=job_type, - id=run_id) if not wandb.run else wandb.run - if self.wandb_run: - if self.job_type == 'Training': - if not opt.resume: - wandb_data_dict = self.check_and_upload_dataset(opt) if opt.upload_dataset else data_dict - # Info useful for resuming from artifacts - self.wandb_run.config.opt = vars(opt) - self.wandb_run.config.data_dict = wandb_data_dict - self.data_dict = self.setup_training(opt, data_dict) - if self.job_type == 'Dataset Creation': - self.data_dict = self.check_and_upload_dataset(opt) - else: - prefix = colorstr('wandb: ') - print(f"{prefix}Install Weights & Biases for YOLOR logging with 'pip install wandb' (recommended)") - - def check_and_upload_dataset(self, opt): - assert wandb, 'Install wandb to upload dataset' - check_dataset(self.data_dict) - config_path = self.log_dataset_artifact(opt.data, - opt.single_cls, - 'YOLOR' if opt.project == 'runs/train' else Path(opt.project).stem) - print("Created dataset config file ", config_path) - with open(config_path) as f: - wandb_data_dict = yaml.load(f, Loader=yaml.SafeLoader) - return wandb_data_dict - - def setup_training(self, opt, data_dict): - self.log_dict, self.current_epoch, self.log_imgs = {}, 0, 16 # Logging Constants - self.bbox_interval = opt.bbox_interval - if isinstance(opt.resume, str): - modeldir, _ = self.download_model_artifact(opt) - if modeldir: - self.weights = Path(modeldir) / "last.pt" - config = self.wandb_run.config - opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp = str( - self.weights), config.save_period, config.total_batch_size, config.bbox_interval, config.epochs, \ - config.opt['hyp'] - data_dict = dict(self.wandb_run.config.data_dict) # eliminates the need for config file to resume - if 'val_artifact' not in self.__dict__: # If --upload_dataset is set, use the existing artifact, don't download - self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'), - opt.artifact_alias) - self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'), - opt.artifact_alias) - self.result_artifact, self.result_table, self.val_table, self.weights = None, None, None, None - if self.train_artifact_path is not None: - train_path = Path(self.train_artifact_path) / 'data/images/' - data_dict['train'] = str(train_path) - if self.val_artifact_path is not None: - val_path = Path(self.val_artifact_path) / 'data/images/' - data_dict['val'] = str(val_path) - self.val_table = self.val_artifact.get("val") - self.map_val_table_path() - if self.val_artifact is not None: - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - self.result_table = wandb.Table(["epoch", "id", "prediction", "avg_confidence"]) - if opt.bbox_interval == -1: - self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 
10 else 1 - return data_dict - - def download_dataset_artifact(self, path, alias): - if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX): - dataset_artifact = wandb.use_artifact(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias) - assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'" - datadir = dataset_artifact.download() - return datadir, dataset_artifact - return None, None - - def download_model_artifact(self, opt): - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest") - assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist' - modeldir = model_artifact.download() - epochs_trained = model_artifact.metadata.get('epochs_trained') - total_epochs = model_artifact.metadata.get('total_epochs') - assert epochs_trained < total_epochs, 'training to %g epochs is finished, nothing to resume.' % ( - total_epochs) - return modeldir, model_artifact - return None, None - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={ - 'original_url': str(path), - 'epochs_trained': epoch + 1, - 'save period': opt.save_period, - 'project': opt.project, - 'total_epochs': opt.epochs, - 'fitness_score': fitness_score - }) - model_artifact.add_file(str(path / 'last.pt'), name='last.pt') - wandb.log_artifact(model_artifact, - aliases=['latest', 'epoch ' + str(self.current_epoch), 'best' if best_model else '']) - print("Saving model artifact on epoch ", epoch + 1) - - def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False): - with open(data_file) as f: - data = yaml.load(f, Loader=yaml.SafeLoader) # data dict - nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names']) - names = {k: v for k, v in enumerate(names)} # to index dictionary - self.train_artifact = self.create_dataset_table(LoadImagesAndLabels( - data['train']), names, name='train') if data.get('train') else None - self.val_artifact = self.create_dataset_table(LoadImagesAndLabels( - data['val']), names, name='val') if data.get('val') else None - if data.get('train'): - data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train') - if data.get('val'): - data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val') - path = data_file if overwrite_config else '_wandb.'.join(data_file.rsplit('.', 1)) # updated data.yaml path - data.pop('download', None) - with open(path, 'w') as f: - yaml.dump(data, f) - - if self.job_type == 'Training': # builds correct artifact pipeline graph - self.wandb_run.use_artifact(self.val_artifact) - self.wandb_run.use_artifact(self.train_artifact) - self.val_artifact.wait() - self.val_table = self.val_artifact.get('val') - self.map_val_table_path() - else: - self.wandb_run.log_artifact(self.train_artifact) - self.wandb_run.log_artifact(self.val_artifact) - return path - - def map_val_table_path(self): - self.val_table_map = {} - print("Mapping dataset") - for i, data in enumerate(tqdm(self.val_table.data)): - self.val_table_map[data[3]] = data[0] - - def create_dataset_table(self, dataset, class_to_id, name='dataset'): - # TODO: Explore multiprocessing to slpit this loop parallely| This is essential for speeding up the the logging - artifact = wandb.Artifact(name=name, type="dataset") - img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None 
- img_files = tqdm(dataset.img_files) if not img_files else img_files - for img_file in img_files: - if Path(img_file).is_dir(): - artifact.add_dir(img_file, name='data/images') - labels_path = 'labels'.join(dataset.path.rsplit('images', 1)) - artifact.add_dir(labels_path, name='data/labels') - else: - artifact.add_file(img_file, name='data/images/' + Path(img_file).name) - label_file = Path(img2label_paths([img_file])[0]) - artifact.add_file(str(label_file), - name='data/labels/' + label_file.name) if label_file.exists() else None - table = wandb.Table(columns=["id", "train_image", "Classes", "name"]) - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()]) - for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)): - height, width = shapes[0] - labels[:, 2:] = (xywh2xyxy(labels[:, 2:].view(-1, 4))) * torch.Tensor([width, height, width, height]) - box_data, img_classes = [], {} - for cls, *xyxy in labels[:, 1:].tolist(): - cls = int(cls) - box_data.append({"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]}, - "class_id": cls, - "box_caption": "%s" % (class_to_id[cls]), - "scores": {"acc": 1}, - "domain": "pixel"}) - img_classes[cls] = class_to_id[cls] - boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space - table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), json.dumps(img_classes), - Path(paths).name) - artifact.add(table, name) - return artifact - - def log_training_progress(self, predn, path, names): - if self.val_table and self.result_table: - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()]) - box_data = [] - total_conf = 0 - for *xyxy, conf, cls in predn.tolist(): - if conf >= 0.25: - box_data.append( - {"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]}, - "class_id": int(cls), - "box_caption": "%s %.3f" % (names[cls], conf), - "scores": {"class_score": conf}, - "domain": "pixel"}) - total_conf = total_conf + conf - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - id = self.val_table_map[Path(path).name] - self.result_table.add_data(self.current_epoch, - id, - wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set), - total_conf / max(1, len(box_data)) - ) - - def log(self, log_dict): - if self.wandb_run: - for key, value in log_dict.items(): - self.log_dict[key] = value - - def end_epoch(self, best_result=False): - if self.wandb_run: - wandb.log(self.log_dict) - self.log_dict = {} - if self.result_artifact: - train_results = wandb.JoinedTable(self.val_table, self.result_table, "id") - self.result_artifact.add(train_results, 'result') - wandb.log_artifact(self.result_artifact, aliases=['latest', 'epoch ' + str(self.current_epoch), - ('best' if best_result else '')]) - self.result_table = wandb.Table(["epoch", "id", "prediction", "avg_confidence"]) - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - - def finish_run(self): - if self.wandb_run: - if self.log_dict: - wandb.log(self.log_dict) - wandb.run.finish() diff --git a/spaces/alamin655/websurfx/src/models/parser_models.rs b/spaces/alamin655/websurfx/src/models/parser_models.rs deleted file mode 100644 index 9dad348eaeae11b9cba54d5b1a83fc251a1a2a29..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/models/parser_models.rs +++ /dev/null @@ -1,52 +0,0 @@ -//! 
This module provides public models for handling, storing and serializing parsed config file -//! options from config.lua by grouping them together. - -use serde::{Deserialize, Serialize}; - -/// A named struct which stores,deserializes, serializes and groups the parsed config file options -/// of theme and colorscheme names into the Style struct which derives the `Clone`, `Serialize` -/// and Deserialize traits where the `Clone` trait is derived for allowing the struct to be -/// cloned and passed to the server as a shared data between all routes except `/robots.txt` and -/// the `Serialize` trait has been derived for allowing the object to be serialized so that it -/// can be passed to handlebars template files and the `Deserialize` trait has been derived in -/// order to allow the deserializing the json back to struct in aggregate function in -/// aggregator.rs and create a new struct out of it and then serialize it back to json and pass -/// it to the template files. -#[derive(Serialize, Deserialize, Clone, Default)] -pub struct Style { - /// It stores the parsed theme option used to set a theme for the website. - pub theme: String, - /// It stores the parsed colorscheme option used to set a colorscheme for the - /// theme being used. - pub colorscheme: String, -} - -impl Style { - /// Constructs a new `Style` with the given arguments needed for the struct. - /// - /// # Arguments - /// - /// * `theme` - It takes the parsed theme option used to set a theme for the website. - /// * `colorscheme` - It takes the parsed colorscheme option used to set a colorscheme - /// for the theme being used. - pub fn new(theme: String, colorscheme: String) -> Self { - Style { theme, colorscheme } - } -} - -/// Configuration options for the aggregator. -#[derive(Clone)] -pub struct AggregatorConfig { - /// It stores the option to whether enable or disable random delays between - /// requests. - pub random_delay: bool, -} - -/// Configuration options for the rate limiter middleware. -#[derive(Clone)] -pub struct RateLimiter { - /// The number of request that are allowed within a provided time limit. - pub number_of_requests: u8, - /// The time limit in which the quantity of requests that should be accepted. - pub time_limit: u8, -} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/styles/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/styles/__init__.py deleted file mode 100644 index e437d170ed78a453d72cadba14f3aae57ed92351..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/styles/__init__.py +++ /dev/null @@ -1,93 +0,0 @@ -""" - pygments.styles - ~~~~~~~~~~~~~~~ - - Contains built-in styles. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.plugin import find_plugin_styles -from pip._vendor.pygments.util import ClassNotFound - - -#: Maps style names to 'submodule::classname'. 
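# For example, the entry 'monokai': 'monokai::MonokaiStyle' means that
# get_style_by_name('monokai') (defined below) splits the value on '::',
# imports pygments.styles.monokai, and returns its MonokaiStyle class.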
-STYLE_MAP = { - 'default': 'default::DefaultStyle', - 'emacs': 'emacs::EmacsStyle', - 'friendly': 'friendly::FriendlyStyle', - 'friendly_grayscale': 'friendly_grayscale::FriendlyGrayscaleStyle', - 'colorful': 'colorful::ColorfulStyle', - 'autumn': 'autumn::AutumnStyle', - 'murphy': 'murphy::MurphyStyle', - 'manni': 'manni::ManniStyle', - 'material': 'material::MaterialStyle', - 'monokai': 'monokai::MonokaiStyle', - 'perldoc': 'perldoc::PerldocStyle', - 'pastie': 'pastie::PastieStyle', - 'borland': 'borland::BorlandStyle', - 'trac': 'trac::TracStyle', - 'native': 'native::NativeStyle', - 'fruity': 'fruity::FruityStyle', - 'bw': 'bw::BlackWhiteStyle', - 'vim': 'vim::VimStyle', - 'vs': 'vs::VisualStudioStyle', - 'tango': 'tango::TangoStyle', - 'rrt': 'rrt::RrtStyle', - 'xcode': 'xcode::XcodeStyle', - 'igor': 'igor::IgorStyle', - 'paraiso-light': 'paraiso_light::ParaisoLightStyle', - 'paraiso-dark': 'paraiso_dark::ParaisoDarkStyle', - 'lovelace': 'lovelace::LovelaceStyle', - 'algol': 'algol::AlgolStyle', - 'algol_nu': 'algol_nu::Algol_NuStyle', - 'arduino': 'arduino::ArduinoStyle', - 'rainbow_dash': 'rainbow_dash::RainbowDashStyle', - 'abap': 'abap::AbapStyle', - 'solarized-dark': 'solarized::SolarizedDarkStyle', - 'solarized-light': 'solarized::SolarizedLightStyle', - 'sas': 'sas::SasStyle', - 'stata': 'stata_light::StataLightStyle', - 'stata-light': 'stata_light::StataLightStyle', - 'stata-dark': 'stata_dark::StataDarkStyle', - 'inkpot': 'inkpot::InkPotStyle', - 'zenburn': 'zenburn::ZenburnStyle', - 'gruvbox-dark': 'gruvbox::GruvboxDarkStyle', - 'gruvbox-light': 'gruvbox::GruvboxLightStyle', - 'dracula': 'dracula::DraculaStyle', - 'one-dark': 'onedark::OneDarkStyle', - 'lilypond' : 'lilypond::LilyPondStyle', -} - - -def get_style_by_name(name): - if name in STYLE_MAP: - mod, cls = STYLE_MAP[name].split('::') - builtin = "yes" - else: - for found_name, style in find_plugin_styles(): - if name == found_name: - return style - # perhaps it got dropped into our styles package - builtin = "" - mod = name - cls = name.title() + "Style" - - try: - mod = __import__('pygments.styles.' + mod, None, None, [cls]) - except ImportError: - raise ClassNotFound("Could not find style module %r" % mod + - (builtin and ", though it should be builtin") + ".") - try: - return getattr(mod, cls) - except AttributeError: - raise ClassNotFound("Could not find style class %r in style module." 
% cls) - - -def get_all_styles(): - """Return a generator for all styles by name, - both builtin and plugin.""" - yield from STYLE_MAP - for name, _ in find_plugin_styles(): - yield name diff --git a/spaces/algomuffin/jojo_fork/e4e/criteria/lpips/networks.py b/spaces/algomuffin/jojo_fork/e4e/criteria/lpips/networks.py deleted file mode 100644 index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000 --- a/spaces/algomuffin/jojo_fork/e4e/criteria/lpips/networks.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Sequence - -from itertools import chain - -import torch -import torch.nn as nn -from torchvision import models - -from criteria.lpips.utils import normalize_activation - - -def get_network(net_type: str): - if net_type == 'alex': - return AlexNet() - elif net_type == 'squeeze': - return SqueezeNet() - elif net_type == 'vgg': - return VGG16() - else: - raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') - - -class LinLayers(nn.ModuleList): - def __init__(self, n_channels_list: Sequence[int]): - super(LinLayers, self).__init__([ - nn.Sequential( - nn.Identity(), - nn.Conv2d(nc, 1, 1, 1, 0, bias=False) - ) for nc in n_channels_list - ]) - - for param in self.parameters(): - param.requires_grad = False - - -class BaseNet(nn.Module): - def __init__(self): - super(BaseNet, self).__init__() - - # register buffer - self.register_buffer( - 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer( - 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def set_requires_grad(self, state: bool): - for param in chain(self.parameters(), self.buffers()): - param.requires_grad = state - - def z_score(self, x: torch.Tensor): - return (x - self.mean) / self.std - - def forward(self, x: torch.Tensor): - x = self.z_score(x) - - output = [] - for i, (_, layer) in enumerate(self.layers._modules.items(), 1): - x = layer(x) - if i in self.target_layers: - output.append(normalize_activation(x)) - if len(output) == len(self.target_layers): - break - return output - - -class SqueezeNet(BaseNet): - def __init__(self): - super(SqueezeNet, self).__init__() - - self.layers = models.squeezenet1_1(True).features - self.target_layers = [2, 5, 8, 10, 11, 12, 13] - self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] - - self.set_requires_grad(False) - - -class AlexNet(BaseNet): - def __init__(self): - super(AlexNet, self).__init__() - - self.layers = models.alexnet(True).features - self.target_layers = [2, 5, 8, 10, 12] - self.n_channels_list = [64, 192, 384, 256, 256] - - self.set_requires_grad(False) - - -class VGG16(BaseNet): - def __init__(self): - super(VGG16, self).__init__() - - self.layers = models.vgg16(True).features - self.target_layers = [4, 9, 16, 23, 30] - self.n_channels_list = [64, 128, 256, 512, 512] - - self.set_requires_grad(False) \ No newline at end of file diff --git a/spaces/aliabid94/crossword/test.py b/spaces/aliabid94/crossword/test.py deleted file mode 100644 index 71567d6c5c857a2ae001669611463c195b3ba1ab..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/crossword/test.py +++ /dev/null @@ -1,21 +0,0 @@ -import game_manager -import random -random.seed(2) - -game_manager.SIZE = 8 - -XXX = None -grid = [ - [XXX, "s", XXX, XXX, XXX, XXX, XXX, XXX], - [XXX, "h", XXX, XXX, XXX, "t", XXX, XXX], - ["l", "o", "n", "e", "l", "y", XXX, XXX], - [XXX, "n", XXX, XXX, XXX, "l", XXX, XXX], - [XXX, "e", XXX, XXX, XXX, "e", "n", "d"], - [XXX, XXX, XXX, XXX, XXX, "r", XXX, XXX], - [XXX, XXX, 
XXX, XXX, XXX, XXX, XXX, XXX], - [XXX, XXX, XXX, XXX, XXX, XXX, XXX, XXX], -] - -clues = game_manager.find_clues(grid, 7) -for c in clues: - print(c) \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test121/README.md b/spaces/allknowingroger/Image-Models-Test121/README.md deleted file mode 100644 index 10e7b29689d4556242b90d24093e6c8a1e56bff6..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test121/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test120 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test161/README.md b/spaces/allknowingroger/Image-Models-Test161/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test161/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test194/app.py b/spaces/allknowingroger/Image-Models-Test194/app.py deleted file mode 100644 index 136608f8bc315a548819d9a2b636bcba98117f3b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test194/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "artificialguybr/IconsRedmond-IconsLoraForSDXL-V2", - "machinelearnear/preguntale_al_candidato_MASSA", - "machinelearnear/preguntale_al_candidato_BULLRICH", - "Yntec/BrandiMilne", - "flobbit/serenity-firefly-spaceship-sdxl-lora", - "machinelearnear/preguntale_al_candidato_SCHIARETTI", - "joachimsallstrom/aether-fire-lora-for-sdxl", - "machinelearnear/preguntale_al_candidato_BREGMAN", - "malhajar/sd_cl_myphotos", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): 
- # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allyssonmacedo/good-clients/app.py b/spaces/allyssonmacedo/good-clients/app.py deleted file mode 100644 index 008e3cc4e6aaccd119a69783c29420f613e21bea..0000000000000000000000000000000000000000 --- a/spaces/allyssonmacedo/good-clients/app.py +++ /dev/null @@ -1,207 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -from sklearn.linear_model import LogisticRegression -from sklearn.model_selection import train_test_split - -st.set_page_config(page_title="Classificação de bons clientes") - -with st.container(): - print('teste') - st.subheader("Esse projeto prediz a probabilidade de um cliente acionar o suporte mais de uma vez por semana") - st.title("Modelo de Machine Learning") - st.write("Escolha os filtros") - - -@st.cache_data -def carregar_dados(): - dados = pd.read_excel('perfil_clientes_edits.xlsx') - return dados - -# @st.cache_data -# def targets(): -## criando coluna de targets -with st.container(): - dados = carregar_dados() - dados['target_gestor'] = 0 - mente_aberta = dados['Caracteristica do Gestor'] == '"Cabeça aberta"' - dados.loc[mente_aberta, 'target_gestor'] = 1 - - dados['target_suporte'] = 0 - baixo_suporte = (dados['Frequencia Suporte'] == 'Menos de uma vez por semana') | (dados['Frequencia Suporte'] == 'Raramente') - #baixo_suporte = dados['Frequencia Suporte'] == 'Raramente' - dados.loc[baixo_suporte, 'target_suporte'] = 1 - - dados_y = dados['target_suporte'] - dados_x = dados.drop(dados[['Frequencia Suporte', 'Tipo de Suporte','Categoria Suporte','target_gestor','target_suporte']], axis=1) - - # Convertendo valores de string para valores númericos para conseguirmos usar no modelo de Regressão. 
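The comment closing the block above refers to one-hot encoding the categorical columns so the logistic regression can consume them as numeric input. A minimal stand-alone illustration of that step with `pd.get_dummies`, using made-up column values rather than the spreadsheet's real ones:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy categorical features; the values here are illustrative only.
df = pd.DataFrame({
    "Atividade": ["COMÉRCIO", "SERVIÇOS", "COMÉRCIO", "INDÚSTRIA"],
    "Colaboradores": ["De 10 a 20", "Até 10", "De 10 a 20", "50+"],
})
y = [1, 0, 1, 0]

# get_dummies turns each category into its own 0/1 column.
X = pd.get_dummies(df)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X)[:, 1])
```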
- dummies = pd.get_dummies(dados_x) - - - # Carregando os dados em um array numpy - X = np.array(dummies.values) - y = np.array(dados_y.values) - - # Dividir os dados em conjunto de treinamento e teste - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=45) - - # Para as variáveis de treino - - # normalizando e padronizando os dados - # MinMaxScaler é usado para normalizar as variáveis, e StandardScaler é usado para padronizar - from sklearn.preprocessing import MinMaxScaler, StandardScaler - - # dados x (features) normalizados - # X = np.array(X) - - # normalizando - scaler = MinMaxScaler() - scaler.fit(X_train) - normalized_data = scaler.transform(X_train) - #print(normalized_data) - - # Padronizando - scaler = StandardScaler() - scaler.fit(X_train) - standardized_data = scaler.transform(X_train) - #print(standardized_data) - - #print(standardized_data.shape) - - X_train = standardized_data - - # Para as variáveis de teste - - # normalizando e padronizando os dados - # MinMaxScaler é usado para normalizar as variáveis, e StandardScaler é usado para padronizar - from sklearn.preprocessing import MinMaxScaler, StandardScaler - - # dados x (features) normalizados - # X = np.array(X) - - # normalizando - scaler = MinMaxScaler() - scaler.fit(X_test) - normalized_data = scaler.transform(X_test) - #print(normalized_data) - - # Padronizando - scaler = StandardScaler() - scaler.fit(X_test) - standardized_data = scaler.transform(X_test) - #print(standardized_data) - - #print(standardized_data.shape) - - X_test = standardized_data - - # Criando o modelo - model = LogisticRegression(random_state=0,max_iter=1000) - - # Treinando o modelo - model.fit(X_train, y_train) - - clf2 = LogisticRegression(random_state=45,max_iter=1000).fit(X_train, y_train) - - # Fazendo a previsão das classes - y_pred2 = clf2.predict(X_test) - - # Avaliando o modelo - # score = model.score(X_test, y_test) - - from sklearn import metrics - - score = metrics.accuracy_score(y_test, y_pred2) - - #print('Acurácia:', score) - - # Percentagem de acerto - - # Usando o modelo para previsão - predictions = model.predict(X_test) - #print(predictions) - - # Fazendo a previsão das probabilidades - proba = clf2.predict_proba(X_test) - #print(proba) - - # Probabilidade de acionar o suporte até 1vez na semana é de: - probabilidade_baixo_suporte = proba[:,1] - - - - - -with st.container(): - st.write("---") - atividade = st.selectbox("Selecione a Atividade", list(dados.Atividade.unique())) - ramo_atividade = st.selectbox("Selecione o Ramo de atuação", list(dados['Ramo de atuação'].unique())) - colaboradores = st.selectbox("Selecione a Quantidade de Colaboradores", list(dados['Colaboradores'].unique())) - gestor = st.selectbox("Selecione a Faixa Etária do Gestor", list(dados['Faixa etária gestor'].unique())) - carac = st.selectbox("Selecione a Característica do Gestor", list(dados['Caracteristica do Gestor'].unique())) - fat = st.selectbox("Selecione a Característica do Gestor", list(dados['Faturamento estimado'].unique())) - - - -with st.container(): - st.write("---") -# def func(atividade, ramo_atividade, colaboradores, gestor, carac, fat): - -# # wid_atividade.value, wid_ramo_atividade.value, wid_colaboradores.value, wid_gestor.value, wid_carac.value, wid_fat.value = 'COMÉRCIO', 'PANIFICAÇÃO', 'De 10 a 20', '50+', '"Cabeça aberta"', 'Até R$ 50 mil' - - data = { - 'Atividade': [atividade], - 'Ramo de atuação': [ramo_atividade], - 'Colaboradores': [colaboradores], - 'Faixa etária gestor': [gestor], - 
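Earlier in this file the MinMaxScaler/StandardScaler pair is re-fitted on the test split as well as on the training split; the usual pattern, which keeps test-set statistics out of the features, is to fit once on the training data and reuse that scaler. A short sketch with stand-in arrays in place of the app's real feature matrices:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))   # stand-ins for the real train/test features
X_test = rng.normal(size=(30, 5))

scaler = StandardScaler().fit(X_train)   # statistics come from training data only
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)    # same mean/std applied to the test split
```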
'Caracteristica do Gestor': [carac], - 'Faturamento estimado': [fat], - 'Input': [1] - } - - dados_input = pd.DataFrame(data) - - dados_x_input = dados.drop(dados[['Frequencia Suporte', 'Tipo de Suporte','Categoria Suporte','target_gestor','target_suporte']], axis=1) - #dados_x_input['Input'] = 0 - - dados_input = pd.concat([dados_x_input, dados_input]) - - dummies = pd.get_dummies(dados_input) - - dummies_input = dummies[dummies['Input'] == 1] - dummies_input = dummies_input.drop(dummies_input[['Input']], axis=1) - - X = np.array(dummies_input.values) - - - st.write('Prevendo classificação para o dado de entrada... \n 0 - Frequência de suporte + 1 Vez por Semana \n 1 - Frequência de suporte menor do que 1 vez por semana') - - # print('') - - # Fazendo a previsão das probabilidades - proba = clf2.predict_proba(X) - - - st.write(f'Classificação 0 --> {(proba[0][0] * 100):.2f}% de probabilidade') - - st.write(f'Classificação 1 --> {(proba[0][1] * 100):.2f}% de probabilidade') - - - # print(f'Classificação 0 --> {(proba[0][0] * 100):.2f}% de probabilidade') - # print(f'Classificação 1 --> {(proba[0][1] * 100):.2f}% de probabilidade') - - # print() - - - predictions = model.predict(X)[-1] - # print('A classificação predita foi', predictions) - - - # # Performance do modelo: - # print('\nAcurácia do modelo:', round((score * 100),2), '%') - - st.write('A classificação predita foi', predictions) - st.write('\nAcurácia do modelo:', round((score * 100),2), '%') - - diff --git a/spaces/anonymous-pits/pits/models.py b/spaces/anonymous-pits/pits/models.py deleted file mode 100644 index 05f9b29ba782ffee43180f2579a0f237b1ca222a..0000000000000000000000000000000000000000 --- a/spaces/anonymous-pits/pits/models.py +++ /dev/null @@ -1,1383 +0,0 @@ -# from https://github.com/jaywalnut310/vits -# from https://github.com/ncsoft/avocodo -import math -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import modules -import attentions -import commons -from commons import init_weights, get_padding -#for Q option -#from functions import vq, vq_st - -from analysis import Pitch -from pqmf import PQMF - - -class StochasticDurationPredictor(nn.Module): - - def __init__(self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0): - super().__init__() - # it needs to be removed from future version. 
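Back in the good-clients app, a single new row is scored by concatenating it with the full training frame before `get_dummies` so that every category column exists. A lighter-weight way to keep the dummy columns aligned, shown here with placeholder column names and values, is to reindex against the training columns:

```python
import pandas as pd

train = pd.DataFrame({"Atividade": ["COMÉRCIO", "SERVIÇOS"],
                      "Colaboradores": ["Até 10", "50+"]})
train_dummies = pd.get_dummies(train)

new_row = pd.DataFrame({"Atividade": ["COMÉRCIO"], "Colaboradores": ["50+"]})
new_dummies = (pd.get_dummies(new_row)
               .reindex(columns=train_dummies.columns, fill_value=0))  # unseen dummy columns become 0
print(new_dummies)
```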
- filter_channels = in_channels - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, - kernel_size, - n_layers=3, - p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, - kernel_size, - n_layers=3, - p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, - x, - x_mask, - w=None, - g=None, - reverse=False, - noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to( - device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum( - (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum( - -0.5 * (math.log(2 * math.pi) + - (e_q**2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + - (z**2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to( - device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - - def __init__(self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, - filter_channels, - kernel_size, - padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, - filter_channels, - kernel_size, - padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - 
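The predictor here and the text/posterior encoders further down all gate their activations with masks derived from sequence lengths via `commons.sequence_mask`. A stand-alone equivalent, assuming the usual arange-based construction, is:

```python
import torch

def sequence_mask(lengths, max_len=None):
    """True for valid positions, False for padding; shape [B, T]."""
    if max_len is None:
        max_len = int(lengths.max())
    positions = torch.arange(max_len, device=lengths.device)
    return positions.unsqueeze(0) < lengths.unsqueeze(1)

lengths = torch.tensor([3, 5, 2])
mask = sequence_mask(lengths, 6).unsqueeze(1).float()  # [B, 1, T], multiplied onto [B, C, T] features
print(mask)
```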
self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - - def __init__(self, n_vocab, out_channels, hidden_channels, filter_channels, - n_heads, n_layers, kernel_size, p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - self.emb_t = nn.Embedding(6, hidden_channels) - nn.init.normal_(self.emb_t.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder(hidden_channels, filter_channels, - n_heads, n_layers, kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, t, x_lengths): - t_zero = (t == 0) - emb_t = self.emb_t(t) - emb_t[t_zero, :] = 0 - x = (self.emb(x) + emb_t) * math.sqrt( - self.hidden_channels) # [b, t, h] - #x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(1)), - 1).to(x.dtype) - #x = self.encoder(x * x_mask, x_mask) - x = torch.einsum('btd,but->bdt', x, x_mask) - x = self.encoder(x, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = 
torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), - 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(nn.Module): - - def __init__(self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, - upsample_initial_channel, - 7, - 1, - padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d(upsample_initial_channel // (2**i), - upsample_initial_channel // (2**(i + 1)), - k, - u, - padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - self.conv_posts = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2**(i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - if i >= len(self.ups) - 3: - self.conv_posts.append( - Conv1d(ch, 1, 7, 1, padding=3, bias=False)) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - xs = xs + self.resblocks[i * self.num_kernels + j](x) if xs is not None \ - else self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_posts[-1](x) - x = torch.tanh(x) - - return x - - def hier_forward(self, x, g=None): - outs = [] - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - xs = xs + self.resblocks[i * self.num_kernels + j](x) if xs is not None \ - else self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - if i >= self.num_upsamples - 3: - _x = F.leaky_relu(x) - _x = self.conv_posts[i - self.num_upsamples + 3](_x) - _x = torch.tanh(_x) - outs.append(_x) - return outs - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(nn.Module): - - def __init__(self, - period, - kernel_size=5, - stride=3, - use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f( - Conv2d(1, - 32, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(32, - 128, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(128, - 512, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(512, - 1024, 
(kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(1024, - 1024, (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(nn.Module): - - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(nn.Module): - - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + \ - [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) - for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -##### Avocodo -class CoMBDBlock(torch.nn.Module): - - def __init__( - self, - h_u, # List[int], - d_k, # List[int], - d_s, # List[int], - d_d, # List[int], - d_g, # List[int], - d_p, # List[int], - op_f, # int, - op_k, # int, - op_g, # int, - use_spectral_norm=False): - super(CoMBDBlock, self).__init__() - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - - self.convs = nn.ModuleList() - filters = [[1, h_u[0]]] - for i in range(len(h_u) - 1): - filters.append([h_u[i], h_u[i + 1]]) - for _f, _k, _s, _d, _g, _p in zip(filters, d_k, d_s, d_d, d_g, d_p): - self.convs.append( - norm_f( - Conv1d(in_channels=_f[0], - out_channels=_f[1], - kernel_size=_k, - stride=_s, - dilation=_d, - groups=_g, - padding=_p))) - self.projection_conv = norm_f( - Conv1d(in_channels=filters[-1][1], - out_channels=op_f, - kernel_size=op_k, - groups=op_g)) - - def forward(self, x, b_y, b_y_hat): - fmap_r = [] - fmap_g = [] - for block in self.convs: - x = block(x) - x = F.leaky_relu(x, 0.2) - f_r, f_g = x.split([b_y, b_y_hat], dim=0) - fmap_r.append(f_r.tile([2, 1, 1]) if b_y < b_y_hat else f_r) - fmap_g.append(f_g) - x = self.projection_conv(x) - x_r, x_g = x.split([b_y, b_y_hat], dim=0) - return x_r.tile([2, 1, 1 - ]) if b_y < b_y_hat else x_r, x_g, fmap_r, fmap_g - - -class 
CoMBD(torch.nn.Module): - - def __init__(self, use_spectral_norm=False): - super(CoMBD, self).__init__() - self.pqmf_list = nn.ModuleList([ - PQMF(4, 192, 0.13, 10.0), #lv2 - PQMF(2, 256, 0.25, 10.0) #lv1 - ]) - combd_h_u = [[16, 64, 256, 1024, 1024, 1024] for _ in range(3)] - combd_d_k = [[7, 11, 11, 11, 11, 5], [11, 21, 21, 21, 21, 5], - [15, 41, 41, 41, 41, 5]] - combd_d_s = [[1, 1, 4, 4, 4, 1] for _ in range(3)] - combd_d_d = [[1, 1, 1, 1, 1, 1] for _ in range(3)] - combd_d_g = [[1, 4, 16, 64, 256, 1] for _ in range(3)] - - combd_d_p = [[3, 5, 5, 5, 5, 2], [5, 10, 10, 10, 10, 2], - [7, 20, 20, 20, 20, 2]] - combd_op_f = [1, 1, 1] - combd_op_k = [3, 3, 3] - combd_op_g = [1, 1, 1] - - self.blocks = nn.ModuleList() - for _h_u, _d_k, _d_s, _d_d, _d_g, _d_p, _op_f, _op_k, _op_g in zip( - combd_h_u, - combd_d_k, - combd_d_s, - combd_d_d, - combd_d_g, - combd_d_p, - combd_op_f, - combd_op_k, - combd_op_g, - ): - self.blocks.append( - CoMBDBlock( - _h_u, - _d_k, - _d_s, - _d_d, - _d_g, - _d_p, - _op_f, - _op_k, - _op_g, - )) - - def _block_forward(self, ys, ys_hat, blocks): - outs_real = [] - outs_fake = [] - f_maps_real = [] - f_maps_fake = [] - for y, y_hat, block in zip(ys, ys_hat, - blocks): #y:B, y_hat: 2B if i!=-1 else B,B - b_y = y.shape[0] - b_y_hat = y_hat.shape[0] - cat_y = torch.cat([y, y_hat], dim=0) - out_real, out_fake, f_map_r, f_map_g = block(cat_y, b_y, b_y_hat) - outs_real.append(out_real) - outs_fake.append(out_fake) - f_maps_real.append(f_map_r) - f_maps_fake.append(f_map_g) - return outs_real, outs_fake, f_maps_real, f_maps_fake - - def _pqmf_forward(self, ys, ys_hat): - # preprocess for multi_scale forward - multi_scale_inputs_hat = [] - for pqmf_ in self.pqmf_list: - multi_scale_inputs_hat.append(pqmf_.analysis(ys_hat[-1])[:, :1, :]) - - # real - # for hierarchical forward - #outs_real_, f_maps_real_ = self._block_forward( - # ys, self.blocks) - - # for multi_scale forward - #outs_real, f_maps_real = self._block_forward( - # ys[:-1], self.blocks[:-1], outs_real, f_maps_real) - #outs_real.extend(outs_real[:-1]) - #f_maps_real.extend(f_maps_real[:-1]) - - #outs_real = [torch.cat([o,o], dim=0) if i!=len(outs_real_)-1 else o for i,o in enumerate(outs_real_)] - #f_maps_real = [[torch.cat([fmap,fmap], dim=0) if i!=len(f_maps_real_)-1 else fmap for fmap in fmaps ] \ - # for i,fmaps in enumerate(f_maps_real_)] - - inputs_fake = [ - torch.cat([y, multi_scale_inputs_hat[i]], dim=0) - if i != len(ys_hat) - 1 else y for i, y in enumerate(ys_hat) - ] - outs_real, outs_fake, f_maps_real, f_maps_fake = self._block_forward( - ys, inputs_fake, self.blocks) - - # predicted - # for hierarchical forward - #outs_fake, f_maps_fake = self._block_forward( - # inputs_fake, self.blocks) - - #outs_real_, f_maps_real_ = self._block_forward( - # ys, self.blocks) - # for multi_scale forward - #outs_fake, f_maps_fake = self._block_forward( - # multi_scale_inputs_hat, self.blocks[:-1], outs_fake, f_maps_fake) - - return outs_real, outs_fake, f_maps_real, f_maps_fake - - def forward(self, ys, ys_hat): - outs_real, outs_fake, f_maps_real, f_maps_fake = self._pqmf_forward( - ys, ys_hat) - return outs_real, outs_fake, f_maps_real, f_maps_fake - - -class MDC(torch.nn.Module): - - def __init__(self, - in_channels, - out_channels, - strides, - kernel_size, - dilations, - use_spectral_norm=False): - super(MDC, self).__init__() - norm_f = weight_norm if not use_spectral_norm else spectral_norm - self.d_convs = nn.ModuleList() - for _k, _d in zip(kernel_size, dilations): - self.d_convs.append( - norm_f( - 
Conv1d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=_k, - dilation=_d, - padding=get_padding(_k, _d)))) - self.post_conv = norm_f( - Conv1d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=3, - stride=strides, - padding=get_padding(_k, _d))) - self.softmax = torch.nn.Softmax(dim=-1) - - def forward(self, x): - _out = None - for _l in self.d_convs: - _x = torch.unsqueeze(_l(x), -1) - _x = F.leaky_relu(_x, 0.2) - _out = torch.cat([_out, _x], axis=-1) if _out is not None \ - else _x - x = torch.sum(_out, dim=-1) - x = self.post_conv(x) - x = F.leaky_relu(x, 0.2) # @@ - - return x - - -class SBDBlock(torch.nn.Module): - - def __init__(self, - segment_dim, - strides, - filters, - kernel_size, - dilations, - use_spectral_norm=False): - super(SBDBlock, self).__init__() - norm_f = weight_norm if not use_spectral_norm else spectral_norm - self.convs = nn.ModuleList() - filters_in_out = [(segment_dim, filters[0])] - for i in range(len(filters) - 1): - filters_in_out.append([filters[i], filters[i + 1]]) - - for _s, _f, _k, _d in zip(strides, filters_in_out, kernel_size, - dilations): - self.convs.append( - MDC(in_channels=_f[0], - out_channels=_f[1], - strides=_s, - kernel_size=_k, - dilations=_d, - use_spectral_norm=use_spectral_norm)) - self.post_conv = norm_f( - Conv1d(in_channels=_f[1], - out_channels=1, - kernel_size=3, - stride=1, - padding=3 // 2)) # @@ - - def forward(self, x): - fmap_r = [] - fmap_g = [] - for _l in self.convs: - x = _l(x) - f_r, f_g = torch.chunk(x, 2, dim=0) - fmap_r.append(f_r) - fmap_g.append(f_g) - x = self.post_conv(x) # @@ - x_r, x_g = torch.chunk(x, 2, dim=0) - return x_r, x_g, fmap_r, fmap_g - - -class MDCDConfig: - - def __init__(self): - self.pqmf_params = [16, 256, 0.03, 10.0] - self.f_pqmf_params = [64, 256, 0.1, 9.0] - self.filters = [[64, 128, 256, 256, 256], [64, 128, 256, 256, 256], - [64, 128, 256, 256, 256], [32, 64, 128, 128, 128]] - self.kernel_sizes = [[[7, 7, 7], [7, 7, 7], [7, 7, 7], [7, 7, 7], - [7, 7, 7]], - [[5, 5, 5], [5, 5, 5], [5, 5, 5], [5, 5, 5], - [5, 5, 5]], - [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], - [3, 3, 3]], - [[5, 5, 5], [5, 5, 5], [5, 5, 5], [5, 5, 5], - [5, 5, 5]]] - self.dilations = [[[5, 7, 11], [5, 7, 11], [5, 7, 11], [5, 7, 11], - [5, 7, 11]], - [[3, 5, 7], [3, 5, 7], [3, 5, 7], [3, 5, 7], - [3, 5, 7]], - [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3], - [1, 2, 3]], - [[1, 2, 3], [1, 2, 3], [1, 2, 3], [2, 3, 5], - [2, 3, 5]]] - self.strides = [[1, 1, 3, 3, 1], [1, 1, 3, 3, 1], [1, 1, 3, 3, 1], - [1, 1, 3, 3, 1]] - self.band_ranges = [[0, 6], [0, 11], [0, 16], [0, 64]] - self.transpose = [False, False, False, True] - self.segment_size = 8192 - - -class SBD(torch.nn.Module): - - def __init__(self, use_spectral_norm=False): - super(SBD, self).__init__() - self.config = MDCDConfig() - self.pqmf = PQMF(*self.config.pqmf_params) - if True in self.config.transpose: - self.f_pqmf = PQMF(*self.config.f_pqmf_params) - else: - self.f_pqmf = None - - self.discriminators = torch.nn.ModuleList() - - for _f, _k, _d, _s, _br, _tr in zip(self.config.filters, - self.config.kernel_sizes, - self.config.dilations, - self.config.strides, - self.config.band_ranges, - self.config.transpose): - if _tr: - segment_dim = self.config.segment_size // _br[1] - _br[0] - else: - segment_dim = _br[1] - _br[0] - - self.discriminators.append( - SBDBlock(segment_dim=segment_dim, - filters=_f, - kernel_size=_k, - dilations=_d, - strides=_s, - use_spectral_norm=use_spectral_norm)) - - def forward(self, y, y_hat): - y_d_rs = 
[] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - y_in = self.pqmf.analysis(y) - y_hat_in = self.pqmf.analysis(y_hat) - y_in_f = self.f_pqmf.analysis(y) - y_hat_in_f = self.f_pqmf.analysis(y_hat) - - for d, br, tr in zip(self.discriminators, self.config.band_ranges, - self.config.transpose): - if not tr: - _y_in = y_in[:, br[0]:br[1], :] - _y_hat_in = y_hat_in[:, br[0]:br[1], :] - else: - _y_in = y_in_f[:, br[0]:br[1], :] - _y_hat_in = y_hat_in_f[:, br[0]:br[1], :] - _y_in = torch.transpose(_y_in, 1, 2) - _y_hat_in = torch.transpose(_y_hat_in, 1, 2) - #y_d_r, fmap_r = d(_y_in) - #y_d_g, fmap_g = d(_y_hat_in) - cat_y = torch.cat([_y_in, _y_hat_in], dim=0) - y_d_r, y_d_g, fmap_r, fmap_g = d(cat_y) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class AvocodoDiscriminator(nn.Module): - - def __init__(self, use_spectral_norm=False): - super(AvocodoDiscriminator, self).__init__() - self.combd = CoMBD(use_spectral_norm) - self.sbd = SBD(use_spectral_norm) - - def forward(self, y, ys_hat): - ys = [ - self.combd.pqmf_list[0].analysis(y)[:, :1], #lv2 - self.combd.pqmf_list[1].analysis(y)[:, :1], #lv1 - y - ] - y_c_rs, y_c_gs, fmap_c_rs, fmap_c_gs = self.combd(ys, ys_hat) - y_s_rs, y_s_gs, fmap_s_rs, fmap_s_gs = self.sbd(y, ys_hat[-1]) - y_c_rs.extend(y_s_rs) - y_c_gs.extend(y_s_gs) - fmap_c_rs.extend(fmap_s_rs) - fmap_c_gs.extend(fmap_s_gs) - return y_c_rs, y_c_gs, fmap_c_rs, fmap_c_gs - - -##### Avocodo - - -class YingDecoder(nn.Module): - - def __init__(self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - yin_start, - yin_scope, - yin_shift_range, - gin_channels=0): - super().__init__() - self.in_channels = yin_scope - self.out_channels = yin_scope - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.yin_start = yin_start - self.yin_scope = yin_scope - self.yin_shift_range = yin_shift_range - - self.pre = nn.Conv1d(self.in_channels, hidden_channels, 1) - self.dec = modules.WN(hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, self.out_channels, 1) - - def crop_scope(self, x, yin_start, - scope_shift): # x: tensor [B,C,T] #scope_shift: tensor [B] - return torch.stack([ - x[i, yin_start + scope_shift[i]:yin_start + self.yin_scope + - scope_shift[i], :] for i in range(x.shape[0]) - ], - dim=0) - - def infer(self, z_yin, z_mask, g=None): - B = z_yin.shape[0] - scope_shift = torch.randint(-self.yin_shift_range, - self.yin_shift_range, (B, ), - dtype=torch.int) - z_yin_crop = self.crop_scope(z_yin, self.yin_start, scope_shift) - x = self.pre(z_yin_crop) * z_mask - x = self.dec(x, z_mask, g=g) - yin_hat_crop = self.proj(x) * z_mask - return yin_hat_crop - - def forward(self, z_yin, yin_gt, z_mask, g=None): - B = z_yin.shape[0] - scope_shift = torch.randint(-self.yin_shift_range, - self.yin_shift_range, (B, ), - dtype=torch.int) - z_yin_crop = self.crop_scope(z_yin, self.yin_start, scope_shift) - yin_gt_shifted_crop = self.crop_scope(yin_gt, self.yin_start, - scope_shift) - yin_gt_crop = self.crop_scope(yin_gt, self.yin_start, - torch.zeros_like(scope_shift)) - x = self.pre(z_yin_crop) * z_mask - x = self.dec(x, z_mask, g=g) - yin_hat_crop = self.proj(x) * z_mask - return yin_gt_crop, yin_gt_shifted_crop, yin_hat_crop, z_yin_crop, scope_shift - - -# For Q option -#class 
VQEmbedding(nn.Module): -# -# def __init__(self, codebook_size, -# code_channels): -# super().__init__() -# self.embedding = nn.Embedding(codebook_size, code_channels) -# self.embedding.weight.data.uniform_(-1. / codebook_size, -# 1. / codebook_size) -# -# def forward(self, z_e_x): -# z_e_x_ = z_e_x.permute(0, 2, 1).contiguous() -# latent_indices = vq(z_e_x_, self.embedding.weight) -# z_q = self.embedding(latent_indices).permute(0, 2, 1) -# return z_q -# -# def straight_through(self, z_e_x): -# z_e_x_ = z_e_x.permute(0, 2, 1).contiguous() -# z_q_x_st_, indices = vq_st(z_e_x_, self.embedding.weight.detach()) -# z_q_x_st = z_q_x_st_.permute(0, 2, 1).contiguous() -# -# z_q_x_flatten = torch.index_select(self.embedding.weight, -# dim=0, -# index=indices) -# z_q_x_ = z_q_x_flatten.view_as(z_e_x_) -# z_q_x = z_q_x_.permute(0, 2, 1).contiguous() -# return z_q_x_st, z_q_x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - n_vocab, - spec_channels, - segment_size, - midi_start, - midi_end, - octave_range, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - yin_channels, - yin_start, - yin_scope, - yin_shift_range, - n_speakers=0, - gin_channels=0, - use_sdp=True, - #codebook_size=256, #for Q option - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.yin_channels = yin_channels - self.yin_start = yin_start - self.yin_scope = yin_scope - - self.use_sdp = use_sdp - self.enc_p = TextEncoder(n_vocab, inter_channels, hidden_channels, - filter_channels, n_heads, n_layers, - kernel_size, p_dropout) - self.dec = Generator( - inter_channels - yin_channels + - yin_scope, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels) - - self.enc_spec = PosteriorEncoder(spec_channels, - inter_channels - yin_channels, - inter_channels - yin_channels, - 5, - 1, - 16, - gin_channels=gin_channels) - - self.enc_pitch = PosteriorEncoder(yin_channels, - yin_channels, - yin_channels, - 5, - 1, - 16, - gin_channels=gin_channels) - - self.flow = ResidualCouplingBlock(inter_channels, - hidden_channels, - 5, - 1, - 4, - gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, - 192, - 3, - 0.5, - 4, - gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, - 256, - 3, - 0.5, - gin_channels=gin_channels) - - self.yin_dec = YingDecoder(yin_scope, - 5, - 1, - 4, - yin_start, - yin_scope, - yin_shift_range, - gin_channels=gin_channels) - - #self.vq = VQEmbedding(codebook_size, inter_channels - yin_channels)#inter_channels // 2) - self.emb_g = 
nn.Embedding(self.n_speakers, gin_channels) - - self.pitch = Pitch(midi_start=midi_start, - midi_end=midi_end, - octave_range=octave_range) - - def crop_scope( - self, - x, - scope_shift=0): # x: list #need to modify for non-scalar shift - return [ - i[:, self.yin_start + scope_shift:self.yin_start + self.yin_scope + - scope_shift, :] for i in x - ] - - def crop_scope_tensor( - self, x, - scope_shift): # x: tensor [B,C,T] #scope_shift: tensor [B] - return torch.stack([ - x[i, self.yin_start + scope_shift[i]:self.yin_start + - self.yin_scope + scope_shift[i], :] for i in range(x.shape[0]) - ], - dim=0) - - def yin_dec_infer(self, z_yin, z_mask, sid=None): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - return self.yin_dec.infer(z_yin, z_mask, g) - - def forward(self, - x, - t, - x_lengths, - y, - y_lengths, - ying, - ying_lengths, - sid=None, - scope_shift=0): - x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z_spec, m_spec, logs_spec, spec_mask = self.enc_spec(y, y_lengths, g=g) - - #for Q option - #z_spec_q_st, z_spec_q = self.vq.straight_through(z_spec) - #z_spec_q_st = z_spec_q_st * spec_mask - #z_spec_q = z_spec_q * spec_mask - - z_yin, m_yin, logs_yin, yin_mask = self.enc_pitch(ying, y_lengths, g=g) - z_yin_crop, logs_yin_crop, m_yin_crop = self.crop_scope( - [z_yin, logs_yin, m_yin], scope_shift) - - #yin dec loss - yin_gt_crop, yin_gt_shifted_crop, yin_dec_crop, z_yin_crop_shifted, scope_shift = self.yin_dec( - z_yin, ying, yin_mask, g) - - z = torch.cat([z_spec, z_yin], dim=1) - logs_q = torch.cat([logs_spec, logs_yin], dim=1) - m_q = torch.cat([m_spec, m_yin], dim=1) - y_mask = spec_mask - - z_p = self.flow(z, y_mask, g=g) - - z_dec = torch.cat([z_spec, z_yin_crop], dim=1) - - z_dec_shifted = torch.cat([z_spec.detach(), z_yin_crop_shifted], dim=1) - z_dec_ = torch.cat([z_dec, z_dec_shifted], dim=0) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - # [b, 1, t_s] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], - keepdim=True) - # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s], z_p: [b,d,t] - #neg_cent2 = torch.matmul(-0.5 * (z_p**2).transpose(1, 2), s_p_sq_r) - neg_cent2 = torch.einsum('bdt, bds -> bts', -0.5 * (z_p**2), - s_p_sq_r) - # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - #neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) - neg_cent3 = torch.einsum('bdt, bds -> bts', z_p, (m_p * s_p_sq_r)) - neg_cent4 = torch.sum(-0.5 * (m_p**2) * s_p_sq_r, [1], - keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze( - y_mask, -1) - from monotonic_align import maximum_path - attn = maximum_path(neg_cent, - attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum( - (logw - logw_)**2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p) - logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p) - - #z_slice, ids_slice = commons.rand_slice_segments(z_dec, y_lengths, self.segment_size) - #o = self.dec(z_slice, g=g) - z_slice, ids_slice = commons.rand_slice_segments_for_cat( - z_dec_, torch.cat([y_lengths, 
y_lengths], dim=0), - self.segment_size) - o_ = self.dec.hier_forward(z_slice, g=torch.cat([g, g], dim=0)) - o = [torch.chunk(o_hier, 2, dim=0)[0] for o_hier in o_] - - o_pad = F.pad(o_[-1], (768, 768 + (-o_[-1].shape[-1]) % 256 + 256 * - (o_[-1].shape[-1] % 256 == 0)), - mode='constant').squeeze(1) - yin_hat = self.pitch.yingram(o_pad) - yin_hat_crop = self.crop_scope([yin_hat])[0] - yin_hat_shifted = self.crop_scope_tensor( - torch.chunk(yin_hat, 2, dim=0)[0], scope_shift) - return o, l_length, attn, ids_slice, x_mask, y_mask, o_, \ - (z, z_p, m_p, logs_p, m_q, logs_q), \ - (z_dec_), \ - (z_spec, m_spec, logs_spec, spec_mask, z_yin, m_yin, logs_yin, yin_mask), \ - (yin_gt_crop, yin_gt_shifted_crop, yin_dec_crop, yin_hat_crop, scope_shift, yin_hat_shifted) - - def infer(self, - x, - t, - x_lengths, - sid=None, - noise_scale=1, - length_scale=1, - noise_scale_w=1., - max_len=None, - scope_shift=0): #need to fix #vector scope shift needed - x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, - x_mask, - g=g, - reverse=True, - noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), - 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p) - logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p) - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - z_spec, z_yin = torch.split(z, - self.inter_channels - self.yin_channels, - dim=1) - z_yin_crop = self.crop_scope([z_yin], scope_shift)[0] - z_crop = torch.cat([z_spec, z_yin_crop], dim=1) - o = self.dec((z_crop * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z_crop, z, z_p, m_p, logs_p) - - def infer_pre_decoder(self, - x, - t, - x_lengths, - sid=None, - noise_scale=1., - length_scale=1., - noise_scale_w=1., - max_len=None, - scope_shift=0): - x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, - x_mask, - g=g, - reverse=True, - noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), - 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p) - logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p) - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - z_spec, z_yin = torch.split(z, - self.inter_channels - self.yin_channels, - dim=1) - z_yin_crop = self.crop_scope([z_yin], scope_shift)[0] - z_crop = torch.cat([z_spec, z_yin_crop], dim=1) - decoder_inputs = z_crop * y_mask - return decoder_inputs, attn, y_mask, (z_crop, z, z_p, m_p, logs_p) - - def infer_pre_lr( - self, - x, - t, - x_lengths, - sid=None, - length_scale=1, - noise_scale_w=1., 
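In the forward pass above, every (frame, text token) pair is scored by the log-density of the frame latent under the token's diagonal Gaussian prior, expanded into four sum/einsum terms before monotonic alignment search. A small self-contained check that this decomposition matches the direct computation, on random tensors rather than the model's real ones:

```python
import math
import torch

B, D, T_text, T_frames = 2, 8, 5, 11
m_p = torch.randn(B, D, T_text)            # token-level means
logs_p = torch.randn(B, D, T_text) * 0.1   # token-level log-stds
z_p = torch.randn(B, D, T_frames)          # frame-level latents
s_p_sq_r = torch.exp(-2 * logs_p)          # 1 / sigma^2

# Decomposed form, as in the forward pass: [B, T_frames, T_text]
neg_cent = (
    torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, 1, keepdim=True)
    + torch.einsum("bdt,bds->bts", -0.5 * z_p ** 2, s_p_sq_r)
    + torch.einsum("bdt,bds->bts", z_p, m_p * s_p_sq_r)
    + torch.sum(-0.5 * m_p ** 2 * s_p_sq_r, 1, keepdim=True)
)

# Direct per-pair Gaussian log-likelihood for comparison.
diff = z_p.unsqueeze(-1) - m_p.unsqueeze(-2)   # [B, D, T_frames, T_text]
direct = (-0.5 * math.log(2 * math.pi) - logs_p.unsqueeze(-2)
          - 0.5 * diff ** 2 * s_p_sq_r.unsqueeze(-2)).sum(1)
assert torch.allclose(neg_cent, direct, atol=1e-4)
```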
- ): - x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, - x_mask, - g=g, - reverse=True, - noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - return w_ceil, x, m_p, logs_p, x_mask, g - - def infer_lr(self, w_ceil, x, m_p, logs_p, x_mask): - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), - 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p) - logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p) - return m_p, logs_p, y_mask - - def infer_post_lr_pre_decoder(self, - m_p, - logs_p, - g, - y_mask, - noise_scale=1, - scope_shift=0): - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - z_spec, z_yin = torch.split(z, - self.inter_channels - self.yin_channels, - dim=1) - - z_yin_crop = self.crop_scope([z_yin], scope_shift)[0] - z_crop = torch.cat([z_spec, z_yin_crop], dim=1) - decoder_inputs = z_crop * y_mask - - return decoder_inputs, y_mask, (z_crop, z, z_p, m_p, logs_p) - - def infer_decode_chunk(self, decoder_inputs, sid=None): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - return self.dec(decoder_inputs, g=g) - - diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/time_counter.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/time_counter.py deleted file mode 100644 index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/time_counter.py +++ /dev/null @@ -1,62 +0,0 @@ -import json -import time - - -class TimeCounter: - def __init__(self) -> None: - pass - - def clear(self): - self.timedict = {} - self.basetime = time.perf_counter() - - def timeit(self, name): - nowtime = time.perf_counter() - self.basetime - self.timedict[name] = nowtime - self.basetime = time.perf_counter() - - -class TimeHolder: - def __init__(self) -> None: - self.timedict = {} - - def update(self, _timedict: dict): - for k, v in _timedict.items(): - if k not in self.timedict: - self.timedict[k] = AverageMeter(name=k, val_only=True) - self.timedict[k].update(val=v) - - def final_res(self): - return {k: v.avg for k, v in self.timedict.items()} - - def __str__(self): - return json.dumps(self.final_res(), indent=2) - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self, name, fmt=":f", val_only=False): - self.name = name - self.fmt = fmt - self.val_only = val_only - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - if self.val_only: - fmtstr = "{name} {val" + self.fmt + "}" - else: - fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})" - return fmtstr.format(**self.__dict__) diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/ngrok.py b/spaces/aodianyun/stable-diffusion-webui/modules/ngrok.py deleted file mode 
100644 index 3df2c06bf1f10d49b7e9397758bc4f3661a51ba7..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/ngrok.py +++ /dev/null @@ -1,26 +0,0 @@ -from pyngrok import ngrok, conf, exception - -def connect(token, port, region): - account = None - if token is None: - token = 'None' - else: - if ':' in token: - # token = authtoken:username:password - account = token.split(':')[1] + ':' + token.split(':')[-1] - token = token.split(':')[0] - - config = conf.PyngrokConfig( - auth_token=token, region=region - ) - try: - if account is None: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True).public_url - else: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True, auth=account).public_url - except exception.PyngrokNgrokError: - print(f'Invalid ngrok authtoken, ngrok connection aborted.\n' - f'Your token: {token}, get the right one on https://dashboard.ngrok.com/get-started/your-authtoken') - else: - print(f'ngrok connected to localhost:{port}! URL: {public_url}\n' - 'You can use this link after the launch is complete.') diff --git a/spaces/arch-123/bingo/src/lib/storage.ts b/spaces/arch-123/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/arshian/linearepitopemodels/app.py b/spaces/arshian/linearepitopemodels/app.py deleted file mode 100644 index acad3df7aba1f04eda5ac00b7947c573e4900b7b..0000000000000000000000000000000000000000 --- a/spaces/arshian/linearepitopemodels/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import streamlit as st -from transformers import TFAutoModelForSequenceClassification -from transformers import AutoTokenizer -import pandas as pd -import tensorflow as tf - -# title -st.title('Ravens AI') - -# text input with label -sequence = st.text_input('Enter Amino Acid Sequence') - -model_type = st.radio( - "Choose Linear Epitope Classifier", - ('Linear T-Cells (MHC Class I Restriction)', 'Linear T-Cells (MHC Class II Restriction)', 'Linear B-Cell')) - -# windows length slider -# length = st.slider('Window Length', 1, 50, 10) -threshold = st.slider('Probability Threshold', 0.0, 1.0, 0.5) - -model_checkpoint = "facebook/esm2_t6_8M_UR50D" - -tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) - -# try: -if model_type == 'Linear T-Cells (MHC Class I Restriction)': - try: - model = TFAutoModelForSequenceClassification.from_pretrained('classifier') - except: - st.warning("We're experiencing server issues. Please try again later!", icon="⚠️") -elif model_type == 'Linear T-Cells (MHC Class II Restriction)': - try: - model = TFAutoModelForSequenceClassification.from_pretrained('classifier2') - except: - st.warning("We're experiencing server issues. 
Please try again later!", icon="⚠️") -elif model_type == 'Linear B-Cell': - try: - model = TFAutoModelForSequenceClassification.from_pretrained('bcell') - except: - st.warning("We're experiencing server issues. Please refresh and try again!", icon="⚠️") -try: - # submit button - if st.button('Submit'): - locations = [] - peptide_name = sequence - peptide = tokenizer(peptide_name, return_tensors="tf") - output = model(peptide) - locations.append([peptide_name, output.logits.numpy()[0][0]]) - - locations = pd.DataFrame(locations, columns = ['Peptide', 'Probability']) - - # display table with sequence and probability as the headers - def color_survived(x: float): # x between 0 and 1 - # red to green scale based on x - # 0 -> red - # 0.5 -> clear - # 1 -> green - - # red - if x < threshold: - r = 179 - g = 40 - b = 2 - # green - else: - r = 18 - g = 150 - b = 6 - - return f'background-color: rgb({r}, {g}, {b})' - - st.table(locations.style.applymap(color_survived, subset=['Probability'])) -except NameError: - st.warning("We're experiencing server issues. Please refresh and try again!", icon="⚠️") -# except InvalidArgumentError: -# st.warning("We're experiencing server issues. Please try again later!", icon="⚠️") diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/align_tts/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/align_tts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_asn1.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_asn1.py deleted file mode 100644 index 68292f3067ed2d3bbd527181c16a37d5ac6a195c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_asn1.py +++ /dev/null @@ -1,784 +0,0 @@ -# -# SelfTest/Util/test_asn.py: Self-test for the Crypto.Util.asn1 module -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
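The linear-epitope Streamlit app earlier in this hunk compares the raw first logit against a 0–1 threshold slider; if the fine-tuned head emits a single logit, that score would normally be passed through a sigmoid to land on the 0–1 scale before thresholding. A sketch of that reading, assuming a hypothetical local fine-tuned checkpoint in `classifier` as in the app:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFAutoModelForSequenceClassification.from_pretrained("classifier")  # hypothetical fine-tuned weights

inputs = tokenizer("SIINFEKL", return_tensors="tf")   # example peptide
logit = model(inputs).logits[0][0]
probability = float(tf.math.sigmoid(logit))           # squash the single logit to 0..1 before thresholding
print(probability)
```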
-# =================================================================== - -"""Self-tests for Crypto.Util.asn1""" - -import unittest - -from Crypto.Util.py3compat import * -from Crypto.Util.asn1 import (DerObject, DerSetOf, DerInteger, - DerBitString, - DerObjectId, DerNull, DerOctetString, - DerSequence) - -class DerObjectTests(unittest.TestCase): - - def testObjInit1(self): - # Fail with invalid tag format (must be 1 byte) - self.assertRaises(ValueError, DerObject, b('\x00\x99')) - # Fail with invalid implicit tag (must be <0x1F) - self.assertRaises(ValueError, DerObject, 0x1F) - - # ------ - - def testObjEncode1(self): - # No payload - der = DerObject(b('\x02')) - self.assertEqual(der.encode(), b('\x02\x00')) - # Small payload (primitive) - der.payload = b('\x45') - self.assertEqual(der.encode(), b('\x02\x01\x45')) - # Invariant - self.assertEqual(der.encode(), b('\x02\x01\x45')) - # Initialize with numerical tag - der = DerObject(0x04) - der.payload = b('\x45') - self.assertEqual(der.encode(), b('\x04\x01\x45')) - # Initialize with constructed type - der = DerObject(b('\x10'), constructed=True) - self.assertEqual(der.encode(), b('\x30\x00')) - - def testObjEncode2(self): - # Initialize with payload - der = DerObject(0x03, b('\x12\x12')) - self.assertEqual(der.encode(), b('\x03\x02\x12\x12')) - - def testObjEncode3(self): - # Long payload - der = DerObject(b('\x10')) - der.payload = b("0")*128 - self.assertEqual(der.encode(), b('\x10\x81\x80' + "0"*128)) - - def testObjEncode4(self): - # Implicit tags (constructed) - der = DerObject(0x10, implicit=1, constructed=True) - der.payload = b('ppll') - self.assertEqual(der.encode(), b('\xa1\x04ppll')) - # Implicit tags (primitive) - der = DerObject(0x02, implicit=0x1E, constructed=False) - der.payload = b('ppll') - self.assertEqual(der.encode(), b('\x9E\x04ppll')) - - def testObjEncode5(self): - # Encode type with explicit tag - der = DerObject(0x10, explicit=5) - der.payload = b("xxll") - self.assertEqual(der.encode(), b("\xa5\x06\x10\x04xxll")) - - # ----- - - def testObjDecode1(self): - # Decode short payload - der = DerObject(0x02) - der.decode(b('\x02\x02\x01\x02')) - self.assertEqual(der.payload, b("\x01\x02")) - self.assertEqual(der._tag_octet, 0x02) - - def testObjDecode2(self): - # Decode long payload - der = DerObject(0x02) - der.decode(b('\x02\x81\x80' + "1"*128)) - self.assertEqual(der.payload, b("1")*128) - self.assertEqual(der._tag_octet, 0x02) - - def testObjDecode3(self): - # Decode payload with too much data gives error - der = DerObject(0x02) - self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02\xFF')) - # Decode payload with too little data gives error - der = DerObject(0x02) - self.assertRaises(ValueError, der.decode, b('\x02\x02\x01')) - - def testObjDecode4(self): - # Decode implicit tag (primitive) - der = DerObject(0x02, constructed=False, implicit=0xF) - self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) - der.decode(b('\x8F\x01\x00')) - self.assertEqual(der.payload, b('\x00')) - # Decode implicit tag (constructed) - der = DerObject(0x02, constructed=True, implicit=0xF) - self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) - der.decode(b('\xAF\x01\x00')) - self.assertEqual(der.payload, b('\x00')) - - def testObjDecode5(self): - # Decode payload with unexpected tag gives error - der = DerObject(0x02) - self.assertRaises(ValueError, der.decode, b('\x03\x02\x01\x02')) - - def testObjDecode6(self): - # Arbitrary DER object - der = DerObject() - der.decode(b('\x65\x01\x88')) - 
self.assertEqual(der._tag_octet, 0x65) - self.assertEqual(der.payload, b('\x88')) - - def testObjDecode7(self): - # Decode explicit tag - der = DerObject(0x10, explicit=5) - der.decode(b("\xa5\x06\x10\x04xxll")) - self.assertEqual(der._inner_tag_octet, 0x10) - self.assertEqual(der.payload, b('xxll')) - - # Explicit tag may be 0 - der = DerObject(0x10, explicit=0) - der.decode(b("\xa0\x06\x10\x04xxll")) - self.assertEqual(der._inner_tag_octet, 0x10) - self.assertEqual(der.payload, b('xxll')) - - def testObjDecode8(self): - # Verify that decode returns the object - der = DerObject(0x02) - self.assertEqual(der, der.decode(b('\x02\x02\x01\x02'))) - -class DerIntegerTests(unittest.TestCase): - - def testInit1(self): - der = DerInteger(1) - self.assertEqual(der.encode(), b('\x02\x01\x01')) - - def testEncode1(self): - # Single-byte integers - # Value 0 - der = DerInteger(0) - self.assertEqual(der.encode(), b('\x02\x01\x00')) - # Value 1 - der = DerInteger(1) - self.assertEqual(der.encode(), b('\x02\x01\x01')) - # Value 127 - der = DerInteger(127) - self.assertEqual(der.encode(), b('\x02\x01\x7F')) - - def testEncode2(self): - # Multi-byte integers - # Value 128 - der = DerInteger(128) - self.assertEqual(der.encode(), b('\x02\x02\x00\x80')) - # Value 0x180 - der = DerInteger(0x180) - self.assertEqual(der.encode(), b('\x02\x02\x01\x80')) - # One very long integer - der = DerInteger(2**2048) - self.assertEqual(der.encode(), - b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) - - def testEncode3(self): - # Negative integers - # Value -1 - der = DerInteger(-1) - self.assertEqual(der.encode(), b('\x02\x01\xFF')) - # Value -128 - der = DerInteger(-128) - self.assertEqual(der.encode(), b('\x02\x01\x80')) - # Value - der = DerInteger(-87873) - self.assertEqual(der.encode(), b('\x02\x03\xFE\xA8\xBF')) - - def testEncode4(self): - # Explicit encoding - number = DerInteger(0x34, explicit=3) - self.assertEqual(number.encode(), b('\xa3\x03\x02\x01\x34')) - - # ----- - - def testDecode1(self): - # Single-byte integer - der = DerInteger() - # Value 0 - der.decode(b('\x02\x01\x00')) - self.assertEqual(der.value, 0) - # Value 1 - der.decode(b('\x02\x01\x01')) - self.assertEqual(der.value, 1) - # Value 127 - der.decode(b('\x02\x01\x7F')) - self.assertEqual(der.value, 127) - - def testDecode2(self): - # Multi-byte integer - der = DerInteger() - # Value 0x180L - 
der.decode(b('\x02\x02\x01\x80')) - self.assertEqual(der.value,0x180) - # One very long integer - der.decode( - b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) - self.assertEqual(der.value,2**2048) - - def testDecode3(self): - # Negative integer - der = DerInteger() - # Value -1 - der.decode(b('\x02\x01\xFF')) - self.assertEqual(der.value, -1) - # Value -32768 - der.decode(b('\x02\x02\x80\x00')) - self.assertEqual(der.value, -32768) - - def testDecode5(self): - # We still accept BER integer format - der = DerInteger() - # Redundant leading zeroes - der.decode(b('\x02\x02\x00\x01')) - self.assertEqual(der.value, 1) - # Redundant leading 0xFF - der.decode(b('\x02\x02\xFF\xFF')) - self.assertEqual(der.value, -1) - # Empty payload - der.decode(b('\x02\x00')) - self.assertEqual(der.value, 0) - - def testDecode6(self): - # Explicit encoding - number = DerInteger(explicit=3) - number.decode(b('\xa3\x03\x02\x01\x34')) - self.assertEqual(number.value, 0x34) - - def testDecode7(self): - # Verify decode returns the DerInteger - der = DerInteger() - self.assertEqual(der, der.decode(b('\x02\x01\x7F'))) - - ### - - def testStrict1(self): - number = DerInteger() - - number.decode(b'\x02\x02\x00\x01') - number.decode(b'\x02\x02\x00\x7F') - self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x01', strict=True) - self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x7F', strict=True) - - ### - - def testErrDecode1(self): - # Wide length field - der = DerInteger() - self.assertRaises(ValueError, der.decode, b('\x02\x81\x01\x01')) - - -class DerSequenceTests(unittest.TestCase): - - def testInit1(self): - der = DerSequence([1, DerInteger(2), b('0\x00')]) - self.assertEqual(der.encode(), b('0\x08\x02\x01\x01\x02\x01\x020\x00')) - - def testEncode1(self): - # Empty sequence - der = DerSequence() - self.assertEqual(der.encode(), b('0\x00')) - self.assertFalse(der.hasOnlyInts()) - # One single-byte integer (zero) - der.append(0) - self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) - self.assertEqual(der.hasInts(),1) - self.assertEqual(der.hasInts(False),1) - self.assertTrue(der.hasOnlyInts()) - self.assertTrue(der.hasOnlyInts(False)) - # Invariant - self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) - - def testEncode2(self): - # Indexing - der = DerSequence() - der.append(0) - der[0] = 1 - self.assertEqual(len(der),1) - self.assertEqual(der[0],1) - 
self.assertEqual(der[-1],1) - self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) - # - der[:] = [1] - self.assertEqual(len(der),1) - self.assertEqual(der[0],1) - self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) - - def testEncode3(self): - # One multi-byte integer (non-zero) - der = DerSequence() - der.append(0x180) - self.assertEqual(der.encode(), b('0\x04\x02\x02\x01\x80')) - - def testEncode4(self): - # One very long integer - der = DerSequence() - der.append(2**2048) - self.assertEqual(der.encode(), b('0\x82\x01\x05')+ - b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) - - def testEncode5(self): - der = DerSequence() - der += 1 - der += b('\x30\x00') - self.assertEqual(der.encode(), b('\x30\x05\x02\x01\x01\x30\x00')) - - def testEncode6(self): - # Two positive integers - der = DerSequence() - der.append(0x180) - der.append(0xFF) - self.assertEqual(der.encode(), b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) - self.assertTrue(der.hasOnlyInts()) - self.assertTrue(der.hasOnlyInts(False)) - # Two mixed integers - der = DerSequence() - der.append(2) - der.append(-2) - self.assertEqual(der.encode(), b('0\x06\x02\x01\x02\x02\x01\xFE')) - self.assertEqual(der.hasInts(), 1) - self.assertEqual(der.hasInts(False), 2) - self.assertFalse(der.hasOnlyInts()) - self.assertTrue(der.hasOnlyInts(False)) - # - der.append(0x01) - der[1:] = [9,8] - self.assertEqual(len(der),3) - self.assertEqual(der[1:],[9,8]) - self.assertEqual(der[1:-1],[9]) - self.assertEqual(der.encode(), b('0\x09\x02\x01\x02\x02\x01\x09\x02\x01\x08')) - - def testEncode7(self): - # One integer and another type (already encoded) - der = DerSequence() - der.append(0x180) - der.append(b('0\x03\x02\x01\x05')) - self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) - self.assertFalse(der.hasOnlyInts()) - - def testEncode8(self): - # One integer and another type (yet to encode) - der = DerSequence() - der.append(0x180) - der.append(DerSequence([5])) - self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) - self.assertFalse(der.hasOnlyInts()) - - #### - - def testDecode1(self): - # Empty sequence - der = DerSequence() - der.decode(b('0\x00')) - self.assertEqual(len(der),0) - # One single-byte integer (zero) - der.decode(b('0\x03\x02\x01\x00')) - self.assertEqual(len(der),1) - self.assertEqual(der[0],0) - # Invariant - 
der.decode(b('0\x03\x02\x01\x00')) - self.assertEqual(len(der),1) - self.assertEqual(der[0],0) - - def testDecode2(self): - # One single-byte integer (non-zero) - der = DerSequence() - der.decode(b('0\x03\x02\x01\x7f')) - self.assertEqual(len(der),1) - self.assertEqual(der[0],127) - - def testDecode4(self): - # One very long integer - der = DerSequence() - der.decode(b('0\x82\x01\x05')+ - b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ - b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) - self.assertEqual(len(der),1) - self.assertEqual(der[0],2**2048) - - def testDecode6(self): - # Two integers - der = DerSequence() - der.decode(b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) - self.assertEqual(len(der),2) - self.assertEqual(der[0],0x180) - self.assertEqual(der[1],0xFF) - - def testDecode7(self): - # One integer and 2 other types - der = DerSequence() - der.decode(b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) - self.assertEqual(len(der),3) - self.assertEqual(der[0],0x180) - self.assertEqual(der[1],b('\x24\x02\xb6\x63')) - self.assertEqual(der[2],b('\x12\x00')) - - def testDecode8(self): - # Only 2 other types - der = DerSequence() - der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00')) - self.assertEqual(len(der),2) - self.assertEqual(der[0],b('\x24\x02\xb6\x63')) - self.assertEqual(der[1],b('\x12\x00')) - self.assertEqual(der.hasInts(), 0) - self.assertEqual(der.hasInts(False), 0) - self.assertFalse(der.hasOnlyInts()) - self.assertFalse(der.hasOnlyInts(False)) - - def testDecode9(self): - # Verify that decode returns itself - der = DerSequence() - self.assertEqual(der, der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00'))) - - ### - - def testErrDecode1(self): - # Not a sequence - der = DerSequence() - self.assertRaises(ValueError, der.decode, b('')) - self.assertRaises(ValueError, der.decode, b('\x00')) - self.assertRaises(ValueError, der.decode, b('\x30')) - - def testErrDecode2(self): - der = DerSequence() - # Too much data - self.assertRaises(ValueError, der.decode, b('\x30\x00\x00')) - - def testErrDecode3(self): - # Wrong length format - der = DerSequence() - # Missing length in sub-item - self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x01\x01\x00')) - # Valid BER, but invalid DER length - self.assertRaises(ValueError, der.decode, b('\x30\x81\x03\x02\x01\x01')) - self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x81\x01\x01')) - - def test_expected_nr_elements(self): - der_bin = 
DerSequence([1, 2, 3]).encode() - - DerSequence().decode(der_bin, nr_elements=3) - DerSequence().decode(der_bin, nr_elements=(2,3)) - self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=1) - self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=(4,5)) - - def test_expected_only_integers(self): - - der_bin1 = DerSequence([1, 2, 3]).encode() - der_bin2 = DerSequence([1, 2, DerSequence([3, 4])]).encode() - - DerSequence().decode(der_bin1, only_ints_expected=True) - DerSequence().decode(der_bin1, only_ints_expected=False) - DerSequence().decode(der_bin2, only_ints_expected=False) - self.assertRaises(ValueError, DerSequence().decode, der_bin2, only_ints_expected=True) - - -class DerOctetStringTests(unittest.TestCase): - - def testInit1(self): - der = DerOctetString(b('\xFF')) - self.assertEqual(der.encode(), b('\x04\x01\xFF')) - - def testEncode1(self): - # Empty sequence - der = DerOctetString() - self.assertEqual(der.encode(), b('\x04\x00')) - # Small payload - der.payload = b('\x01\x02') - self.assertEqual(der.encode(), b('\x04\x02\x01\x02')) - - #### - - def testDecode1(self): - # Empty sequence - der = DerOctetString() - der.decode(b('\x04\x00')) - self.assertEqual(der.payload, b('')) - # Small payload - der.decode(b('\x04\x02\x01\x02')) - self.assertEqual(der.payload, b('\x01\x02')) - - def testDecode2(self): - # Verify that decode returns the object - der = DerOctetString() - self.assertEqual(der, der.decode(b('\x04\x00'))) - - def testErrDecode1(self): - # No leftovers allowed - der = DerOctetString() - self.assertRaises(ValueError, der.decode, b('\x04\x01\x01\xff')) - -class DerNullTests(unittest.TestCase): - - def testEncode1(self): - der = DerNull() - self.assertEqual(der.encode(), b('\x05\x00')) - - #### - - def testDecode1(self): - # Empty sequence - der = DerNull() - self.assertEqual(der, der.decode(b('\x05\x00'))) - -class DerObjectIdTests(unittest.TestCase): - - def testInit1(self): - der = DerObjectId("1.1") - self.assertEqual(der.encode(), b('\x06\x01)')) - - def testEncode1(self): - der = DerObjectId('1.2.840.113549.1.1.1') - self.assertEqual(der.encode(), b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) - # - der = DerObjectId() - der.value = '1.2.840.113549.1.1.1' - self.assertEqual(der.encode(), b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) - - #### - - def testDecode1(self): - # Empty sequence - der = DerObjectId() - der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) - self.assertEqual(der.value, '1.2.840.113549.1.1.1') - - def testDecode2(self): - # Verify that decode returns the object - der = DerObjectId() - self.assertEqual(der, - der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01'))) - - def testDecode3(self): - der = DerObjectId() - der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x00\x01')) - self.assertEqual(der.value, '1.2.840.113549.1.0.1') - - -class DerBitStringTests(unittest.TestCase): - - def testInit1(self): - der = DerBitString(b("\xFF")) - self.assertEqual(der.encode(), b('\x03\x02\x00\xFF')) - - def testInit2(self): - der = DerBitString(DerInteger(1)) - self.assertEqual(der.encode(), b('\x03\x04\x00\x02\x01\x01')) - - def testEncode1(self): - # Empty sequence - der = DerBitString() - self.assertEqual(der.encode(), b('\x03\x01\x00')) - # Small payload - der = DerBitString(b('\x01\x02')) - self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) - # Small payload - der = DerBitString() - der.value = b('\x01\x02') - self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) - - #### - - def 
testDecode1(self): - # Empty sequence - der = DerBitString() - der.decode(b('\x03\x00')) - self.assertEqual(der.value, b('')) - # Small payload - der.decode(b('\x03\x03\x00\x01\x02')) - self.assertEqual(der.value, b('\x01\x02')) - - def testDecode2(self): - # Verify that decode returns the object - der = DerBitString() - self.assertEqual(der, der.decode(b('\x03\x00'))) - - -class DerSetOfTests(unittest.TestCase): - - def testInit1(self): - der = DerSetOf([DerInteger(1), DerInteger(2)]) - self.assertEqual(der.encode(), b('1\x06\x02\x01\x01\x02\x01\x02')) - - def testEncode1(self): - # Empty set - der = DerSetOf() - self.assertEqual(der.encode(), b('1\x00')) - # One single-byte integer (zero) - der.add(0) - self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) - # Invariant - self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) - - def testEncode2(self): - # Two integers - der = DerSetOf() - der.add(0x180) - der.add(0xFF) - self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) - # Initialize with integers - der = DerSetOf([0x180, 0xFF]) - self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) - - def testEncode3(self): - # One integer and another type (no matter what it is) - der = DerSetOf() - der.add(0x180) - self.assertRaises(ValueError, der.add, b('\x00\x02\x00\x00')) - - def testEncode4(self): - # Only non integers - der = DerSetOf() - der.add(b('\x01\x00')) - der.add(b('\x01\x01\x01')) - self.assertEqual(der.encode(), b('1\x05\x01\x00\x01\x01\x01')) - - #### - - def testDecode1(self): - # Empty sequence - der = DerSetOf() - der.decode(b('1\x00')) - self.assertEqual(len(der),0) - # One single-byte integer (zero) - der.decode(b('1\x03\x02\x01\x00')) - self.assertEqual(len(der),1) - self.assertEqual(list(der),[0]) - - def testDecode2(self): - # Two integers - der = DerSetOf() - der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff')) - self.assertEqual(len(der),2) - l = list(der) - self.assertTrue(0x180 in l) - self.assertTrue(0xFF in l) - - def testDecode3(self): - # One integer and 2 other types - der = DerSetOf() - #import pdb; pdb.set_trace() - self.assertRaises(ValueError, der.decode, - b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) - - def testDecode4(self): - # Verify that decode returns the object - der = DerSetOf() - self.assertEqual(der, - der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff'))) - - ### - - def testErrDecode1(self): - # No leftovers allowed - der = DerSetOf() - self.assertRaises(ValueError, der.decode, - b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff\xAA')) - -def get_tests(config={}): - from Crypto.SelfTest.st_common import list_test_cases - listTests = [] - listTests += list_test_cases(DerObjectTests) - listTests += list_test_cases(DerIntegerTests) - listTests += list_test_cases(DerSequenceTests) - listTests += list_test_cases(DerOctetStringTests) - listTests += list_test_cases(DerNullTests) - listTests += list_test_cases(DerObjectIdTests) - listTests += list_test_cases(DerBitStringTests) - listTests += list_test_cases(DerSetOfTests) - return listTests - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/london_tube.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/london_tube.py deleted file mode 100644 index 3a39e6aef5cd8534f76698a2379221b9fe368f0c..0000000000000000000000000000000000000000 --- 
a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/london_tube.py +++ /dev/null @@ -1,59 +0,0 @@ -""" -London Tube Lines -================= -This example shows the London tube lines against the background of the -borough boundaries. It is based on the vega-lite example at -https://vega.github.io/vega-lite/examples/geo_layer_line_london.html. -""" -# category: case studies -import altair as alt -from vega_datasets import data - -boroughs = alt.topo_feature(data.londonBoroughs.url, 'boroughs') -tubelines = alt.topo_feature(data.londonTubeLines.url, 'line') -centroids = data.londonCentroids.url - -background = alt.Chart(boroughs).mark_geoshape( - stroke='white', - strokeWidth=2 -).encode( - color=alt.value('#eee'), -).properties( - width=700, - height=500 -) - -labels = alt.Chart(centroids).mark_text().encode( - longitude='cx:Q', - latitude='cy:Q', - text='bLabel:N', - size=alt.value(8), - opacity=alt.value(0.6) -).transform_calculate( - "bLabel", "indexof (datum.name,' ') > 0 ? substring(datum.name,0,indexof(datum.name, ' ')) : datum.name" -) - -line_scale = alt.Scale(domain=["Bakerloo", "Central", "Circle", "District", "DLR", - "Hammersmith & City", "Jubilee", "Metropolitan", "Northern", - "Piccadilly", "Victoria", "Waterloo & City"], - range=["rgb(137,78,36)", "rgb(220,36,30)", "rgb(255,206,0)", - "rgb(1,114,41)", "rgb(0,175,173)", "rgb(215,153,175)", - "rgb(106,114,120)", "rgb(114,17,84)", "rgb(0,0,0)", - "rgb(0,24,168)", "rgb(0,160,226)", "rgb(106,187,170)"]) - -lines = alt.Chart(tubelines).mark_geoshape( - filled=False, - strokeWidth=2 -).encode( - alt.Color( - 'id:N', - legend=alt.Legend( - title=None, - orient='bottom-right', - offset=0 - ), - scale=line_scale - ) -) - -background + labels + lines diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/token_block_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/token_block_dataset.py deleted file mode 100644 index a414e7ef64193b4c9e285e357350c09663dd2d8f..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/token_block_dataset.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from fairseq.data import FairseqDataset, plasma_utils -from fairseq.data.indexed_dataset import best_fitting_int_dtype -from typing import Tuple - - -class TokenBlockDataset(FairseqDataset): - """Break a Dataset of tokens into blocks. - - Args: - dataset (~torch.utils.data.Dataset): dataset to break into blocks - sizes (List[int]): sentence lengths (required for 'complete' and 'eos') - block_size (int): maximum block size (ignored in 'eos' break mode) - break_mode (str, optional): Mode used for breaking tokens. Values can - be one of: - - 'none': break tokens into equally sized blocks (up to block_size) - - 'complete': break tokens into blocks (up to block_size) such that - blocks contains complete sentences, although block_size may be - exceeded if some sentences exceed block_size - - 'complete_doc': similar to 'complete' mode, but do not - cross document boundaries - - 'eos': each block contains one sentence (block_size is ignored) - include_targets (bool, optional): return next tokens as targets - (default: False). 
- document_sep_len (int, optional): document separator size (required for - 'complete_doc' break mode). Typically 1 if the sentences have eos - and 0 otherwise. - """ - - def __init__( - self, - dataset, - sizes, - block_size, - pad, - eos, - break_mode=None, - include_targets=False, - document_sep_len=1, - use_plasma_view=False, - split_path=None, - plasma_path=None, - ): - - super().__init__() - self.dataset = dataset - self.pad = pad - self.eos = eos - self.include_targets = include_targets - - assert len(dataset) > 0 - - assert len(dataset) == len(sizes) - _sizes, block_to_dataset_index, slice_indices = self._build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) - if use_plasma_view: - plasma_id = (block_size, document_sep_len, str(break_mode), len(dataset)) - self._slice_indices = plasma_utils.PlasmaView( - slice_indices, split_path, (plasma_id, 0), plasma_path=plasma_path - ) - self._sizes = plasma_utils.PlasmaView( - _sizes, split_path, (plasma_id, 1), plasma_path=plasma_path - ) - self._block_to_dataset_index = plasma_utils.PlasmaView( - block_to_dataset_index, - split_path, - (plasma_id, 2), - plasma_path=plasma_path, - ) - else: - self._slice_indices = plasma_utils.PlasmaArray(slice_indices) - self._sizes = plasma_utils.PlasmaArray(_sizes) - self._block_to_dataset_index = plasma_utils.PlasmaArray( - block_to_dataset_index - ) - - @staticmethod - def _build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) -> Tuple[np.ndarray]: - """Use token_block_utils_fast to build arrays for indexing into self.dataset""" - try: - from fairseq.data.token_block_utils_fast import ( - _get_slice_indices_fast, - _get_block_to_dataset_index_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: `pip install --editable .` " - "or `python setup.py build_ext --inplace`" - ) - - if isinstance(sizes, list): - sizes = np.array(sizes, dtype=np.int64) - else: - if torch.is_tensor(sizes): - sizes = sizes.numpy() - sizes = sizes.astype(np.int64) - - break_mode = break_mode if break_mode is not None else "none" - - # For "eos" break-mode, block_size is not required parameters. 
- if break_mode == "eos" and block_size is None: - block_size = 0 - - slice_indices = _get_slice_indices_fast( - sizes, str(break_mode), block_size, document_sep_len - ) - _sizes = slice_indices[:, 1] - slice_indices[:, 0] - - # build index mapping block indices to the underlying dataset indices - if break_mode == "eos": - # much faster version for eos break mode - block_to_dataset_index = np.stack( - [ - np.arange(len(sizes)), # starting index in dataset - np.zeros( - len(sizes), dtype=np.compat.long - ), # starting offset within starting index - np.arange(len(sizes)), # ending index in dataset - ], - 1, - ) - else: - block_to_dataset_index = _get_block_to_dataset_index_fast( - sizes, - slice_indices, - ) - size_dtype = np.uint16 if block_size < 65535 else np.uint32 - num_tokens = slice_indices[-1].max() - slice_indices_dtype = best_fitting_int_dtype(num_tokens) - slice_indices = slice_indices.astype(slice_indices_dtype) - _sizes = _sizes.astype(size_dtype) - block_to_dataset_index = block_to_dataset_index.astype(slice_indices_dtype) - return _sizes, block_to_dataset_index, slice_indices - - @property - def slice_indices(self): - return self._slice_indices.array - - @property - def sizes(self): - return self._sizes.array - - @property - def block_to_dataset_index(self): - return self._block_to_dataset_index.array - - def attr(self, attr: str, index: int): - start_ds_idx, _, _ = self.block_to_dataset_index[index] - return self.dataset.attr(attr, start_ds_idx) - - def __getitem__(self, index): - start_ds_idx, start_offset, end_ds_idx = self.block_to_dataset_index[index] - - buffer = torch.cat( - [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)] - ) - slice_s, slice_e = self.slice_indices[index] - length = slice_e - slice_s - s, e = start_offset, start_offset + length - item = buffer[s:e] - - if self.include_targets: - # *target* is the original sentence (=item) - # *source* is shifted right by 1 (maybe left-padded with eos) - # *past_target* is shifted right by 2 (left-padded as needed) - if s == 0: - source = torch.cat([item.new([self.eos]), buffer[0 : e - 1]]) - past_target = torch.cat( - [item.new([self.pad, self.eos]), buffer[0 : e - 2]] - ) - else: - source = buffer[s - 1 : e - 1] - if s == 1: - past_target = torch.cat([item.new([self.eos]), buffer[0 : e - 2]]) - else: - past_target = buffer[s - 2 : e - 2] - - return source, item, past_target - - return item - - def __len__(self): - return len(self.slice_indices) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch( - { - ds_idx - for index in indices - for start_ds_idx, _, end_ds_idx in [self.block_to_dataset_index[index]] - for ds_idx in range(start_ds_idx, end_ds_idx + 1) - } - ) diff --git a/spaces/aryadytm/paraphrase/src/utils.py b/spaces/aryadytm/paraphrase/src/utils.py deleted file mode 100644 index 400e28c9523361a39bd7645a78bf778b7b1f496b..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/paraphrase/src/utils.py +++ /dev/null @@ -1,16 +0,0 @@ -from easytextgen import EasyPrompt -from translate import Translator - -trans_to_id = Translator(to_lang="id") -trans_to_en = Translator(to_lang="en") -prompt = EasyPrompt.from_file("assets/paraphrase-v1.yml") - - -def paraphrase_english(english_text: str) -> str: - return prompt.get_output(english_text).output_text.strip() - - -def paraphrase_indonesian(indonesian_text: str) -> str: - eng_text = trans_to_en.translate(indonesian_text) - eng_paraphrased 
= paraphrase_english(eng_text) - return trans_to_id.translate(eng_paraphrased) \ No newline at end of file diff --git a/spaces/avatar2k/image-ocr-ex5-multi-lingual/app.py b/spaces/avatar2k/image-ocr-ex5-multi-lingual/app.py deleted file mode 100644 index 83ab99d0715b5c0033e0f452087543187147eaa6..0000000000000000000000000000000000000000 --- a/spaces/avatar2k/image-ocr-ex5-multi-lingual/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import pandas as pd -import PIL -from PIL import Image -from PIL import ImageDraw -import gradio as gr -import torch -import easyocr - -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/english.png', 'english.png') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/chinese.jpg', 'chinese.jpg') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/japanese.jpg', 'japanese.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/mwQFd7G.jpeg', 'Hindi.jpeg') - -def draw_boxes(image, bounds, color='yellow', width=2): - draw = ImageDraw.Draw(image) - for bound in bounds: - p0, p1, p2, p3 = bound[0] - draw.line([*p0, *p1, *p2, *p3, *p0], fill=color, width=width) - return image - -def inference(img, lang): - reader = easyocr.Reader(lang) - bounds = reader.readtext(img.name) - im = PIL.Image.open(img.name) - draw_boxes(im, bounds) - im.save('result.jpg') - return ['result.jpg', pd.DataFrame(bounds).iloc[: , 1:]] - -title = 'Image To Optical Character Recognition' -description = 'Multilingual OCR which works conveniently on all devices in multiple languages.' -article = "

          " -examples = [['english.png',['en']],['chinese.jpg',['ch_sim', 'en']],['japanese.jpg',['ja', 'en']],['Hindi.jpeg',['hi', 'en']]] -css = ".output_image, .input_image {height: 40rem !important; width: 100% !important;}" -choices = [ - "ch_sim", - "ch_tra", - "de", - "en", - "es", - "ja", - "hi", - "ru" -] -gr.Interface( - inference, - [gr.inputs.Image(type='file', label='Input'),gr.inputs.CheckboxGroup(choices, type="value", default=['en'], label='language')], - [gr.outputs.Image(type='file', label='Output'), gr.outputs.Dataframe(headers=['text', 'confidence'])], - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/awacke1/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/README.md b/spaces/awacke1/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/README.md deleted file mode 100644 index af09c36cdbd50507d1abc0027a68acbe95d91a16..0000000000000000000000000000000000000000 --- a/spaces/awacke1/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MediaPipe-Realtime-AI -emoji: 👁💻👁 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: AI-ZTH-03-23/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/awacke1/HTML5-Aframe-Augmented-Reality-Model-Viewer/style.css b/spaces/awacke1/HTML5-Aframe-Augmented-Reality-Model-Viewer/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-Aframe-Augmented-Reality-Model-Viewer/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DigitalGlitch.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DigitalGlitch.js deleted file mode 100644 index 0348e25d5dc8dc509f71d206e08feefd0a21c8a0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DigitalGlitch.js +++ /dev/null @@ -1,103 +0,0 @@ -/** - * @author felixturner / http://airtight.cc/ - * - * RGB Shift Shader - * Shifts red and blue channels from center in opposite directions - * Ported from http://kriss.cx/tom/2009/05/rgb-shift/ - * by Tom Butterworth / http://kriss.cx/tom/ - * - * amount: shift distance (1 is width of input) - * angle: shift angle in radians - */ - -THREE.DigitalGlitch = { - - uniforms: { - - "tDiffuse": { value: null },//diffuse texture - "tDisp": { value: null },//displacement texture for digital glitch squares - "byp": { value: 0 },//apply the glitch ? 
- "amount": { value: 0.08 }, - "angle": { value: 0.02 }, - "seed": { value: 0.02 }, - "seed_x": { value: 0.02 },//-1,1 - "seed_y": { value: 0.02 },//-1,1 - "distortion_x": { value: 0.5 }, - "distortion_y": { value: 0.6 }, - "col_s": { value: 0.05 } - }, - - vertexShader: [ - - "varying vec2 vUv;", - "void main() {", - "vUv = uv;", - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - "}" - ].join( "\n" ), - - fragmentShader: [ - "uniform int byp;",//should we apply the glitch ? - - "uniform sampler2D tDiffuse;", - "uniform sampler2D tDisp;", - - "uniform float amount;", - "uniform float angle;", - "uniform float seed;", - "uniform float seed_x;", - "uniform float seed_y;", - "uniform float distortion_x;", - "uniform float distortion_y;", - "uniform float col_s;", - - "varying vec2 vUv;", - - - "float rand(vec2 co){", - "return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);", - "}", - - "void main() {", - "if(byp<1) {", - "vec2 p = vUv;", - "float xs = floor(gl_FragCoord.x / 0.5);", - "float ys = floor(gl_FragCoord.y / 0.5);", - //based on staffantans glitch shader for unity https://github.com/staffantan/unityglitch - "vec4 normal = texture2D (tDisp, p*seed*seed);", - "if(p.ydistortion_x-col_s*seed) {", - "if(seed_x>0.){", - "p.y = 1. - (p.y + distortion_y);", - "}", - "else {", - "p.y = distortion_y;", - "}", - "}", - "if(p.xdistortion_y-col_s*seed) {", - "if(seed_y>0.){", - "p.x=distortion_x;", - "}", - "else {", - "p.x = 1. - (p.x + distortion_x);", - "}", - "}", - "p.x+=normal.x*seed_x*(seed/5.);", - "p.y+=normal.y*seed_y*(seed/5.);", - //base from RGB shift shader - "vec2 offset = amount * vec2( cos(angle), sin(angle));", - "vec4 cr = texture2D(tDiffuse, p + offset);", - "vec4 cga = texture2D(tDiffuse, p);", - "vec4 cb = texture2D(tDiffuse, p - offset);", - "gl_FragColor = vec4(cr.r, cga.g, cb.b, cga.a);", - //add noise - "vec4 snow = 200.*amount*vec4(rand(vec2(xs * seed,ys * seed*50.))*0.2);", - "gl_FragColor = gl_FragColor+ snow;", - "}", - "else {", - "gl_FragColor=texture2D (tDiffuse, vUv);", - "}", - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/configs/paths_config.py b/spaces/bankholdup/stylegan_petbreeder/e4e/configs/paths_config.py deleted file mode 100644 index 4604f6063b8125364a52a492de52fcc54004f373..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/e4e/configs/paths_config.py +++ /dev/null @@ -1,28 +0,0 @@ -dataset_paths = { - # Face Datasets (In the paper: FFHQ - train, CelebAHQ - test) - 'ffhq': '', - 'celeba_test': '', - - # Cars Dataset (In the paper: Stanford cars) - 'cars_train': '', - 'cars_test': '', - - # Horse Dataset (In the paper: LSUN Horse) - 'horse_train': '', - 'horse_test': '', - - # Church Dataset (In the paper: LSUN Church) - 'church_train': '', - 'church_test': '', - - # Cats Dataset (In the paper: LSUN Cat) - 'cats_train': '', - 'cats_test': '' -} - -model_paths = { - 'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt', - 'ir_se50': 'pretrained_models/model_ir_se50.pth', - 'shape_predictor': 'pretrained_models/shape_predictor_68_face_landmarks.dat', - 'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth' -} diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/__init__.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bingbing520/ChatGPT2/custom.css 
b/spaces/bingbing520/ChatGPT2/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT2/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef 
} /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/bioriAsaeru/text-to-voice/FBX Review Mobile And Desktop App 2009 64bit Keygen Xforce _BEST_.md b/spaces/bioriAsaeru/text-to-voice/FBX Review Mobile And Desktop App 2009 64bit Keygen Xforce _BEST_.md deleted file mode 100644 index 77e512046a8355e4e58c63bbce6fdc297aa2ca8e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/FBX Review Mobile And Desktop App 2009 64bit Keygen Xforce _BEST_.md +++ /dev/null @@ -1,6 +0,0 @@ -

-FBX Review mobile and desktop app 2009 64bit Keygen Xforce
-Download File ››› https://urloso.com/2uyO6x
- - aaccfb2cb3
-
-
-
          diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/dataset_mapper.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/dataset_mapper.py deleted file mode 100644 index 52b9bd4ce19d51e07f98aa9adf36c41f6ddc22af..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/dataset_mapper.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import copy -import logging -import numpy as np -from typing import List, Union -import torch - -import detectron2.data.detection_utils as utils -import detectron2.data.transforms as T -from detectron2.config import configurable - -from .detection_utils import annotations_to_instances, transform_instance_annotations - -__all__ = [ - "PointSupDatasetMapper", -] - - -class PointSupDatasetMapper: - """ - The callable currently does the following: - 1. Read the image from "file_name" - 2. Applies transforms to the image and annotations - 3. Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - # Extra data augmentation for point supervision - sample_points: int = 0, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - sample_points: subsample points at each iteration - """ - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.sample_points = sample_points - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - logger.info(f"Point Augmentations used in {mode}: sample {sample_points} points") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - raise ValueError("Crop augmentation not supported to point supervision.") - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "sample_points": cfg.INPUT.SAMPLE_POINTS, - } - - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - aug_input = T.AugInput(image) - transforms = self.augmentations(aug_input) - image = aug_input.image - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. 
- dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - dataset_dict.pop("annotations", None) - return dataset_dict - - if "annotations" in dataset_dict: - # Maps points from the closed interval [0, image_size - 1] on discrete - # image coordinates to the half-open interval [x1, x2) on continuous image - # coordinates. We use the continuous-discrete conversion from Heckbert - # 1990 ("What is the coordinate of a pixel?"): d = floor(c) and c = d + 0.5, - # where d is a discrete coordinate and c is a continuous coordinate. - for ann in dataset_dict["annotations"]: - point_coords_wrt_image = np.array(ann["point_coords"]).astype(np.float) - point_coords_wrt_image = point_coords_wrt_image + 0.5 - ann["point_coords"] = point_coords_wrt_image - - annos = [ - # also need to transform point coordinates - transform_instance_annotations( - obj, - transforms, - image_shape, - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = annotations_to_instances( - annos, - image_shape, - sample_points=self.sample_points, - ) - - dataset_dict["instances"] = utils.filter_empty_instances(instances) - return dataset_dict diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/benchmarks.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/benchmarks.py deleted file mode 100644 index d0f2a2529c5d8ed8b88c8bbb904487e274540cde..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/benchmarks.py +++ /dev/null @@ -1,148 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run YOLOv5 benchmarks on all supported export formats - -Format | `export.py --include` | Model ---- | --- | --- -PyTorch | - | yolov5s.pt -TorchScript | `torchscript` | yolov5s.torchscript -ONNX | `onnx` | yolov5s.onnx -OpenVINO | `openvino` | yolov5s_openvino_model/ -TensorRT | `engine` | yolov5s.engine -CoreML | `coreml` | yolov5s.mlmodel -TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ -TensorFlow GraphDef | `pb` | yolov5s.pb -TensorFlow Lite | `tflite` | yolov5s.tflite -TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite -TensorFlow.js | `tfjs` | yolov5s_web_model/ - -Requirements: - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU - $ pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com # TensorRT - -Usage: - $ python utils/benchmarks.py --weights yolov5s.pt --img 640 -""" - -import argparse -import sys -import time -from pathlib import Path - -import pandas as pd - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -# ROOT = ROOT.relative_to(Path.cwd()) # relative - -import export -import val -from utils import notebook_init -from utils.general import LOGGER, check_yaml, print_args -from utils.torch_utils import select_device - - -def run( - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=640, # inference size (pixels) - batch_size=1, # batch size - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - half=False, # use FP16 half-precision inference - test=False, # test exports only - pt_only=False, # test PyTorch only -): - y, t = [], time.time() - device = select_device(device) - for i, (name, f, suffix, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, gpu-capable) - try: - assert i != 9, 'Edge TPU not supported' - assert i != 10, 'TF.js not supported' - if device.type != 'cpu': - assert gpu, f'{name} inference not supported on GPU' - - # Export - if f == '-': - w = weights # PyTorch format - else: - w = export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # all others - assert suffix in str(w), 'export failed' - - # Validate - result = val.run(data, w, batch_size, imgsz, plots=False, device=device, task='benchmark', half=half) - metrics = result[0] # metrics (mp, mr, map50, map, *losses(box, obj, cls)) - speeds = result[2] # times (preprocess, inference, postprocess) - y.append([name, round(metrics[3], 4), round(speeds[1], 2)]) # mAP, t_inference - except Exception as e: - LOGGER.warning(f'WARNING: Benchmark failure for {name}: {e}') - y.append([name, None, None]) # mAP, t_inference - if pt_only and i == 0: - break # break after PyTorch - - # Print results - LOGGER.info('\n') - parse_opt() - notebook_init() # print system info - py = pd.DataFrame(y, columns=['Format', 'mAP@0.5:0.95', 'Inference time (ms)'] if map else ['Format', 'Export', '']) - LOGGER.info(f'\nBenchmarks complete ({time.time() - t:.2f}s)') - LOGGER.info(str(py if map else py.iloc[:, :2])) - return py - - -def test( - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=640, # inference size (pixels) - batch_size=1, # batch size - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu - half=False, # use FP16 half-precision inference - test=False, # test exports only - pt_only=False, # test PyTorch only -): - y, t = [], time.time() - device = select_device(device) - for i, (name, f, suffix, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, gpu-capable) - try: - w = weights if f == '-' else \ - export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # weights - assert suffix in str(w), 'export failed' - y.append([name, True]) - except Exception: - y.append([name, False]) # mAP, t_inference - - # Print results - LOGGER.info('\n') - parse_opt() - notebook_init() # print system info - py = pd.DataFrame(y, columns=['Format', 'Export']) - LOGGER.info(f'\nExports complete ({time.time() - t:.2f}s)') - LOGGER.info(str(py)) - return py - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--test', action='store_true', help='test exports only') - parser.add_argument('--pt-only', action='store_true', help='test PyTorch only') - opt = parser.parse_args() - opt.data = check_yaml(opt.data) # check YAML - print_args(vars(opt)) - return opt - - -def main(opt): - test(**vars(opt)) if opt.test else run(**vars(opt)) - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/camenduru-com/webui-api/Dockerfile b/spaces/camenduru-com/webui-api/Dockerfile deleted file mode 100644 index fbd7c8a1cb9caec55b22a8cd2ea724cbefadd67d..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/webui-api/Dockerfile +++ /dev/null @@ -1,49 +0,0 @@ -# Dockerfile.Lite - -# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.7.1/ubuntu2204/devel/cudnn8/Dockerfile -# FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 -# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.7.1/ubuntu2204/base/Dockerfile -FROM nvidia/cuda:11.7.1-base-ubuntu22.04 -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update -y && apt-get upgrade -y && apt-get install -y libgl1 libglib2.0-0 wget git git-lfs python3-pip python-is-python3 && rm -rf /var/lib/apt/lists/* - -RUN adduser --disabled-password --gecos '' user -RUN mkdir /content && chown -R user:user /content -WORKDIR /content -USER user - -RUN pip3 install --upgrade pip -RUN pip install xformers==0.0.16 triton==2.0.0 -U -RUN pip install numexpr - -RUN git clone -b v2.2 https://github.com/camenduru/stable-diffusion-webui -RUN cd stable-diffusion-webui && git reset --hard - -RUN sed -i -e 's/ start()/ #start()/g' /content/stable-diffusion-webui/launch.py -# RUN sed -i 's/^\( \{4\}\)start\(\)/\1#start\(\)/g' /content/stable-diffusion-webui/launch.py -# RUN sed -i -e 's/ \bstart\(\)\b/ \#start\(\)\b/g' /content/stable-diffusion-webui/launch.py - -RUN sed -i -e '/(txt2img_interface, \"txt2img\", \"txt2img\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/(img2img_interface, \"img2img\", \"img2img\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/(extras_interface, \"Extras\", \"extras\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/(pnginfo_interface, \"PNG Info\", \"pnginfo\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /content/stable-diffusion-webui/modules/ui.py -RUN sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /content/stable-diffusion-webui/modules/ui.py - -RUN cd stable-diffusion-webui && python launch.py --no-half --use-cpu all --skip-torch-cuda-test - -COPY --chown=user config.json /content/config.json -COPY --chown=user ui-config.json /content/ui-config.json - -ADD --chown=user https://huggingface.co/ckpt/sd15/resolve/main/v1-5-pruned-emaonly.ckpt /content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt - -# EXPOSE 7860 - -# CMD python -m http.server 7860 -# /bin/sh: 1: Syntax error: "(" unexpected -# --api-auth={os.getenv('API_AUTH')} -# --nowebui -CMD cd /content/stable-diffusion-webui && python webui.py --xformers --listen 
--enable-insecure-extension-access --gradio-queue --api --cors-allow-origins=* --ui-config-file /content/ui-config.json --ui-settings-file /content/config.json --api-auth=$API_AUTH \ No newline at end of file diff --git a/spaces/camilacorreamelo/medicalDetection/app.py b/spaces/camilacorreamelo/medicalDetection/app.py deleted file mode 100644 index 64ad275ceab152fad407661cfdd187c26dd6bb59..0000000000000000000000000000000000000000 --- a/spaces/camilacorreamelo/medicalDetection/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import gradio as gr -from huggingface_hub import hf_hub_download -from PIL import Image - -REPO_ID = "camilacorreamelo/medicalinstruments" -FILENAME = "best (5).pt" - -yolov5_weights = hf_hub_download(repo_id=REPO_ID, filename=FILENAME) - -model = torch.hub.load('ultralytics/yolov5', 'custom', path=yolov5_weights, force_reload=True) # local repo - -def object_detection(im, size=640): - results = model(im) # inference - #results.print() # print results to screen - #results.show() # display results - #results.save() # save as results1.jpg, results2.jpg... etc. - results.render() # updates results.imgs with boxes and labels - return Image.fromarray(results.ims[0]) - -title = "Surgery Instruments Detection" -description = """This model helps to detect surgery instruments. It may help to medical shcool classes -""" - -image = gr.inputs.Image(shape=(640, 640), image_mode="RGB", source="upload", label="Imagem", optional=False) -outputs = gr.outputs.Image(type="pil", label="Output Image") - -gr.Interface( - fn=object_detection, - inputs=image, - outputs=outputs, - title=title, - description=description, - examples=[["example/01.jpg"], ["example/02.jpg"], - ["example/03.jpg"], ["example/04.jpg"], ["example/05.jpg"], - ["example/06.jpg"]]).launch() \ No newline at end of file diff --git a/spaces/candlend/vits-hoshimi/README.md b/spaces/candlend/vits-hoshimi/README.md deleted file mode 100644 index 14d3c700138065234bfa7b13eac5c2fae6011148..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vits Hoshimi -emoji: 🌖 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/candlend/vits-hoshimi/vits/mel_processing.py b/spaces/candlend/vits-hoshimi/vits/mel_processing.py deleted file mode 100644 index c29cfe8d79860f7fc6b7a95f6267ce94339885ad..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/vits/mel_processing.py +++ /dev/null @@ -1,116 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def 
spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - # spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - # center=center, pad_mode='reflect', normalized=False, onesided=True) - with torch.autocast("cuda", enabled=False): - y = y.float() - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/captainChan/CaptainChan/modules/attention.py b/spaces/captainChan/CaptainChan/modules/attention.py deleted file mode 100644 index 6b70138d1bfc3205461df4a10d377a89e4f9ceea..0000000000000000000000000000000000000000 --- a/spaces/captainChan/CaptainChan/modules/attention.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -import torch.nn as nn -from .transformer import PositionalEncoding - -class Attention(nn.Module): - def __init__(self, in_channels=512, 
max_length=25, n_feature=256): - super().__init__() - self.max_length = max_length - - self.f0_embedding = nn.Embedding(max_length, in_channels) - self.w0 = nn.Linear(max_length, n_feature) - self.wv = nn.Linear(in_channels, in_channels) - self.we = nn.Linear(in_channels, max_length) - - self.active = nn.Tanh() - self.softmax = nn.Softmax(dim=2) - - def forward(self, enc_output): - enc_output = enc_output.permute(0, 2, 3, 1).flatten(1, 2) - reading_order = torch.arange(self.max_length, dtype=torch.long, device=enc_output.device) - reading_order = reading_order.unsqueeze(0).expand(enc_output.size(0), -1) # (S,) -> (B, S) - reading_order_embed = self.f0_embedding(reading_order) # b,25,512 - - t = self.w0(reading_order_embed.permute(0, 2, 1)) # b,512,256 - t = self.active(t.permute(0, 2, 1) + self.wv(enc_output)) # b,256,512 - - attn = self.we(t) # b,256,25 - attn = self.softmax(attn.permute(0, 2, 1)) # b,25,256 - g_output = torch.bmm(attn, enc_output) # b,25,512 - return g_output, attn.view(*attn.shape[:2], 8, 32) - - -def encoder_layer(in_c, out_c, k=3, s=2, p=1): - return nn.Sequential(nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - -def decoder_layer(in_c, out_c, k=3, s=1, p=1, mode='nearest', scale_factor=None, size=None): - align_corners = None if mode=='nearest' else True - return nn.Sequential(nn.Upsample(size=size, scale_factor=scale_factor, - mode=mode, align_corners=align_corners), - nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - - -class PositionAttention(nn.Module): - def __init__(self, max_length, in_channels=512, num_channels=64, - h=8, w=32, mode='nearest', **kwargs): - super().__init__() - self.max_length = max_length - self.k_encoder = nn.Sequential( - encoder_layer(in_channels, num_channels, s=(1, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)) - ) - self.k_decoder = nn.Sequential( - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, in_channels, size=(h, w), mode=mode) - ) - - self.pos_encoder = PositionalEncoding(in_channels, dropout=0, max_len=max_length) - self.project = nn.Linear(in_channels, in_channels) - - def forward(self, x): - N, E, H, W = x.size() - k, v = x, x # (N, E, H, W) - - # calculate key vector - features = [] - for i in range(0, len(self.k_encoder)): - k = self.k_encoder[i](k) - features.append(k) - for i in range(0, len(self.k_decoder) - 1): - k = self.k_decoder[i](k) - k = k + features[len(self.k_decoder) - 2 - i] - k = self.k_decoder[-1](k) - - # calculate query vector - # TODO q=f(q,k) - zeros = x.new_zeros((self.max_length, N, E)) # (T, N, E) - q = self.pos_encoder(zeros) # (T, N, E) - q = q.permute(1, 0, 2) # (N, T, E) - q = self.project(q) # (N, T, E) - - # calculate attention - attn_scores = torch.bmm(q, k.flatten(2, 3)) # (N, T, (H*W)) - attn_scores = attn_scores / (E ** 0.5) - attn_scores = torch.softmax(attn_scores, dim=-1) - - v = v.permute(0, 2, 3, 1).view(N, -1, E) # (N, (H*W), E) - attn_vecs = torch.bmm(attn_scores, v) # (N, T, E) - - return attn_vecs, attn_scores.view(N, -1, H, W) \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/tests/test_chart_based_annotations_accumulator.py 
b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/tests/test_chart_based_annotations_accumulator.py deleted file mode 100644 index a1c4f8565a3c55b79b6ed96b03635e6c2932958d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/tests/test_chart_based_annotations_accumulator.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -import torch - -from detectron2.structures import Boxes, BoxMode, Instances - -from densepose.modeling.losses.utils import ChartBasedAnnotationsAccumulator -from densepose.structures import DensePoseDataRelative, DensePoseList - -image_shape = (100, 100) -instances = Instances(image_shape) -n_instances = 3 -instances.proposal_boxes = Boxes(torch.rand(n_instances, 4)) -instances.gt_boxes = Boxes(torch.rand(n_instances, 4)) - - -# instances.gt_densepose = None cannot happen because instances attributes need a length -class TestChartBasedAnnotationsAccumulator(unittest.TestCase): - def test_chart_based_annotations_accumulator_no_gt_densepose(self): - accumulator = ChartBasedAnnotationsAccumulator() - accumulator.accumulate(instances) - expected_values = {"nxt_bbox_with_dp_index": 0, "nxt_bbox_index": n_instances} - for key in accumulator.__dict__: - self.assertEqual(getattr(accumulator, key), expected_values.get(key, [])) - - def test_chart_based_annotations_accumulator_gt_densepose_none(self): - instances.gt_densepose = [None] * n_instances - accumulator = ChartBasedAnnotationsAccumulator() - accumulator.accumulate(instances) - expected_values = {"nxt_bbox_with_dp_index": 0, "nxt_bbox_index": n_instances} - for key in accumulator.__dict__: - self.assertEqual(getattr(accumulator, key), expected_values.get(key, [])) - - def test_chart_based_annotations_accumulator_gt_densepose(self): - data_relative_keys = [ - DensePoseDataRelative.X_KEY, - DensePoseDataRelative.Y_KEY, - DensePoseDataRelative.I_KEY, - DensePoseDataRelative.U_KEY, - DensePoseDataRelative.V_KEY, - DensePoseDataRelative.S_KEY, - ] - annotations = [DensePoseDataRelative({k: [0] for k in data_relative_keys})] * n_instances - instances.gt_densepose = DensePoseList(annotations, instances.gt_boxes, image_shape) - accumulator = ChartBasedAnnotationsAccumulator() - accumulator.accumulate(instances) - bbox_xywh_est = BoxMode.convert( - instances.proposal_boxes.tensor.clone(), BoxMode.XYXY_ABS, BoxMode.XYWH_ABS - ) - bbox_xywh_gt = BoxMode.convert( - instances.gt_boxes.tensor.clone(), BoxMode.XYXY_ABS, BoxMode.XYWH_ABS - ) - expected_values = { - "s_gt": [ - torch.zeros((3, DensePoseDataRelative.MASK_SIZE, DensePoseDataRelative.MASK_SIZE)) - ] - * n_instances, - "bbox_xywh_est": bbox_xywh_est.split(1), - "bbox_xywh_gt": bbox_xywh_gt.split(1), - "point_bbox_with_dp_indices": [torch.tensor([i]) for i in range(n_instances)], - "point_bbox_indices": [torch.tensor([i]) for i in range(n_instances)], - "bbox_indices": list(range(n_instances)), - "nxt_bbox_with_dp_index": n_instances, - "nxt_bbox_index": n_instances, - } - default_value = [torch.tensor([0])] * 3 - for key in accumulator.__dict__: - to_test = getattr(accumulator, key) - gt_value = expected_values.get(key, default_value) - if key in ["nxt_bbox_with_dp_index", "nxt_bbox_index"]: - self.assertEqual(to_test, gt_value) - elif key == "bbox_indices": - self.assertListEqual(to_test, gt_value) - else: - self.assertTrue(torch.allclose(torch.stack(to_test), torch.stack(gt_value))) diff --git 
a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/panoptic_deeplab/post_processing.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/panoptic_deeplab/post_processing.py deleted file mode 100644 index 194724eb414db073bde87bf482e5c647fa23cde7..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/panoptic_deeplab/post_processing.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Reference: https://github.com/bowenc0221/panoptic-deeplab/blob/master/segmentation/model/post_processing/instance_post_processing.py # noqa - -from collections import Counter -import torch -import torch.nn.functional as F - - -def find_instance_center(center_heatmap, threshold=0.1, nms_kernel=3, top_k=None): - """ - Find the center points from the center heatmap. - Args: - center_heatmap: A Tensor of shape [1, H, W] of raw center heatmap output. - threshold: A float, threshold applied to center heatmap score. - nms_kernel: An integer, NMS max pooling kernel size. - top_k: An integer, top k centers to keep. - Returns: - A Tensor of shape [K, 2] where K is the number of center points. The - order of second dim is (y, x). - """ - # Thresholding, setting values below threshold to -1. - center_heatmap = F.threshold(center_heatmap, threshold, -1) - - # NMS - nms_padding = (nms_kernel - 1) // 2 - center_heatmap_max_pooled = F.max_pool2d( - center_heatmap, kernel_size=nms_kernel, stride=1, padding=nms_padding - ) - center_heatmap[center_heatmap != center_heatmap_max_pooled] = -1 - - # Squeeze first two dimensions. - center_heatmap = center_heatmap.squeeze() - assert len(center_heatmap.size()) == 2, "Something is wrong with center heatmap dimension." - - # Find non-zero elements. - if top_k is None: - return torch.nonzero(center_heatmap > 0) - else: - # find top k centers. - top_k_scores, _ = torch.topk(torch.flatten(center_heatmap), top_k) - return torch.nonzero(center_heatmap > top_k_scores[-1].clamp_(min=0)) - - -def group_pixels(center_points, offsets): - """ - Gives each pixel in the image an instance id. - Args: - center_points: A Tensor of shape [K, 2] where K is the number of center points. - The order of second dim is (y, x). - offsets: A Tensor of shape [2, H, W] of raw offset output. The order of - second dim is (offset_y, offset_x). - Returns: - A Tensor of shape [1, H, W] with values in range [1, K], which represents - the center this pixel belongs to. - """ - height, width = offsets.size()[1:] - - # Generates a coordinate map, where each location is the coordinate of - # that location. - y_coord, x_coord = torch.meshgrid( - torch.arange(height, dtype=offsets.dtype, device=offsets.device), - torch.arange(width, dtype=offsets.dtype, device=offsets.device), - ) - coord = torch.cat((y_coord.unsqueeze(0), x_coord.unsqueeze(0)), dim=0) - - center_loc = coord + offsets - center_loc = center_loc.flatten(1).T.unsqueeze_(0) # [1, H*W, 2] - center_points = center_points.unsqueeze(1) # [K, 1, 2] - - # Distance: [K, H*W]. - distance = torch.norm(center_points - center_loc, dim=-1) - - # Finds center with minimum distance at each location, offset by 1, to - # reserve id=0 for stuff. 
- instance_id = torch.argmin(distance, dim=0).reshape((1, height, width)) + 1 - return instance_id - - -def get_instance_segmentation( - sem_seg, center_heatmap, offsets, thing_seg, thing_ids, threshold=0.1, nms_kernel=3, top_k=None -): - """ - Post-processing for instance segmentation, gets class agnostic instance id. - Args: - sem_seg: A Tensor of shape [1, H, W], predicted semantic label. - center_heatmap: A Tensor of shape [1, H, W] of raw center heatmap output. - offsets: A Tensor of shape [2, H, W] of raw offset output. The order of - second dim is (offset_y, offset_x). - thing_seg: A Tensor of shape [1, H, W], predicted foreground mask, - if not provided, inference from semantic prediction. - thing_ids: A set of ids from contiguous category ids belonging - to thing categories. - threshold: A float, threshold applied to center heatmap score. - nms_kernel: An integer, NMS max pooling kernel size. - top_k: An integer, top k centers to keep. - Returns: - A Tensor of shape [1, H, W] with value 0 represent stuff (not instance) - and other positive values represent different instances. - A Tensor of shape [1, K, 2] where K is the number of center points. - The order of second dim is (y, x). - """ - center_points = find_instance_center( - center_heatmap, threshold=threshold, nms_kernel=nms_kernel, top_k=top_k - ) - if center_points.size(0) == 0: - return torch.zeros_like(sem_seg), center_points.unsqueeze(0) - ins_seg = group_pixels(center_points, offsets) - return thing_seg * ins_seg, center_points.unsqueeze(0) - - -def merge_semantic_and_instance( - sem_seg, ins_seg, semantic_thing_seg, label_divisor, thing_ids, stuff_area, void_label -): - """ - Post-processing for panoptic segmentation, by merging semantic segmentation - label and class agnostic instance segmentation label. - Args: - sem_seg: A Tensor of shape [1, H, W], predicted category id for each pixel. - ins_seg: A Tensor of shape [1, H, W], predicted instance id for each pixel. - semantic_thing_seg: A Tensor of shape [1, H, W], predicted foreground mask. - label_divisor: An integer, used to convert panoptic id = - semantic id * label_divisor + instance_id. - thing_ids: Set, a set of ids from contiguous category ids belonging - to thing categories. - stuff_area: An integer, remove stuff whose area is less tan stuff_area. - void_label: An integer, indicates the region has no confident prediction. - Returns: - A Tensor of shape [1, H, W]. - """ - # In case thing mask does not align with semantic prediction. - pan_seg = torch.zeros_like(sem_seg) + void_label - is_thing = (ins_seg > 0) & (semantic_thing_seg > 0) - - # Keep track of instance id for each class. - class_id_tracker = Counter() - - # Paste thing by majority voting. - instance_ids = torch.unique(ins_seg) - for ins_id in instance_ids: - if ins_id == 0: - continue - # Make sure only do majority voting within `semantic_thing_seg`. - thing_mask = (ins_seg == ins_id) & is_thing - if torch.nonzero(thing_mask).size(0) == 0: - continue - class_id, _ = torch.mode(sem_seg[thing_mask].view(-1)) - class_id_tracker[class_id.item()] += 1 - new_ins_id = class_id_tracker[class_id.item()] - pan_seg[thing_mask] = class_id * label_divisor + new_ins_id - - # Paste stuff to unoccupied area. - class_ids = torch.unique(sem_seg) - for class_id in class_ids: - if class_id.item() in thing_ids: - # thing class - continue - # Calculate stuff area. 
- stuff_mask = (sem_seg == class_id) & (ins_seg == 0) - if stuff_mask.sum().item() >= stuff_area: - pan_seg[stuff_mask] = class_id * label_divisor - - return pan_seg - - -def get_panoptic_segmentation( - sem_seg, - center_heatmap, - offsets, - thing_ids, - label_divisor, - stuff_area, - void_label, - threshold=0.1, - nms_kernel=7, - top_k=200, - foreground_mask=None, -): - """ - Post-processing for panoptic segmentation. - Args: - sem_seg: A Tensor of shape [1, H, W] of predicted semantic label. - center_heatmap: A Tensor of shape [1, H, W] of raw center heatmap output. - offsets: A Tensor of shape [2, H, W] of raw offset output. The order of - second dim is (offset_y, offset_x). - thing_ids: A set of ids from contiguous category ids belonging - to thing categories. - label_divisor: An integer, used to convert panoptic id = - semantic id * label_divisor + instance_id. - stuff_area: An integer, remove stuff whose area is less tan stuff_area. - void_label: An integer, indicates the region has no confident prediction. - threshold: A float, threshold applied to center heatmap score. - nms_kernel: An integer, NMS max pooling kernel size. - top_k: An integer, top k centers to keep. - foreground_mask: Optional, A Tensor of shape [1, H, W] of predicted - binary foreground mask. If not provided, it will be generated from - sem_seg. - Returns: - A Tensor of shape [1, H, W], int64. - """ - if sem_seg.dim() != 3 and sem_seg.size(0) != 1: - raise ValueError("Semantic prediction with un-supported shape: {}.".format(sem_seg.size())) - if center_heatmap.dim() != 3: - raise ValueError( - "Center prediction with un-supported dimension: {}.".format(center_heatmap.dim()) - ) - if offsets.dim() != 3: - raise ValueError("Offset prediction with un-supported dimension: {}.".format(offsets.dim())) - if foreground_mask is not None: - if foreground_mask.dim() != 3 and foreground_mask.size(0) != 1: - raise ValueError( - "Foreground prediction with un-supported shape: {}.".format(sem_seg.size()) - ) - thing_seg = foreground_mask - else: - # inference from semantic segmentation - thing_seg = torch.zeros_like(sem_seg) - for thing_class in list(thing_ids): - thing_seg[sem_seg == thing_class] = 1 - - instance, center = get_instance_segmentation( - sem_seg, - center_heatmap, - offsets, - thing_seg, - thing_ids, - threshold=threshold, - nms_kernel=nms_kernel, - top_k=top_k, - ) - panoptic = merge_semantic_and_instance( - sem_seg, instance, thing_seg, label_divisor, thing_ids, stuff_area, void_label - ) - - return panoptic, center diff --git a/spaces/cfwef/gpt/README.md b/spaces/cfwef/gpt/README.md deleted file mode 100644 index 01bac90e809880f1ae2f10527edaede5a0535b51..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/README.md +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: ChatImprovement -emoji: 😻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -duplicated_from: qingxu98/gpt-academic ---- - - -# ChatGPT 学术优化 - -**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests(dev分支)** - -If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request (to `dev` branch). 
-
-```
-The code borrows designs from many other excellent projects, mainly including:
-
-# Reference project 1: the way ChuanhuChatGPT reads the OpenAI json, records the history of queries, and uses the gradio queue
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Reference project 2: the formula handling in mdtex2html
-https://github.com/polarwinkel/mdtex2html
-
-The project uses OpenAI's gpt-3.5-turbo model; hopefully gpt-4 will lower its entry barrier soon 😂
-```
-
-> **Note**
->
-> 1. Please note that only the function plugins (buttons) marked in "red" support reading files. Plugin support for pdf/word files is still being improved step by step and needs more help from developers.
->
-> 2. The function of every file in this project is described in detail in the self-generated analysis [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the versions iterate, you can also click the relevant function plugin at any time to call GPT and regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
->
-> 3. If you are not used to the partly Chinese-named functions, comments, or interface, you can click the relevant function plugin at any time to have ChatGPT generate a purely English version of the project source code.
-
-
-Feature | Description
--- | ---
-One-click polishing | Supports one-click polishing and one-click checking of grammar errors in a paper
-One-click Chinese-English translation | One-click translation between Chinese and English
-One-click code explanation | Displays and explains code correctly
-Custom shortcut keys | Supports user-defined shortcut keys
-Proxy server configuration | Supports configuring a proxy server
-Modular design | Supports custom higher-order experimental features and [function plugins]; plugins support [hot reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-Self-analysis of the program | [Function plugin] One click to understand the source code of this project
-Project analysis | [Function plugin] One click to analyze the tree of another Python/C/C++/Java project
-Paper reading | [Function plugin] One click to read the full text of a latex paper and generate an abstract
-Batch comment generation | [Function plugin] One click to generate function comments in batches
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
-Arxiv assistant | [Function plugin] Enter the url of an arxiv article to translate the abstract and download the PDF in one click
-Formula display | Shows both the tex form and the rendered form of formulas at the same time
-Image display | Can display images in markdown
-Multithreaded function plugin support | Supports calling chatgpt in multiple threads to process large amounts of text or whole programs in one click
-Markdown tables in GPT output | Can output markdown tables supported by GPT
-…… | ……
-
-
-- New UI
-
-
-- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, no more copy-pasting from the clipboard
-
-
-- Polishing / error correction
-
-
-- Markdown tables in GPT output are supported
-
-
-- If the output contains formulas, they are shown in both their tex form and their rendered form, which makes them easy to copy and read
-
-
-- Too lazy to look at the project code? Just feed the whole project to chatgpt
-
-
-## Running directly (Windows, Linux or MacOS)
-
-### 1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-### 2. Configure the API_KEY and proxy settings
-
-In `config.py`, configure the overseas proxy and the OpenAI API KEY, as explained below
-```
-1. If you are in mainland China, you need an overseas proxy to use the OpenAI API smoothly; read config.py carefully for how to set this up (1. set USE_PROXY to True; 2. modify the proxies entry as described there).
-2. Configure the OpenAI API KEY. You need to register on the OpenAI website and obtain an API KEY; once you have it, just set it in the config.py file.
-3. Issues related to proxy networks (network timeouts, proxy not working) are collected in https://github.com/binary-husky/chatgpt_academic/issues/1
-```
-(P.S. When the program runs, it first checks whether a private configuration file named `config_private.py` exists and, if so, uses its values to override the options of the same name in `config.py`. So if you understand this configuration-reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information safer.)
-
-
-### 3. Install the dependencies
-```sh
-# (Option 1) Recommended
-python -m pip install -r requirements.txt
-
-# (Option 2) If you use anaconda, the steps are similar:
-# (Option 2.1) conda create -n gptac_venv python=3.11
-# (Option 2.2) conda activate gptac_venv
-# (Option 2.3) python -m pip install -r requirements.txt
-
-# Note: use the official pip source or the Aliyun pip source; other pip mirrors (such as the Tsinghua mirror) may cause problems. To switch sources temporarily:
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-### 4. Run
-```sh
-python main.py
-```
-
-### 5. Test the experimental features
-```
-- Test C++ project header analysis
-    In the input area, enter `./crazy_functions/test_project/cpp/libJPG`, then click "[实验] 解析整个C++项目(input输入项目根路径)"
-- Test writing an abstract for a Latex project
-    In the input area, enter `./crazy_functions/test_project/latex/attention`, then click "[实验] 读tex论文写摘要(input输入项目根路径)"
-- Test Python project analysis
-    In the input area, enter `./crazy_functions/test_project/python/dqn`, then click "[实验] 解析整个py项目(input输入项目根路径)"
-- Test self code interpretation
-    Click "[实验] 请解析并解构此项目本身"
-- Test the experimental function template (it asks gpt what happened in history on this day); you can use this function as a template to implement more complex features
-    Click "[实验] 实验功能函数模板"
-```
-
-## Using docker (Linux)
-
-``` sh
-# Download the project
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# Configure the overseas proxy and the OpenAI API KEY
-Edit config.py with any text editor
-# Install
-docker build -t gpt-academic .
-# Run
-docker run --rm -it --net=host gpt-academic
-
-# Test the experimental features
-## Test self code interpretation
-Click "[实验] 请解析并解构此项目本身"
-## Test the experimental function template (it asks gpt what happened in history on this day); you can use this function as a template to implement more complex features
-Click "[实验] 实验功能函数模板"
-## (Note that when running inside docker, you need to pay extra attention to the program's file access permissions)
-## Test C++ project header analysis
-In the input area, enter ./crazy_functions/test_project/cpp/libJPG , then click "[实验] 解析整个C++项目(input输入项目根路径)"
-## Test writing an abstract for a Latex project
-In the input area, enter ./crazy_functions/test_project/latex/attention , then click "[实验] 读tex论文写摘要(input输入项目根路径)"
-## Test Python project analysis
-In the input area, enter ./crazy_functions/test_project/python/dqn , then click "[实验] 解析整个py项目(input输入项目根路径)"
-
-```
-
-## Other deployment options
-- Using WSL2 (Windows Subsystem for Linux)
-See [deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-- Remote deployment behind nginx
-See [deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC)
-
-
-## Defining new convenience buttons (custom academic shortcuts)
-Open functional.py, add an entry as shown below, then restart the program. (If the button has already been added successfully and is visible, the prefix and suffix both support hot modification and take effect without restarting the program.)
-For example
-```
-"超级英译中": {
-
-    # Prefix: added in front of your input. For example, used to describe your request, such as translating, explaining code, polishing, and so on
-    "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
-
-    # Suffix: added after your input. For example, combined with the prefix it can wrap your input in quotation marks.
-    "Suffix": "",
-
-},
-```
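To make the Prefix/Suffix mechanism above concrete, here is a minimal Python sketch of how such an entry could be applied to the text in the input box before it is sent to the model. The `functional` dict below simply mirrors the entry shown above; the `apply_shortcut` helper is a hypothetical illustration, not a function from this repository.

```python
# Minimal illustration of the Prefix/Suffix mechanism described above.
# `apply_shortcut` is a hypothetical helper, not part of the actual project code.
functional = {
    "超级英译中": {
        "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
        "Suffix": "",
    },
}

def apply_shortcut(name: str, user_input: str) -> str:
    """Wrap the raw input with the Prefix and Suffix of the chosen button."""
    entry = functional[name]
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

# The prompt that would actually be submitted when the button is clicked:
prompt = apply_shortcut("超级英译中", "Transformers use self-attention.")
print(prompt)
```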
          - -
-
-If you come up with better academic shortcut keys, feel free to open an issue or a pull request!
-
-## Configuring a proxy
-### Method 1: the usual way
-In ```config.py```, set the port to match your proxy software
-
          - - -
-
-Once the configuration is done, you can test whether the proxy works with the following command; if everything is normal, the code below will print the location of your proxy server:
-```
-python check_proxy.py
-```
-### Method 2: a tutorial for complete beginners
-[Tutorial for complete beginners](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
-## Compatibility tests
-
-### Image display:
-
          - -
          - - -### 如果一个程序能够读懂并剖析自己: - -
          - -
          - -
          - -
          - -### 其他任意Python/Cpp项目剖析: -
          - -
          - -
          - -
          - -### Latex论文一键阅读理解与摘要生成 -
          - -
          - -### 自动报告生成 -
          - - - -
          - -### 模块化功能设计 -
          - - -
          - -## Todo: - -- (Top Priority) 调用另一个开源项目text-generation-webui的web接口,使用其他llm模型 -- 总结大工程源代码时,文本过长、token溢出的问题(目前的方法是直接二分丢弃处理溢出,过于粗暴,有效信息大量丢失) - - diff --git a/spaces/chasemcdo/hf_localai/pkg/utils/untar.go b/spaces/chasemcdo/hf_localai/pkg/utils/untar.go deleted file mode 100644 index 782b2d172e55976ec15be2bdd8b54b445b541d74..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/pkg/utils/untar.go +++ /dev/null @@ -1,56 +0,0 @@ -package utils - -import ( - "fmt" - - "github.com/mholt/archiver/v3" -) - -func IsArchive(file string) bool { - uaIface, err := archiver.ByExtension(file) - if err != nil { - return false - } - - _, ok := uaIface.(archiver.Unarchiver) - return ok -} - -func ExtractArchive(archive, dst string) error { - uaIface, err := archiver.ByExtension(archive) - if err != nil { - return err - } - - un, ok := uaIface.(archiver.Unarchiver) - if !ok { - return fmt.Errorf("format specified by source filename is not an archive format: %s (%T)", archive, uaIface) - } - - mytar := &archiver.Tar{ - OverwriteExisting: true, - MkdirAll: true, - ImplicitTopLevelFolder: false, - ContinueOnError: true, - } - - switch v := uaIface.(type) { - case *archiver.Tar: - uaIface = mytar - case *archiver.TarBrotli: - v.Tar = mytar - case *archiver.TarBz2: - v.Tar = mytar - case *archiver.TarGz: - v.Tar = mytar - case *archiver.TarLz4: - v.Tar = mytar - case *archiver.TarSz: - v.Tar = mytar - case *archiver.TarXz: - v.Tar = mytar - case *archiver.TarZstd: - v.Tar = mytar - } - return un.Unarchive(archive, dst) -} diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/utils_mmimdb.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/utils_mmimdb.py deleted file mode 100644 index df8e38d59749ed736b4d97d6548f89f38b85961f..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/utils_mmimdb.py +++ /dev/null @@ -1,146 +0,0 @@ -# coding=utf-8 -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json -import os -from collections import Counter - -import torch -import torchvision -import torchvision.transforms as transforms -from PIL import Image -from torch import nn -from torch.utils.data import Dataset - - -POOLING_BREAKDOWN = {1: (1, 1), 2: (2, 1), 3: (3, 1), 4: (2, 2), 5: (5, 1), 6: (3, 2), 7: (7, 1), 8: (4, 2), 9: (3, 3)} - - -class ImageEncoder(nn.Module): - def __init__(self, args): - super().__init__() - model = torchvision.models.resnet152(pretrained=True) - modules = list(model.children())[:-2] - self.model = nn.Sequential(*modules) - self.pool = nn.AdaptiveAvgPool2d(POOLING_BREAKDOWN[args.num_image_embeds]) - - def forward(self, x): - # Bx3x224x224 -> Bx2048x7x7 -> Bx2048xN -> BxNx2048 - out = self.pool(self.model(x)) - out = torch.flatten(out, start_dim=2) - out = out.transpose(1, 2).contiguous() - return out # BxNx2048 - - -class JsonlDataset(Dataset): - def __init__(self, data_path, tokenizer, transforms, labels, max_seq_length): - self.data = [json.loads(l) for l in open(data_path)] - self.data_dir = os.path.dirname(data_path) - self.tokenizer = tokenizer - self.labels = labels - self.n_classes = len(labels) - self.max_seq_length = max_seq_length - - self.transforms = transforms - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - sentence = torch.LongTensor(self.tokenizer.encode(self.data[index]["text"], add_special_tokens=True)) - start_token, sentence, end_token = sentence[0], sentence[1:-1], sentence[-1] - sentence = sentence[: self.max_seq_length] - - label = torch.zeros(self.n_classes) - label[[self.labels.index(tgt) for tgt in self.data[index]["label"]]] = 1 - - image = Image.open(os.path.join(self.data_dir, self.data[index]["img"])).convert("RGB") - image = self.transforms(image) - - return { - "image_start_token": start_token, - "image_end_token": end_token, - "sentence": sentence, - "image": image, - "label": label, - } - - def get_label_frequencies(self): - label_freqs = Counter() - for row in self.data: - label_freqs.update(row["label"]) - return label_freqs - - -def collate_fn(batch): - lens = [len(row["sentence"]) for row in batch] - bsz, max_seq_len = len(batch), max(lens) - - mask_tensor = torch.zeros(bsz, max_seq_len, dtype=torch.long) - text_tensor = torch.zeros(bsz, max_seq_len, dtype=torch.long) - - for i_batch, (input_row, length) in enumerate(zip(batch, lens)): - text_tensor[i_batch, :length] = input_row["sentence"] - mask_tensor[i_batch, :length] = 1 - - img_tensor = torch.stack([row["image"] for row in batch]) - tgt_tensor = torch.stack([row["label"] for row in batch]) - img_start_token = torch.stack([row["image_start_token"] for row in batch]) - img_end_token = torch.stack([row["image_end_token"] for row in batch]) - - return text_tensor, mask_tensor, img_tensor, img_start_token, img_end_token, tgt_tensor - - -def get_mmimdb_labels(): - return [ - "Crime", - "Drama", - "Thriller", - "Action", - "Comedy", - "Romance", - "Documentary", - "Short", - "Mystery", - "History", - "Family", - "Adventure", - "Fantasy", - "Sci-Fi", - "Western", - "Horror", - "Sport", - "War", - "Music", - "Musical", - "Animation", - "Biography", - "Film-Noir", - ] - - -def get_image_transforms(): - return transforms.Compose( - [ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=[0.46777044, 0.44531429, 0.40661017], - std=[0.12221994, 0.12145835, 0.14380469], - ), - ] - ) diff --git a/spaces/chenxx/ChuanhuChatGPT/chatgpt - macOS.command 
b/spaces/chenxx/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/chenxx/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http.py deleted file mode 100644 index ca9dc54b215f7977970658250f23e3be137f1b3e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http.py +++ /dev/null @@ -1,70 +0,0 @@ -import http.server -import sys -from typing import Mapping, Tuple - -from . import __version__ -from .http_exceptions import HttpProcessingError as HttpProcessingError -from .http_parser import ( - HeadersParser as HeadersParser, - HttpParser as HttpParser, - HttpRequestParser as HttpRequestParser, - HttpResponseParser as HttpResponseParser, - RawRequestMessage as RawRequestMessage, - RawResponseMessage as RawResponseMessage, -) -from .http_websocket import ( - WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE, - WS_KEY as WS_KEY, - WebSocketError as WebSocketError, - WebSocketReader as WebSocketReader, - WebSocketWriter as WebSocketWriter, - WSCloseCode as WSCloseCode, - WSMessage as WSMessage, - WSMsgType as WSMsgType, - ws_ext_gen as ws_ext_gen, - ws_ext_parse as ws_ext_parse, -) -from .http_writer import ( - HttpVersion as HttpVersion, - HttpVersion10 as HttpVersion10, - HttpVersion11 as HttpVersion11, - StreamWriter as StreamWriter, -) - -__all__ = ( - "HttpProcessingError", - "RESPONSES", - "SERVER_SOFTWARE", - # .http_writer - "StreamWriter", - "HttpVersion", - "HttpVersion10", - "HttpVersion11", - # .http_parser - "HeadersParser", - "HttpParser", - "HttpRequestParser", - "HttpResponseParser", - "RawRequestMessage", - "RawResponseMessage", - # .http_websocket - "WS_CLOSED_MESSAGE", - "WS_CLOSING_MESSAGE", - "WS_KEY", - "WebSocketReader", - "WebSocketWriter", - "ws_ext_gen", - "ws_ext_parse", - "WSMessage", - "WebSocketError", - "WSMsgType", - "WSCloseCode", -) - - -SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format( - sys.version_info, __version__ -) - -RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/variables.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/variables.py deleted file mode 100644 index 667f2f26ff2182ecdfc5b809ba97a6cf1d1be13a..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/variables.py +++ /dev/null @@ -1,86 +0,0 @@ -import re -from abc import ABCMeta, abstractmethod -from typing import Iterator, Mapping, Optional, Pattern - -_posix_variable: Pattern[str] = re.compile( - r""" - \$\{ - (?P[^\}:]*) - (?::- - (?P[^\}]*) - )? 
- \} - """, - re.VERBOSE, -) - - -class Atom(metaclass=ABCMeta): - def __ne__(self, other: object) -> bool: - result = self.__eq__(other) - if result is NotImplemented: - return NotImplemented - return not result - - @abstractmethod - def resolve(self, env: Mapping[str, Optional[str]]) -> str: ... - - -class Literal(Atom): - def __init__(self, value: str) -> None: - self.value = value - - def __repr__(self) -> str: - return f"Literal(value={self.value})" - - def __eq__(self, other: object) -> bool: - if not isinstance(other, self.__class__): - return NotImplemented - return self.value == other.value - - def __hash__(self) -> int: - return hash((self.__class__, self.value)) - - def resolve(self, env: Mapping[str, Optional[str]]) -> str: - return self.value - - -class Variable(Atom): - def __init__(self, name: str, default: Optional[str]) -> None: - self.name = name - self.default = default - - def __repr__(self) -> str: - return f"Variable(name={self.name}, default={self.default})" - - def __eq__(self, other: object) -> bool: - if not isinstance(other, self.__class__): - return NotImplemented - return (self.name, self.default) == (other.name, other.default) - - def __hash__(self) -> int: - return hash((self.__class__, self.name, self.default)) - - def resolve(self, env: Mapping[str, Optional[str]]) -> str: - default = self.default if self.default is not None else "" - result = env.get(self.name, default) - return result if result is not None else "" - - -def parse_variables(value: str) -> Iterator[Atom]: - cursor = 0 - - for match in _posix_variable.finditer(value): - (start, end) = match.span() - name = match["name"] - default = match["default"] - - if start > cursor: - yield Literal(value=value[cursor:start]) - - yield Variable(name=name, default=default) - cursor = end - - length = len(value) - if cursor < length: - yield Literal(value=value[cursor:length]) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/ranged_response.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/ranged_response.py deleted file mode 100644 index 88eb696184e56f683f8feabbf895a1bd6346a667..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/ranged_response.py +++ /dev/null @@ -1,185 +0,0 @@ -# Taken from https://gist.github.com/kevinastone/a6a62db57577b3f24e8a6865ed311463 -# Context: https://github.com/encode/starlette/pull/1090 -from __future__ import annotations - -import os -import re -import stat -from typing import NamedTuple -from urllib.parse import quote - -import aiofiles -from aiofiles.os import stat as aio_stat -from starlette.datastructures import Headers -from starlette.exceptions import HTTPException -from starlette.responses import Response, guess_type -from starlette.staticfiles import StaticFiles -from starlette.types import Receive, Scope, Send - -RANGE_REGEX = re.compile(r"^bytes=(?P\d+)-(?P\d*)$") - - -class ClosedRange(NamedTuple): - start: int - end: int - - def __len__(self) -> int: - return self.end - self.start + 1 - - def __bool__(self) -> bool: - return len(self) > 0 - - -class OpenRange(NamedTuple): - start: int - end: int | None = None - - def clamp(self, start: int, end: int) -> ClosedRange: - begin = max(self.start, start) - end = min(x for x in (self.end, end) if x) - - begin = min(begin, end) - end = max(begin, end) - - return ClosedRange(begin, end) - - -class RangedFileResponse(Response): - chunk_size = 4096 - - def __init__( - 
self, - path: str | os.PathLike, - range: OpenRange, - headers: dict[str, str] | None = None, - media_type: str | None = None, - filename: str | None = None, - stat_result: os.stat_result | None = None, - method: str | None = None, - ) -> None: - assert aiofiles is not None, "'aiofiles' must be installed to use FileResponse" - self.path = path - self.range = range - self.filename = filename - self.background = None - self.send_header_only = method is not None and method.upper() == "HEAD" - if media_type is None: - media_type = guess_type(filename or path)[0] or "text/plain" - self.media_type = media_type - self.init_headers(headers or {}) - if self.filename is not None: - content_disposition_filename = quote(self.filename) - if content_disposition_filename != self.filename: - content_disposition = ( - f"attachment; filename*=utf-8''{content_disposition_filename}" - ) - else: - content_disposition = f'attachment; filename="{self.filename}"' - self.headers.setdefault("content-disposition", content_disposition) - self.stat_result = stat_result - - def set_range_headers(self, range: ClosedRange) -> None: - assert self.stat_result - total_length = self.stat_result.st_size - content_length = len(range) - self.headers[ - "content-range" - ] = f"bytes {range.start}-{range.end}/{total_length}" - self.headers["content-length"] = str(content_length) - pass - - async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: - if self.stat_result is None: - try: - stat_result = await aio_stat(self.path) - self.stat_result = stat_result - except FileNotFoundError as fnfe: - raise RuntimeError( - f"File at path {self.path} does not exist." - ) from fnfe - else: - mode = stat_result.st_mode - if not stat.S_ISREG(mode): - raise RuntimeError(f"File at path {self.path} is not a file.") - - byte_range = self.range.clamp(0, self.stat_result.st_size) - self.set_range_headers(byte_range) - - async with aiofiles.open(self.path, mode="rb") as file: - await file.seek(byte_range.start) - await send( - { - "type": "http.response.start", - "status": 206, - "headers": self.raw_headers, - } - ) - if self.send_header_only: - await send( - {"type": "http.response.body", "body": b"", "more_body": False} - ) - else: - remaining_bytes = len(byte_range) - - if not byte_range: - await send( - {"type": "http.response.body", "body": b"", "more_body": False} - ) - return - - while remaining_bytes > 0: - chunk_size = min(self.chunk_size, remaining_bytes) - chunk = await file.read(chunk_size) - remaining_bytes -= len(chunk) - await send( - { - "type": "http.response.body", - "body": chunk, - "more_body": remaining_bytes > 0, - } - ) - - -class RangedStaticFiles(StaticFiles): - def file_response( - self, - full_path: str | os.PathLike, - stat_result: os.stat_result, - scope: Scope, - status_code: int = 200, - ) -> Response: - request_headers = Headers(scope=scope) - - if request_headers.get("range"): - response = self.ranged_file_response( - full_path, stat_result=stat_result, scope=scope - ) - else: - response = super().file_response( - full_path, stat_result=stat_result, scope=scope, status_code=status_code - ) - response.headers["accept-ranges"] = "bytes" - return response - - def ranged_file_response( - self, - full_path: str | os.PathLike, - stat_result: os.stat_result, - scope: Scope, - ) -> Response: - method = scope["method"] - request_headers = Headers(scope=scope) - - range_header = request_headers["range"] - - match = RANGE_REGEX.search(range_header) - if not match: - raise HTTPException(400) - - start, end 
= match.group("start"), match.group("end") - - range = OpenRange(int(start), int(end) if end else None) - - return RangedFileResponse( - full_path, range, stat_result=stat_result, method=method - ) diff --git a/spaces/cihyFjudo/fairness-paper-search/.md b/spaces/cihyFjudo/fairness-paper-search/.md deleted file mode 100644 index 885af9dba80e82571bc932cfe305c7f401cd29f3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/.md +++ /dev/null @@ -1,55 +0,0 @@ -## Spaghetti 24 7 Download Utorrent - - - - " width="300"> - - - -**Click Here >>>>> [https://walllowcopo.blogspot.com/?download=2twr2T](https://walllowcopo.blogspot.com/?download=2twr2T)** - - - -# Spaghetti 24/7: A Bollywood Comedy You Can't Miss - - - -If you are looking for a fun and light-hearted movie to watch, you might want to check out Spaghetti 24/7, a Bollywood comedy that was released in 2005. The movie follows the lives of four friends who share a love for spaghetti and get into hilarious situations involving romance, family, and mafia. - - - -The movie stars Shweta Menon, Ashmit Patel, Kiran Janjani, and Reema Sen as the four friends who work at a call center and live together in a rented apartment. They cook spaghetti every day and night, hence the title of the movie. Their lives get complicated when they fall in love with different people and have to deal with their respective families and backgrounds. - - - -The movie also features Anupam Kher, Rajpal Yadav, Rakesh Bedi, and Shakti Kapoor in supporting roles. The movie is directed by Gaurav Pandey and produced by Subhash Ghai. The movie has a rating of 5.3 out of 10 on IMDb and received mixed reviews from critics and audiences. - - - -Spaghetti 24/7 is available for download on various torrent sites such as The Pirate Bay[^1^], YTS[^1^], and 1337x[^1^]. However, downloading movies from torrent sites is illegal and may expose you to malware and viruses. It is recommended that you watch the movie legally on streaming platforms or DVD. - - - -Spaghetti 24/7 is a comedy movie that has some funny moments and dialogues. The movie also tries to explore the themes of friendship, love, and cultural differences. The movie has a colorful and vibrant look and a catchy soundtrack. The movie is suitable for people who enjoy light-hearted and quirky movies. - - - -However, the movie also has some flaws and drawbacks. The movie has a weak plot and a predictable ending. The movie also relies on stereotypes and cliches to create humor. The movie has some scenes that are unrealistic and illogical. The movie may not appeal to people who are looking for a serious or realistic movie. - - - -Spaghetti 24/7 is a movie that you can watch if you are in the mood for some laughter and entertainment. The movie is not a masterpiece, but it is not a disaster either. The movie is a decent attempt at making a comedy movie with a Bollywood twist. - - - -Spaghetti 24/7 is a movie that has some similarities and differences with another Bollywood comedy, Hungama. Both movies are about a group of friends who get into trouble because of their love interests and their families. Both movies have a lot of confusion and chaos that leads to comedy. Both movies have Anupam Kher and Rajpal Yadav in supporting roles. - - - -However, Spaghetti 24/7 is not as successful or popular as Hungama. Hungama has a better script and direction than Spaghetti 24/7. Hungama has more memorable characters and performances than Spaghetti 24/7. 
Hungama has a higher rating and more positive reviews than Spaghetti 24/7. Hungama is considered to be one of the best comedy movies of Bollywood, while Spaghetti 24/7 is mostly forgotten. - - - -Spaghetti 24/7 is a movie that deserves a chance to be watched by comedy lovers. The movie may not be perfect, but it has its own charm and humor. The movie may make you laugh and smile, and also make you hungry for some spaghetti. The movie is a fun and enjoyable way to spend some time with your friends and family. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Adobe Photoshop CS 2018 V19.4.0.98906 Crack Serial Key Keygen The Best Tool for Professional and Creative Designers.md b/spaces/cihyFjudo/fairness-paper-search/Adobe Photoshop CS 2018 V19.4.0.98906 Crack Serial Key Keygen The Best Tool for Professional and Creative Designers.md deleted file mode 100644 index 5ffa4dc6eeb3e19ccf1952cec07c998f204be0ca..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Adobe Photoshop CS 2018 V19.4.0.98906 Crack Serial Key Keygen The Best Tool for Professional and Creative Designers.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Adobe Photoshop CS 2018 V19.4.0.98906 Crack Serial Key Keygen


          Downloadhttps://tinurli.com/2uwhAo



          - - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/test_utils.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/test_utils.py deleted file mode 100644 index fcda2f3ddc045a381470012ba331c75299af4981..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/test_utils.py +++ /dev/null @@ -1,706 +0,0 @@ -"""Utilities shared by tests.""" - -import asyncio -import contextlib -import gc -import inspect -import ipaddress -import os -import socket -import sys -import warnings -from abc import ABC, abstractmethod -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Iterator, - List, - Optional, - Type, - Union, - cast, -) -from unittest import mock - -from aiosignal import Signal -from multidict import CIMultiDict, CIMultiDictProxy -from yarl import URL - -import aiohttp -from aiohttp.client import _RequestContextManager, _WSRequestContextManager - -from . import ClientSession, hdrs -from .abc import AbstractCookieJar -from .client_reqrep import ClientResponse -from .client_ws import ClientWebSocketResponse -from .helpers import PY_38, sentinel -from .http import HttpVersion, RawRequestMessage -from .web import ( - Application, - AppRunner, - BaseRunner, - Request, - Server, - ServerRunner, - SockSite, - UrlMappingMatchInfo, -) -from .web_protocol import _RequestHandler - -if TYPE_CHECKING: # pragma: no cover - from ssl import SSLContext -else: - SSLContext = None - -if PY_38: - from unittest import IsolatedAsyncioTestCase as TestCase -else: - from asynctest import TestCase # type: ignore[no-redef] - -REUSE_ADDRESS = os.name == "posix" and sys.platform != "cygwin" - - -def get_unused_port_socket( - host: str, family: socket.AddressFamily = socket.AF_INET -) -> socket.socket: - return get_port_socket(host, 0, family) - - -def get_port_socket( - host: str, port: int, family: socket.AddressFamily -) -> socket.socket: - s = socket.socket(family, socket.SOCK_STREAM) - if REUSE_ADDRESS: - # Windows has different semantics for SO_REUSEADDR, - # so don't set it. 
Ref: - # https://docs.microsoft.com/en-us/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - s.bind((host, port)) - return s - - -def unused_port() -> int: - """Return a port that is unused on the current host.""" - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - s.bind(("127.0.0.1", 0)) - return cast(int, s.getsockname()[1]) - - -class BaseTestServer(ABC): - __test__ = False - - def __init__( - self, - *, - scheme: Union[str, object] = sentinel, - loop: Optional[asyncio.AbstractEventLoop] = None, - host: str = "127.0.0.1", - port: Optional[int] = None, - skip_url_asserts: bool = False, - socket_factory: Callable[ - [str, int, socket.AddressFamily], socket.socket - ] = get_port_socket, - **kwargs: Any, - ) -> None: - self._loop = loop - self.runner: Optional[BaseRunner] = None - self._root: Optional[URL] = None - self.host = host - self.port = port - self._closed = False - self.scheme = scheme - self.skip_url_asserts = skip_url_asserts - self.socket_factory = socket_factory - - async def start_server( - self, loop: Optional[asyncio.AbstractEventLoop] = None, **kwargs: Any - ) -> None: - if self.runner: - return - self._loop = loop - self._ssl = kwargs.pop("ssl", None) - self.runner = await self._make_runner(**kwargs) - await self.runner.setup() - if not self.port: - self.port = 0 - try: - version = ipaddress.ip_address(self.host).version - except ValueError: - version = 4 - family = socket.AF_INET6 if version == 6 else socket.AF_INET - _sock = self.socket_factory(self.host, self.port, family) - self.host, self.port = _sock.getsockname()[:2] - site = SockSite(self.runner, sock=_sock, ssl_context=self._ssl) - await site.start() - server = site._server - assert server is not None - sockets = server.sockets - assert sockets is not None - self.port = sockets[0].getsockname()[1] - if self.scheme is sentinel: - if self._ssl: - scheme = "https" - else: - scheme = "http" - self.scheme = scheme - self._root = URL(f"{self.scheme}://{self.host}:{self.port}") - - @abstractmethod # pragma: no cover - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - pass - - def make_url(self, path: str) -> URL: - assert self._root is not None - url = URL(path) - if not self.skip_url_asserts: - assert not url.is_absolute() - return self._root.join(url) - else: - return URL(str(self._root) + path) - - @property - def started(self) -> bool: - return self.runner is not None - - @property - def closed(self) -> bool: - return self._closed - - @property - def handler(self) -> Server: - # for backward compatibility - # web.Server instance - runner = self.runner - assert runner is not None - assert runner.server is not None - return runner.server - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run when the object is garbage collected, and on - exit when used as a context manager. 
- - """ - if self.started and not self.closed: - assert self.runner is not None - await self.runner.cleanup() - self._root = None - self.port = None - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "BaseTestServer": - await self.start_server(loop=self._loop) - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - await self.close() - - -class TestServer(BaseTestServer): - def __init__( - self, - app: Application, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ): - self.app = app - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - return AppRunner(self.app, **kwargs) - - -class RawTestServer(BaseTestServer): - def __init__( - self, - handler: _RequestHandler, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ) -> None: - self._handler = handler - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, debug: bool = True, **kwargs: Any) -> ServerRunner: - srv = Server(self._handler, loop=self._loop, debug=debug, **kwargs) - return ServerRunner(srv, debug=debug, **kwargs) - - -class TestClient: - """ - A test client implementation. - - To write functional tests for aiohttp based servers. - - """ - - __test__ = False - - def __init__( - self, - server: BaseTestServer, - *, - cookie_jar: Optional[AbstractCookieJar] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - **kwargs: Any, - ) -> None: - if not isinstance(server, BaseTestServer): - raise TypeError( - "server must be TestServer " "instance, found type: %r" % type(server) - ) - self._server = server - self._loop = loop - if cookie_jar is None: - cookie_jar = aiohttp.CookieJar(unsafe=True, loop=loop) - self._session = ClientSession(loop=loop, cookie_jar=cookie_jar, **kwargs) - self._closed = False - self._responses: List[ClientResponse] = [] - self._websockets: List[ClientWebSocketResponse] = [] - - async def start_server(self) -> None: - await self._server.start_server(loop=self._loop) - - @property - def host(self) -> str: - return self._server.host - - @property - def port(self) -> Optional[int]: - return self._server.port - - @property - def server(self) -> BaseTestServer: - return self._server - - @property - def app(self) -> Optional[Application]: - return cast(Optional[Application], getattr(self._server, "app", None)) - - @property - def session(self) -> ClientSession: - """An internal aiohttp.ClientSession. - - Unlike the methods on the TestClient, client session requests - do not automatically include the host in the url queried, and - will require an absolute path to the resource. 
- - """ - return self._session - - def make_url(self, path: str) -> URL: - return self._server.make_url(path) - - async def _request(self, method: str, path: str, **kwargs: Any) -> ClientResponse: - resp = await self._session.request(method, self.make_url(path), **kwargs) - # save it to close later - self._responses.append(resp) - return resp - - def request(self, method: str, path: str, **kwargs: Any) -> _RequestContextManager: - """Routes a request to tested http server. - - The interface is identical to aiohttp.ClientSession.request, - except the loop kwarg is overridden by the instance used by the - test server. - - """ - return _RequestContextManager(self._request(method, path, **kwargs)) - - def get(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP GET request.""" - return _RequestContextManager(self._request(hdrs.METH_GET, path, **kwargs)) - - def post(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP POST request.""" - return _RequestContextManager(self._request(hdrs.METH_POST, path, **kwargs)) - - def options(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP OPTIONS request.""" - return _RequestContextManager(self._request(hdrs.METH_OPTIONS, path, **kwargs)) - - def head(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP HEAD request.""" - return _RequestContextManager(self._request(hdrs.METH_HEAD, path, **kwargs)) - - def put(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PUT request.""" - return _RequestContextManager(self._request(hdrs.METH_PUT, path, **kwargs)) - - def patch(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_PATCH, path, **kwargs)) - - def delete(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_DELETE, path, **kwargs)) - - def ws_connect(self, path: str, **kwargs: Any) -> _WSRequestContextManager: - """Initiate websocket connection. - - The api corresponds to aiohttp.ClientSession.ws_connect. - - """ - return _WSRequestContextManager(self._ws_connect(path, **kwargs)) - - async def _ws_connect(self, path: str, **kwargs: Any) -> ClientWebSocketResponse: - ws = await self._session.ws_connect(self.make_url(path), **kwargs) - self._websockets.append(ws) - return ws - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run on exit when used as a(n) (asynchronous) - context manager. 
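# ---------------------------------------------------------------------------
# Editor's usage sketch (illustrative only, not part of aiohttp.test_utils):
# exercising TestClient.ws_connect, which tracks opened websockets so that
# client.close() can shut them down.  The `echo` handler and the "/ws" route
# are assumptions made for this example.
import asyncio

from aiohttp import WSMsgType, web
from aiohttp.test_utils import TestClient, TestServer


async def echo(request: web.Request) -> web.WebSocketResponse:
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            await ws.send_str(msg.data)
    return ws


async def run_ws_test() -> None:
    app = web.Application()
    app.router.add_get("/ws", echo)
    async with TestClient(TestServer(app)) as client:
        ws = await client.ws_connect("/ws")
        await ws.send_str("ping")
        assert await ws.receive_str() == "ping"
        await ws.close()


if __name__ == "__main__":
    asyncio.run(run_ws_test())
# ---------------------------------------------------------------------------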
- - """ - if not self._closed: - for resp in self._responses: - resp.close() - for ws in self._websockets: - await ws.close() - await self._session.close() - await self._server.close() - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "TestClient": - await self.start_server() - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - await self.close() - - -class AioHTTPTestCase(TestCase): - """A base class to allow for unittest web applications using aiohttp. - - Provides the following: - - * self.client (aiohttp.test_utils.TestClient): an aiohttp test client. - * self.loop (asyncio.BaseEventLoop): the event loop in which the - application and server are running. - * self.app (aiohttp.web.Application): the application returned by - self.get_application() - - Note that the TestClient's methods are asynchronous: you have to - execute function on the test client using asynchronous methods. - """ - - async def get_application(self) -> Application: - """Get application. - - This method should be overridden - to return the aiohttp.web.Application - object to test. - """ - return self.get_app() - - def get_app(self) -> Application: - """Obsolete method used to constructing web application. - - Use .get_application() coroutine instead. - """ - raise RuntimeError("Did you forget to define get_application()?") - - def setUp(self) -> None: - if not PY_38: - asyncio.get_event_loop().run_until_complete(self.asyncSetUp()) - - async def asyncSetUp(self) -> None: - try: - self.loop = asyncio.get_running_loop() - except (AttributeError, RuntimeError): # AttributeError->py36 - self.loop = asyncio.get_event_loop_policy().get_event_loop() - - return await self.setUpAsync() - - async def setUpAsync(self) -> None: - self.app = await self.get_application() - self.server = await self.get_server(self.app) - self.client = await self.get_client(self.server) - - await self.client.start_server() - - def tearDown(self) -> None: - if not PY_38: - self.loop.run_until_complete(self.asyncTearDown()) - - async def asyncTearDown(self) -> None: - return await self.tearDownAsync() - - async def tearDownAsync(self) -> None: - await self.client.close() - - async def get_server(self, app: Application) -> TestServer: - """Return a TestServer instance.""" - return TestServer(app, loop=self.loop) - - async def get_client(self, server: TestServer) -> TestClient: - """Return a TestClient instance.""" - return TestClient(server, loop=self.loop) - - -def unittest_run_loop(func: Any, *args: Any, **kwargs: Any) -> Any: - """ - A decorator dedicated to use with asynchronous AioHTTPTestCase test methods. - - In 3.8+, this does nothing. - """ - warnings.warn( - "Decorator `@unittest_run_loop` is no longer needed in aiohttp 3.8+", - DeprecationWarning, - stacklevel=2, - ) - return func - - -_LOOP_FACTORY = Callable[[], asyncio.AbstractEventLoop] - - -@contextlib.contextmanager -def loop_context( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, fast: bool = False -) -> Iterator[asyncio.AbstractEventLoop]: - """A contextmanager that creates an event_loop, for test purposes. 
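# ---------------------------------------------------------------------------
# Editor's usage sketch (illustrative only, not part of aiohttp.test_utils):
# subclassing AioHTTPTestCase, in the form that should work on aiohttp 3.8+
# with Python 3.8+, where async test methods run directly and the
# @unittest_run_loop decorator is no longer needed.  Handler and route names
# are assumptions made for this example.
import unittest

from aiohttp import web
from aiohttp.test_utils import AioHTTPTestCase


class HelloAppTestCase(AioHTTPTestCase):
    async def get_application(self) -> web.Application:
        async def hello(request: web.Request) -> web.Response:
            return web.Response(text="Hello, aiohttp")

        app = web.Application()
        app.router.add_get("/", hello)
        return app

    async def test_hello(self) -> None:
        # self.app, self.server and self.client are prepared by setUpAsync().
        async with self.client.request("GET", "/") as resp:
            self.assertEqual(resp.status, 200)
            self.assertIn("aiohttp", await resp.text())


if __name__ == "__main__":
    unittest.main()
# ---------------------------------------------------------------------------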
- - Handles the creation and cleanup of a test loop. - """ - loop = setup_test_loop(loop_factory) - yield loop - teardown_test_loop(loop, fast=fast) - - -def setup_test_loop( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, -) -> asyncio.AbstractEventLoop: - """Create and return an asyncio.BaseEventLoop instance. - - The caller should also call teardown_test_loop, - once they are done with the loop. - """ - loop = loop_factory() - try: - module = loop.__class__.__module__ - skip_watcher = "uvloop" in module - except AttributeError: # pragma: no cover - # Just in case - skip_watcher = True - asyncio.set_event_loop(loop) - if sys.platform != "win32" and not skip_watcher: - policy = asyncio.get_event_loop_policy() - watcher: asyncio.AbstractChildWatcher - try: # Python >= 3.8 - # Refs: - # * https://github.com/pytest-dev/pytest-xdist/issues/620 - # * https://stackoverflow.com/a/58614689/595220 - # * https://bugs.python.org/issue35621 - # * https://github.com/python/cpython/pull/14344 - watcher = asyncio.ThreadedChildWatcher() - except AttributeError: # Python < 3.8 - watcher = asyncio.SafeChildWatcher() - watcher.attach_loop(loop) - with contextlib.suppress(NotImplementedError): - policy.set_child_watcher(watcher) - return loop - - -def teardown_test_loop(loop: asyncio.AbstractEventLoop, fast: bool = False) -> None: - """Teardown and cleanup an event_loop created by setup_test_loop.""" - closed = loop.is_closed() - if not closed: - loop.call_soon(loop.stop) - loop.run_forever() - loop.close() - - if not fast: - gc.collect() - - asyncio.set_event_loop(None) - - -def _create_app_mock() -> mock.MagicMock: - def get_dict(app: Any, key: str) -> Any: - return app.__app_dict[key] - - def set_dict(app: Any, key: str, value: Any) -> None: - app.__app_dict[key] = value - - app = mock.MagicMock(spec=Application) - app.__app_dict = {} - app.__getitem__ = get_dict - app.__setitem__ = set_dict - - app._debug = False - app.on_response_prepare = Signal(app) - app.on_response_prepare.freeze() - return app - - -def _create_transport(sslcontext: Optional[SSLContext] = None) -> mock.Mock: - transport = mock.Mock() - - def get_extra_info(key: str) -> Optional[SSLContext]: - if key == "sslcontext": - return sslcontext - else: - return None - - transport.get_extra_info.side_effect = get_extra_info - return transport - - -def make_mocked_request( - method: str, - path: str, - headers: Any = None, - *, - match_info: Any = sentinel, - version: HttpVersion = HttpVersion(1, 1), - closing: bool = False, - app: Any = None, - writer: Any = sentinel, - protocol: Any = sentinel, - transport: Any = sentinel, - payload: Any = sentinel, - sslcontext: Optional[SSLContext] = None, - client_max_size: int = 1024**2, - loop: Any = ..., -) -> Request: - """Creates mocked web.Request testing purposes. - - Useful in unit tests, when spinning full web server is overkill or - specific conditions and errors are hard to trigger. 
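# ---------------------------------------------------------------------------
# Editor's usage sketch (illustrative only, not part of aiohttp.test_utils):
# unit-testing a handler with make_mocked_request (no running server needed)
# and faking an awaitable dependency with make_mocked_coro.  The `greet`
# handler and the match_info key are assumptions made for this example.
import asyncio

from aiohttp import web
from aiohttp.test_utils import make_mocked_coro, make_mocked_request


async def greet(request: web.Request) -> web.Response:
    name = request.match_info.get("name", "anonymous")
    return web.Response(text=f"Hello, {name}")


async def run_unit_test() -> None:
    req = make_mocked_request("GET", "/greet/bob", match_info={"name": "bob"})
    resp = await greet(req)
    assert resp.text == "Hello, bob"

    # make_mocked_coro returns a mock.Mock wrapping a coroutine function, so
    # the call can be awaited and then inspected like any other mock.
    fetch = make_mocked_coro(return_value={"ok": True})
    assert await fetch("https://example.invalid") == {"ok": True}
    fetch.assert_called_once()


if __name__ == "__main__":
    asyncio.run(run_unit_test())
# ---------------------------------------------------------------------------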
- """ - task = mock.Mock() - if loop is ...: - loop = mock.Mock() - loop.create_future.return_value = () - - if version < HttpVersion(1, 1): - closing = True - - if headers: - headers = CIMultiDictProxy(CIMultiDict(headers)) - raw_hdrs = tuple( - (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items() - ) - else: - headers = CIMultiDictProxy(CIMultiDict()) - raw_hdrs = () - - chunked = "chunked" in headers.get(hdrs.TRANSFER_ENCODING, "").lower() - - message = RawRequestMessage( - method, - path, - version, - headers, - raw_hdrs, - closing, - None, - False, - chunked, - URL(path), - ) - if app is None: - app = _create_app_mock() - - if transport is sentinel: - transport = _create_transport(sslcontext) - - if protocol is sentinel: - protocol = mock.Mock() - protocol.transport = transport - - if writer is sentinel: - writer = mock.Mock() - writer.write_headers = make_mocked_coro(None) - writer.write = make_mocked_coro(None) - writer.write_eof = make_mocked_coro(None) - writer.drain = make_mocked_coro(None) - writer.transport = transport - - protocol.transport = transport - protocol.writer = writer - - if payload is sentinel: - payload = mock.Mock() - - req = Request( - message, payload, protocol, writer, task, loop, client_max_size=client_max_size - ) - - match_info = UrlMappingMatchInfo( - {} if match_info is sentinel else match_info, mock.Mock() - ) - match_info.add_app(app) - req._match_info = match_info - - return req - - -def make_mocked_coro( - return_value: Any = sentinel, raise_exception: Any = sentinel -) -> Any: - """Creates a coroutine mock.""" - - async def mock_coro(*args: Any, **kwargs: Any) -> Any: - if raise_exception is not sentinel: - raise raise_exception - if not inspect.isawaitable(return_value): - return return_value - await return_value - - return mock.Mock(wraps=mock_coro) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/benchmark.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/benchmark.py deleted file mode 100644 index cee55f5e7d9bffba11859caae02255bcad77e17d..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/benchmark.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Benchmark the qu2cu algorithm performance.""" - -from .qu2cu import * -from fontTools.cu2qu import curve_to_quadratic -import random -import timeit - -MAX_ERR = 0.5 -NUM_CURVES = 5 - - -def generate_curves(n): - points = [ - tuple(float(random.randint(0, 2048)) for coord in range(2)) - for point in range(1 + 3 * n) - ] - curves = [] - for i in range(n): - curves.append(tuple(points[i * 3 : i * 3 + 4])) - return curves - - -def setup_quadratic_to_curves(): - curves = generate_curves(NUM_CURVES) - quadratics = [curve_to_quadratic(curve, MAX_ERR) for curve in curves] - return quadratics, MAX_ERR - - -def run_benchmark(module, function, setup_suffix="", repeat=25, number=1): - setup_func = "setup_" + function - if setup_suffix: - print("%s with %s:" % (function, setup_suffix), end="") - setup_func += "_" + setup_suffix - else: - print("%s:" % function, end="") - - def wrapper(function, setup_func): - function = 
globals()[function] - setup_func = globals()[setup_func] - - def wrapped(): - return function(*setup_func()) - - return wrapped - - results = timeit.repeat(wrapper(function, setup_func), repeat=repeat, number=number) - print("\t%5.1fus" % (min(results) * 1000000.0 / number)) - - -def main(): - """Benchmark the qu2cu algorithm performance.""" - run_benchmark("qu2cu", "quadratic_to_curves") - - -if __name__ == "__main__": - random.seed(1) - main() diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_F_F_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_F_F_.py deleted file mode 100644 index c231599e37b3a5864a774387d717baf297957876..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_F_F_.py +++ /dev/null @@ -1,46 +0,0 @@ -from io import BytesIO -from fontTools import cffLib -from . import DefaultTable - - -class table_C_F_F_(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.cff = cffLib.CFFFontSet() - self._gaveGlyphOrder = False - - def decompile(self, data, otFont): - self.cff.decompile(BytesIO(data), otFont, isCFF2=False) - assert len(self.cff) == 1, "can't deal with multi-font CFF tables." - - def compile(self, otFont): - f = BytesIO() - self.cff.compile(f, otFont, isCFF2=False) - return f.getvalue() - - def haveGlyphNames(self): - if hasattr(self.cff[self.cff.fontNames[0]], "ROS"): - return False # CID-keyed font - else: - return True - - def getGlyphOrder(self): - if self._gaveGlyphOrder: - from fontTools import ttLib - - raise ttLib.TTLibError("illegal use of getGlyphOrder()") - self._gaveGlyphOrder = True - return self.cff[self.cff.fontNames[0]].getGlyphOrder() - - def setGlyphOrder(self, glyphOrder): - pass - # XXX - # self.cff[self.cff.fontNames[0]].setGlyphOrder(glyphOrder) - - def toXML(self, writer, otFont): - self.cff.toXML(writer) - - def fromXML(self, name, attrs, content, otFont): - if not hasattr(self, "cff"): - self.cff = cffLib.CFFFontSet() - self.cff.fromXML(name, attrs, content, otFont) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_c_v_a_r.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_c_v_a_r.py deleted file mode 100644 index 6ea44dbab3b0a4b0da1e5327d077873867f0b520..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_c_v_a_r.py +++ /dev/null @@ -1,86 +0,0 @@ -from . 
import DefaultTable -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytesjoin -from fontTools.ttLib.tables.TupleVariation import ( - compileTupleVariationStore, - decompileTupleVariationStore, - TupleVariation, -) - - -# https://www.microsoft.com/typography/otspec/cvar.htm -# https://www.microsoft.com/typography/otspec/otvarcommonformats.htm -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6cvar.html - -CVAR_HEADER_FORMAT = """ - > # big endian - majorVersion: H - minorVersion: H - tupleVariationCount: H - offsetToData: H -""" - -CVAR_HEADER_SIZE = sstruct.calcsize(CVAR_HEADER_FORMAT) - - -class table__c_v_a_r(DefaultTable.DefaultTable): - dependencies = ["cvt ", "fvar"] - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.majorVersion, self.minorVersion = 1, 0 - self.variations = [] - - def compile(self, ttFont, useSharedPoints=False): - tupleVariationCount, tuples, data = compileTupleVariationStore( - variations=[v for v in self.variations if v.hasImpact()], - pointCount=len(ttFont["cvt "].values), - axisTags=[axis.axisTag for axis in ttFont["fvar"].axes], - sharedTupleIndices={}, - useSharedPoints=useSharedPoints, - ) - header = { - "majorVersion": self.majorVersion, - "minorVersion": self.minorVersion, - "tupleVariationCount": tupleVariationCount, - "offsetToData": CVAR_HEADER_SIZE + len(tuples), - } - return b"".join([sstruct.pack(CVAR_HEADER_FORMAT, header), tuples, data]) - - def decompile(self, data, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - header = {} - sstruct.unpack(CVAR_HEADER_FORMAT, data[0:CVAR_HEADER_SIZE], header) - self.majorVersion = header["majorVersion"] - self.minorVersion = header["minorVersion"] - assert self.majorVersion == 1, self.majorVersion - self.variations = decompileTupleVariationStore( - tableTag=self.tableTag, - axisTags=axisTags, - tupleVariationCount=header["tupleVariationCount"], - pointCount=len(ttFont["cvt "].values), - sharedTuples=None, - data=data, - pos=CVAR_HEADER_SIZE, - dataPos=header["offsetToData"], - ) - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - self.majorVersion = int(attrs.get("major", "1")) - self.minorVersion = int(attrs.get("minor", "0")) - elif name == "tuple": - valueCount = len(ttFont["cvt "].values) - var = TupleVariation({}, [None] * valueCount) - self.variations.append(var) - for tupleElement in content: - if isinstance(tupleElement, tuple): - tupleName, tupleAttrs, tupleContent = tupleElement - var.fromXML(tupleName, tupleAttrs, tupleContent) - - def toXML(self, writer, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - writer.simpletag("version", major=self.majorVersion, minor=self.minorVersion) - writer.newline() - for var in self.variations: - var.toXML(writer, axisTags) diff --git a/spaces/cncanon/locusts/start.sh b/spaces/cncanon/locusts/start.sh deleted file mode 100644 index e3a3628780b706faa289d1d0e0c8196cffb24ed6..0000000000000000000000000000000000000000 --- a/spaces/cncanon/locusts/start.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -# Start the addon app -pm2 start /app/addon/main.js --name "addon-app" - -sleep 5 - -# Start the main app -pm2 start npm --name "main-app" -- start - - -# Keep container running -pm2 logs \ No newline at end of file diff --git a/spaces/codedog-ai/edu-assistant/edu_assistant/__init__.py b/spaces/codedog-ai/edu-assistant/edu_assistant/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mf_utils.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mf_utils.h deleted file mode 100644 index aebfb9ad21b4ee62f41f1b827bace3fa7ba3278d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mf_utils.h +++ /dev/null @@ -1,181 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_MF_UTILS_H -#define AVCODEC_MF_UTILS_H - -#include -#include -#ifdef _MSC_VER -// The official way of including codecapi (via dshow.h) makes the ICodecAPI -// interface unavailable in UWP mode, but including icodecapi.h + codecapi.h -// seems to be equivalent. (These headers conflict with the official way -// of including it though, through strmif.h via dshow.h. And on mingw, the -// mf*.h headers below indirectly include strmif.h.) -#include -#else -#define NO_DSHOW_STRSAFE -#include -// Older versions of mingw-w64 need codecapi.h explicitly included, while newer -// ones include it implicitly from dshow.h (via uuids.h). -#include -#endif -#include -#include -#include -#include - -#include "avcodec.h" - -// Windows N editions does not provide MediaFoundation by default. -// So to avoid DLL loading error, MediaFoundation will be dynamically loaded -// except on UWP build since LoadLibrary is not available on it. -typedef struct MFFunctions { - HRESULT (WINAPI *MFStartup) (ULONG Version, DWORD dwFlags); - HRESULT (WINAPI *MFShutdown) (void); - HRESULT (WINAPI *MFCreateAlignedMemoryBuffer) (DWORD cbMaxLength, - DWORD cbAligment, - IMFMediaBuffer **ppBuffer); - HRESULT (WINAPI *MFCreateSample) (IMFSample **ppIMFSample); - HRESULT (WINAPI *MFCreateMediaType) (IMFMediaType **ppMFType); - // MFTEnumEx is missing in Windows Vista's mfplat.dll. - HRESULT (WINAPI *MFTEnumEx)(GUID guidCategory, UINT32 Flags, - const MFT_REGISTER_TYPE_INFO *pInputType, - const MFT_REGISTER_TYPE_INFO *pOutputType, - IMFActivate ***pppMFTActivate, - UINT32 *pnumMFTActivate); -} MFFunctions; - -// These functions do exist in mfapi.h, but are only available within -// __cplusplus ifdefs. -HRESULT ff_MFGetAttributeSize(IMFAttributes *pattr, REFGUID guid, - UINT32 *pw, UINT32 *ph); -HRESULT ff_MFSetAttributeSize(IMFAttributes *pattr, REFGUID guid, - UINT32 uw, UINT32 uh); -#define ff_MFSetAttributeRatio ff_MFSetAttributeSize -#define ff_MFGetAttributeRatio ff_MFGetAttributeSize - -// These do exist in mingw-w64's codecapi.h, but they aren't properly defined -// by the header until after mingw-w64 v7.0.0. 
-DEFINE_GUID(ff_CODECAPI_AVDecVideoThumbnailGenerationMode, 0x2efd8eee,0x1150,0x4328,0x9c,0xf5,0x66,0xdc,0xe9,0x33,0xfc,0xf4); -DEFINE_GUID(ff_CODECAPI_AVDecVideoDropPicWithMissingRef, 0xf8226383,0x14c2,0x4567,0x97,0x34,0x50,0x04,0xe9,0x6f,0xf8,0x87); -DEFINE_GUID(ff_CODECAPI_AVDecVideoSoftwareDeinterlaceMode, 0x0c08d1ce,0x9ced,0x4540,0xba,0xe3,0xce,0xb3,0x80,0x14,0x11,0x09); -DEFINE_GUID(ff_CODECAPI_AVDecVideoFastDecodeMode, 0x6b529f7d,0xd3b1,0x49c6,0xa9,0x99,0x9e,0xc6,0x91,0x1b,0xed,0xbf); -DEFINE_GUID(ff_CODECAPI_AVLowLatencyMode, 0x9c27891a,0xed7a,0x40e1,0x88,0xe8,0xb2,0x27,0x27,0xa0,0x24,0xee); -DEFINE_GUID(ff_CODECAPI_AVDecVideoH264ErrorConcealment, 0xececace8,0x3436,0x462c,0x92,0x94,0xcd,0x7b,0xac,0xd7,0x58,0xa9); -DEFINE_GUID(ff_CODECAPI_AVDecVideoMPEG2ErrorConcealment, 0x9d2bfe18,0x728d,0x48d2,0xb3,0x58,0xbc,0x7e,0x43,0x6c,0x66,0x74); -DEFINE_GUID(ff_CODECAPI_AVDecVideoCodecType, 0x434528e5,0x21f0,0x46b6,0xb6,0x2c,0x9b,0x1b,0x6b,0x65,0x8c,0xd1); -DEFINE_GUID(ff_CODECAPI_AVDecVideoDXVAMode, 0xf758f09e,0x7337,0x4ae7,0x83,0x87,0x73,0xdc,0x2d,0x54,0xe6,0x7d); -DEFINE_GUID(ff_CODECAPI_AVDecVideoDXVABusEncryption, 0x42153c8b,0xfd0b,0x4765,0xa4,0x62,0xdd,0xd9,0xe8,0xbc,0xc3,0x88); -DEFINE_GUID(ff_CODECAPI_AVDecVideoSWPowerLevel, 0xfb5d2347,0x4dd8,0x4509,0xae,0xd0,0xdb,0x5f,0xa9,0xaa,0x93,0xf4); -DEFINE_GUID(ff_CODECAPI_AVDecVideoMaxCodedWidth, 0x5ae557b8,0x77af,0x41f5,0x9f,0xa6,0x4d,0xb2,0xfe,0x1d,0x4b,0xca); -DEFINE_GUID(ff_CODECAPI_AVDecVideoMaxCodedHeight, 0x7262a16a,0xd2dc,0x4e75,0x9b,0xa8,0x65,0xc0,0xc6,0xd3,0x2b,0x13); -DEFINE_GUID(ff_CODECAPI_AVDecNumWorkerThreads, 0x9561c3e8,0xea9e,0x4435,0x9b,0x1e,0xa9,0x3e,0x69,0x18,0x94,0xd8); -DEFINE_GUID(ff_CODECAPI_AVDecSoftwareDynamicFormatChange, 0x862e2f0a,0x507b,0x47ff,0xaf,0x47,0x01,0xe2,0x62,0x42,0x98,0xb7); -DEFINE_GUID(ff_CODECAPI_AVDecDisableVideoPostProcessing, 0xf8749193,0x667a,0x4f2c,0xa9,0xe8,0x5d,0x4a,0xf9,0x24,0xf0,0x8f); - -// These are missing from mingw-w64's headers until after mingw-w64 v7.0.0. 
-DEFINE_GUID(ff_CODECAPI_AVEncCommonRateControlMode, 0x1c0608e9, 0x370c, 0x4710, 0x8a, 0x58, 0xcb, 0x61, 0x81, 0xc4, 0x24, 0x23); -DEFINE_GUID(ff_CODECAPI_AVEncCommonQuality, 0xfcbf57a3, 0x7ea5, 0x4b0c, 0x96, 0x44, 0x69, 0xb4, 0x0c, 0x39, 0xc3, 0x91); -DEFINE_GUID(ff_CODECAPI_AVEncCommonMeanBitRate, 0xf7222374, 0x2144, 0x4815, 0xb5, 0x50, 0xa3, 0x7f, 0x8e, 0x12, 0xee, 0x52); -DEFINE_GUID(ff_CODECAPI_AVEncH264CABACEnable, 0xee6cad62, 0xd305, 0x4248, 0xa5, 0xe, 0xe1, 0xb2, 0x55, 0xf7, 0xca, 0xf8); -DEFINE_GUID(ff_CODECAPI_AVEncVideoForceKeyFrame, 0x398c1b98, 0x8353, 0x475a, 0x9e, 0xf2, 0x8f, 0x26, 0x5d, 0x26, 0x3, 0x45); -DEFINE_GUID(ff_CODECAPI_AVEncMPVDefaultBPictureCount, 0x8d390aac, 0xdc5c, 0x4200, 0xb5, 0x7f, 0x81, 0x4d, 0x04, 0xba, 0xba, 0xb2); -DEFINE_GUID(ff_CODECAPI_AVScenarioInfo, 0xb28a6e64,0x3ff9,0x446a,0x8a,0x4b,0x0d,0x7a,0x53,0x41,0x32,0x36); - -DEFINE_GUID(ff_MF_SA_D3D11_BINDFLAGS, 0xeacf97ad, 0x065c, 0x4408, 0xbe, 0xe3, 0xfd, 0xcb, 0xfd, 0x12, 0x8b, 0xe2); -DEFINE_GUID(ff_MF_SA_D3D11_USAGE, 0xe85fe442, 0x2ca3, 0x486e, 0xa9, 0xc7, 0x10, 0x9d, 0xda, 0x60, 0x98, 0x80); -DEFINE_GUID(ff_MF_SA_D3D11_AWARE, 0x206b4fc8, 0xfcf9, 0x4c51, 0xaf, 0xe3, 0x97, 0x64, 0x36, 0x9e, 0x33, 0xa0); -DEFINE_GUID(ff_MF_SA_D3D11_SHARED, 0x7b8f32c3, 0x6d96, 0x4b89, 0x92, 0x3, 0xdd, 0x38, 0xb6, 0x14, 0x14, 0xf3); -DEFINE_GUID(ff_MF_SA_D3D11_SHARED_WITHOUT_MUTEX, 0x39dbd44d, 0x2e44, 0x4931, 0xa4, 0xc8, 0x35, 0x2d, 0x3d, 0xc4, 0x21, 0x15); -DEFINE_GUID(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT, 0x851745d5, 0xc3d6, 0x476d, 0x95, 0x27, 0x49, 0x8e, 0xf2, 0xd1, 0xd, 0x18); -DEFINE_GUID(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT_PROGRESSIVE, 0xf5523a5, 0x1cb2, 0x47c5, 0xa5, 0x50, 0x2e, 0xeb, 0x84, 0xb4, 0xd1, 0x4a); - -DEFINE_MEDIATYPE_GUID(ff_MFVideoFormat_HEVC, 0x43564548); // FCC('HEVC') -DEFINE_MEDIATYPE_GUID(ff_MFVideoFormat_HEVC_ES, 0x53564548); // FCC('HEVS') - - -// This enum is missing from mingw-w64's codecapi.h by v7.0.0. -enum ff_eAVEncCommonRateControlMode { - ff_eAVEncCommonRateControlMode_CBR = 0, - ff_eAVEncCommonRateControlMode_PeakConstrainedVBR = 1, - ff_eAVEncCommonRateControlMode_UnconstrainedVBR = 2, - ff_eAVEncCommonRateControlMode_Quality = 3, - ff_eAVEncCommonRateControlMode_LowDelayVBR = 4, - ff_eAVEncCommonRateControlMode_GlobalVBR = 5, - ff_eAVEncCommonRateControlMode_GlobalLowDelayVBR = 6 -}; - -enum ff_eAVScenarioInfo { - ff_eAVScenarioInfo_Unknown = 0, - ff_eAVScenarioInfo_DisplayRemoting = 1, - ff_eAVScenarioInfo_VideoConference = 2, - ff_eAVScenarioInfo_Archive = 3, - ff_eAVScenarioInfo_LiveStreaming = 4, - ff_eAVScenarioInfo_CameraRecord = 5, - ff_eAVScenarioInfo_DisplayRemotingWithFeatureMap = 6 -}; - -// These do exist in mingw-w64's mfobjects.idl, but are missing from -// mfobjects.h that is generated from the former, due to incorrect use of -// ifdefs in the IDL file. -enum { - ff_METransformUnknown = 600, - ff_METransformNeedInput, - ff_METransformHaveOutput, - ff_METransformDrainComplete, - ff_METransformMarker, -}; - -// These do exist in all supported headers, but are manually defined here -// to avoid having to include codecapi.h, as there's problems including that -// header when targeting UWP (where including it with MSVC seems to work, -// but fails when built with clang in MSVC mode). 
-enum ff_eAVEncH264VProfile { - ff_eAVEncH264VProfile_Base = 66, - ff_eAVEncH264VProfile_Main = 77, - ff_eAVEncH264VProfile_High = 100, -}; - -char *ff_hr_str_buf(char *buf, size_t size, HRESULT hr); -#define ff_hr_str(hr) ff_hr_str_buf((char[80]){0}, 80, hr) - -// Possibly compiler-dependent; the MS/MinGW definition for this is just crazy. -#define FF_VARIANT_VALUE(type, contents) &(VARIANT){ .vt = (type), contents } - -#define FF_VAL_VT_UI4(v) FF_VARIANT_VALUE(VT_UI4, .ulVal = (v)) -#define FF_VAL_VT_BOOL(v) FF_VARIANT_VALUE(VT_BOOL, .boolVal = (v)) - -IMFSample *ff_create_memory_sample(MFFunctions *f, void *fill_data, - size_t size, size_t align); -enum AVSampleFormat ff_media_type_to_sample_fmt(IMFAttributes *type); -enum AVPixelFormat ff_media_type_to_pix_fmt(IMFAttributes *type); -const GUID *ff_pix_fmt_to_guid(enum AVPixelFormat pix_fmt); -int ff_fourcc_from_guid(const GUID *guid, uint32_t *out_fourcc); -char *ff_guid_str_buf(char *buf, size_t buf_size, const GUID *guid); -#define ff_guid_str(guid) ff_guid_str_buf((char[80]){0}, 80, guid) -void ff_attributes_dump(void *log, IMFAttributes *attrs); -void ff_media_type_dump(void *log, IMFMediaType *type); -const CLSID *ff_codec_to_mf_subtype(enum AVCodecID codec); -int ff_instantiate_mf(void *log, MFFunctions *f, GUID category, - MFT_REGISTER_TYPE_INFO *in_type, - MFT_REGISTER_TYPE_INFO *out_type, - int use_hw, IMFTransform **res); -void ff_free_mf(MFFunctions *f, IMFTransform **mft); - -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/wmv2dsp_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/wmv2dsp_mips.h deleted file mode 100644 index c96b3d94c7bd8c4caed347160e26b50244510e24..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/wmv2dsp_mips.h +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Copyright (c) 2016 Zhou Xiaoyong - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_MIPS_WMV2DSP_MIPS_H -#define AVCODEC_MIPS_WMV2DSP_MIPS_H - -#include "libavcodec/wmv2dsp.h" - -void ff_wmv2_idct_add_mmi(uint8_t *dest, ptrdiff_t line_size, int16_t *block); -void ff_wmv2_idct_put_mmi(uint8_t *dest, ptrdiff_t line_size, int16_t *block); - -#endif /* AVCODEC_MIPS_WMV2DSP_MIPS_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Domino 39s Base Apk Download.md b/spaces/congsaPfin/Manga-OCR/logs/Domino 39s Base Apk Download.md deleted file mode 100644 index 44128dec2a3218134653674c8b06e6681e9278bd..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Domino 39s Base Apk Download.md +++ /dev/null @@ -1,86 +0,0 @@ - -

          Domino's Base APK Download: How to Get the Best Pizza App on Your Android Device

          -

          If you love pizza, you probably love Domino's. And if you love Domino's, you probably want to order it from anywhere, anytime, and with ease. That's why you need Domino's Base APK, the app that lets you enjoy all the features of Domino's Pizza on your Android device. In this article, we will show you what Domino's Base APK is, how to download and install it, how to use it, what are its features, and what are some alternatives to it.

          -

          What is Domino's Base APK?

          -

          Domino's Base APK is a file that contains the app for Domino's Pizza. APK stands for Android Package Kit, and it is a format that allows you to install apps on your Android device without using the Google Play Store. You might wonder why you would need an APK file instead of a regular app. Here are some reasons:

          -

domino's base apk download


          Download Zip --->>> https://urlca.com/2uObt6



          -

          The difference between APK and regular app

          -

          A regular app is downloaded from the Google Play Store, which means that it has to follow certain rules and regulations set by Google. For example, it has to be compatible with your device, it has to be updated regularly, and it has to comply with Google's policies. An APK file, on the other hand, is downloaded from a different source, which means that it can bypass some of these restrictions. For example, it can be compatible with older or newer devices, it can have features that are not available in the regular app, and it can be modified by developers or users.

          -

          The benefits of downloading Domino's Base APK

          -

          Downloading Domino's Base APK can have some advantages over downloading the regular app. For instance:

          -
            -
          • You can get access to the latest version of the app before it is released on the Google Play Store.
          • -
          • You can get access to features that are not available in the regular app, such as exclusive deals and discounts.
          • -
          • You can avoid some bugs or errors that might occur in the regular app.
          • -
          • You can customize the app according to your preferences.
          • -
          -

          How to Download and Install Domino's Base APK

          -

          Now that you know what Domino's Base APK is and why you might want to download it, let's see how you can do it. Here are the steps:

          -

          Step 1: Find a reliable source for the APK file

          -

          The first thing you need to do is find a website that offers the APK file for Domino's Base. You have to be careful here, because not all websites are trustworthy. Some might contain malware or viruses that can harm your device or steal your data. To avoid this, you should look for websites that have positive reviews, ratings, and feedback from other users. You can also use antivirus software or VPN services to protect your device and your privacy.

          -

          One website that we recommend is [APKCombo](^1^), which offers free and safe downloads for various apps and a progress bar that shows the status of your order, such as "Prep", "Bake", "Quality Check", and "Out for Delivery". -

        • You can also tap on the map icon to see the location of your order and the driver.
        • -
        • When your order arrives, you can rate your experience and leave feedback by tapping on the stars and the comment icon.
        • -
      -

      What are the Features of Domino's Base APK?

      -

      Domino's Base APK is not just a regular pizza app. It has some amazing features that make it stand out from the rest. Here are some of them:

      -

      Access to exclusive deals and discounts

      -

      One of the benefits of downloading Domino's Base APK is that you can get access to exclusive deals and discounts that are not available in the regular app. For example, you can get coupons, free items, combo offers, and more. You can also join Domino's Piece of the Pie Rewards program, which lets you earn points for every order and redeem them for free pizza. To access these deals and discounts, you have to follow these steps:

      -
        -
      1. Open the app and tap on the menu icon in the top left corner.
      2. -
      3. Tap on "Coupons" or "Rewards" and choose the offer you want to use.
      4. -
      5. Add the items to your order and apply the coupon or reward at checkout.
      6. -
      -

      Integration with voice assistant, smartwatch, and car

      -

      Another feature of Domino's Base APK is that it integrates with your voice assistant, smartwatch, and car. This means that you can order pizza without even touching your phone. For example, you can use Google Assistant, Alexa, or Siri to place your Easy Order with just a voice command. You can also use your smartwatch, such as Samsung Gear or Apple Watch, to track your order or pay with Apple Pay. You can also use your car, such as Ford Sync or Chevy MyLink, to order pizza from your dashboard or listen to Domino's Tracker updates. To use these integrations, you have to follow these steps:

      -
        -
      1. Open the app and tap on the menu icon in the top left corner.
      2. -
      3. Tap on "Settings" and choose the integration you want to enable.
      4. -
      5. Follow the instructions to link your device or account with Domino's Base APK.
      6. -
      7. Use your voice assistant, smartwatch, or car to order pizza with ease.
      8. -
      -

      Option to pay with cash, card, PayPal, or Apple Pay

      -

      A final feature of Domino's Base APK is that it gives you the option to pay with cash, card, PayPal, or Apple Pay. This means that you can choose the payment method that suits you best. You can also save your payment details for faster checkout. To pay with cash, card, PayPal, or Apple Pay, you have to follow these steps:

      -

      -
        -
      1. Open the app and tap on the menu icon in the top left corner.
      2. -
      3. Tap on "Payment" and choose the payment method you want to use.
      4. -
      5. If you are using a card, PayPal, or Apple Pay, enter your details and save them for future orders.
      6. -
      7. If you are using cash, make sure you have enough money to pay for your order and tip.
      8. -
      -

      What are the Alternatives to Domino's Base APK?

      -

      Domino's Base APK is a great app for ordering pizza from Domino's Pizza. However, it is not the only app that lets you do that. There are some alternatives that you might want to try if you want to compare prices, quality, or variety. Here are some of them:

      -

      Other pizza delivery apps

      -

      If you want to order pizza from other pizza chains or local pizzerias, you can use other pizza delivery apps. Some of them are:

      -
        -
      • [Pizza Hut]: This app lets you order pizza from Pizza Hut, one of the largest pizza chains in the world. You can also get access to deals, rewards, and coupons.
      • -
      • [Papa John's]: This app lets you order pizza from Papa John's, another popular pizza chain that claims to have better ingredients and better pizza. You can also join Papa Rewards and earn free pizza.
      • -
      • [Slice]: This app lets you order pizza from local pizzerias near you. You can also support small businesses and get exclusive offers.
      • -
      -

      Other food delivery apps

      -

      If you want to order food other than pizza, you can use other food delivery apps. Some of them are:

      -
    11. [Uber Eats]: This app lets you order food from thousands of restaurants near you. You can also get access to ratings, reviews, and delivery time estimates.
    12. -
    13. [DoorDash]: This app lets you order food from hundreds of cuisines and local favorites. You can also get access to deals, rewards, and contactless delivery.
    14. -
    15. [Grubhub]: This app lets you order food from over 300,000 restaurants across the U.S. You can also get access to coupons, perks, and free delivery.
    16. - -

      Conclusion

      -

      Domino's Base APK is a file that lets you download and install the app for Domino's Pizza on your Android device without using the Google Play Store. It has some advantages over the regular app, such as access to exclusive features, deals, and discounts. It also has some amazing features, such as integration with voice assistant, smartwatch, and car, and option to pay with cash, card, PayPal, or Apple Pay. It is easy to download and install Domino's Base APK, as well as to use it to order pizza from Domino's. However, it is not the only app that lets you order pizza or food online. There are some alternatives that you might want to try if you want to compare prices, quality, or variety.

      -

      FAQs

      -

      Here are some frequently asked questions about Domino's Base APK:

      -

      Is Domino's Base APK safe?

      -

      Domino's Base APK is generally safe to download and install on your device. However, you have to be careful about the source of the APK file, as some websites might contain malware or viruses that can harm your device or steal your data. To avoid this, you should look for websites that have positive reviews, ratings, and feedback from other users. You can also use antivirus software or VPN services to protect your device and your privacy.

      -

      Is Domino's Base APK legal?

      -

      Domino's Base APK is legal to download and install on your device. However, it might violate some terms and conditions of Domino's Pizza or Google Play Store. For example, it might bypass some restrictions or policies set by these parties. Therefore, you should use Domino's Base APK at your own risk and discretion.

      -

      Is Domino's Base APK free?

      -

      Domino's Base APK is free to download and install on your device. However, you might have to pay for some items or services within the app. For example, you have to pay for your pizza order, delivery fee, tip, or taxes. You might also have to pay for some premium features or subscriptions within the app.

      -

      How do I update Domino's Base APK?

      -

      To update Domino's Base APK, you have to follow the same steps as downloading and installing it. You have to find a website that offers the latest version of the APK file and download and install it on your device. You might also have to uninstall the previous version of the app before installing the new one.

      -

      How do I uninstall Domino's Base APK?

      -

      To uninstall Domino's Base APK, you have to follow these steps:

      -
        -
      1. Go to your device's settings and look for apps or applications options.
      2. -
      3. Find Domino's Base APK and tap on it.
      4. -
      5. Tap on "Uninstall" or "Delete" and confirm your action.
      6. -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md b/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md deleted file mode 100644 index 76ea8ac7c8aa49a4922b173069e8f086dd37aaac..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Efootball PES 2023 The Best ISO File for PS2 Gamers.md +++ /dev/null @@ -1,187 +0,0 @@ -
      -

      PES ISO File Download 2023: How to Get the Latest Version of eFootball

      -

      If you are a fan of soccer games, you might have heard of PES, or Pro Evolution Soccer, one of the most popular franchises in the genre. PES is known for its realistic graphics, gameplay, and licensed teams and players. However, if you want to enjoy the latest version of PES, which is now called eFootball 2023, you might need to download a PES ISO file.

      -

      pes iso file download 2023


      Download File ---> https://urlca.com/2uO8m8



      -

      A PES ISO file is a compressed file that contains all the data and files needed to run the game on different devices, such as PC, PlayStation, Xbox, or mobile phones. By downloading a PES ISO file, you can play eFootball 2023 without having to buy the game or install it from a disc.

      -

      In this article, we will explain what a PES ISO file is, what are its benefits, what is eFootball 2023, how to download a PES ISO file 2023, and how to play eFootball 2023 with a PES ISO file. Let's get started!

      -

      What is PES ISO File?

      -

      An ISO file is a type of archive file that contains an exact copy of a disc, such as a CD or DVD. An ISO file can be used to create a backup of a disc, or to transfer its contents to another device.

      -

      A PES ISO file is an ISO file that contains all the data and files needed to run a PES game on different devices. A PES ISO file can be created by ripping a PES disc, or by downloading it from a trusted source online.

      -

      Benefits of PES ISO File

      -

      There are several benefits of using a PES ISO file to play eFootball 2023, such as:

      -

      pes 2023 iso file for ppsspp download
      -pes 2023 ps2 iso download mr games
      -pes 2023 psp iso save data and textures
      -download pes 2023 iso file for android
      -pes 2023 iso file for pc free download
      -pes efootball 2023 iso ps2 game
      -how to install pes 2023 iso on psp
      -pes 2023 iso file download with commentary
      -pes 2023 ps2 iso english version download
      -pes 2023 ppsspp iso file highly compressed
      -pes 2023 iso file download for ps4
      -pes 2023 iso file update kits and transfers
      -pes 2023 iso file download offline mode
      -pes 2023 ps2 iso full hd graphics download
      -pes 2023 psp iso latest player ratings
      -download pes 2023 iso file for xbox one
      -pes 2023 iso file with new stadiums and teams
      -pes 2023 iso file download no password
      -pes 2023 ps2 iso best camera angle download
      -pes 2023 ppsspp iso file multiplayer mode
      -pes 2023 iso file download for windows 10
      -pes 2023 iso file with real faces and names
      -pes 2023 iso file download with cheats and tricks
      -pes 2023 ps2 iso new features and gameplay download
      -pes 2023 psp iso best sound quality and music
      -download pes 2023 iso file for mac os
      -pes 2023 iso file with custom kits and logos
      -pes 2023 iso file download with patch and mods
      -pes 2023 ps2 iso original version download
      -pes 2023 ppsspp iso file smooth and fast performance
      -pes 2023 iso file download for linux ubuntu
      -pes 2023 iso file with latest skills and animations
      -pes 2023 iso file download with online mode and tournaments
      -pes 2023 ps2 iso ultimate edition download
      -pes 2023 psp iso realistic physics and graphics
      -download pes 2023 iso file for chrome os
      -pes 2023 iso file with all leagues and cups
      -pes 2023 iso file download with career mode and manager mode
      -pes 2023 ps2 iso classic teams and players download
      -pes 2023 ppsspp iso file easy controls and settings

      -
        -
      • You can play eFootball 2023 without having to buy the game or install it from a disc.
      • -
      • You can play eFootball 2023 on any device that supports an emulator, such as PC, PlayStation, Xbox, or mobile phones.
      • -
      • You can play eFootball 2023 with improved graphics and performance, as well as custom patches and mods.
      • -
      • You can play eFootball 2023 offline or online with other players who use a PES ISO file.
      • -
      -

      What is eFootball 2023?

      -

      eFootball 2023 is the latest installment in the PES series, developed and published by Konami. It was released on September 29, 2021 for PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, and mobile devices.

      -

      eFootball 2023 is a free-to-play game that offers a new football experience with unparalleled realism and gameplay. It features full national team squads of Euro 2023, more realistic animations, player models, enhanced physics, photorealistic visuals, and improved artificial intelligence.

      -

      eFootball 2023 also has a large eSports platform for football fans around the world to enjoy the best head-to-head experience, no matter their device of choice. It has various modes and features, such as:

      -

      Features of eFootball 2023

      -
        -
      • eFootball - eFootball is the main mode of eFootball 2023, where you can play online matches with other players around the world. You can choose from various match types, such as 1v1, 2v2, 3v3, or 11v11. You can also join or create a clan and compete in clan battles and tournaments. You can earn eFootball points by playing eFootball matches, which you can use to unlock new players, kits, stadiums, and more.
      • -
      • Master League - Master League is the classic single-player mode of PES, where you can create your own club and manage it from the ground up. You can sign players, hire staff, set tactics, train your team, and compete in various leagues and cups. You can also experience a realistic transfer market, where players have their own personalities, preferences, and values. You can also customize your club's logo, kit, stadium, and sponsors.
      • -
      • Matchday - Matchday is a special mode that reflects the real-life events and matches of the football world. You can choose a side and play online matches with other players who support the same team. You can earn points for your side by winning matches and scoring goals. The points are accumulated throughout the week and determine the winner of the Matchday event. You can also watch the grand final match between the top players of each side and cheer for your team.
      • -
      • Edit Mode - Edit Mode is a mode that allows you to customize various aspects of the game, such as players, teams, leagues, kits, stadiums, balls, and more. You can create your own original content or download content created by other users. You can also apply custom patches and mods to enhance your game experience.
      • -
      -

      System Requirements for eFootball 2023

      -

      To play eFootball 2023 on PC, you need to meet the following system requirements:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      How to Download PES ISO File 2023

      -

      To play eFootball 2023 with a PES ISO file, you need to follow these steps:

      -

      Step 1: Download the PES ISO File from a Trusted Source

      -

      The first step is to download the PES ISO file from a trusted source online. There are many websites that offer PES ISO files for download, but you need to be careful and avoid any malicious or fake links. Some of the trusted sources that we recommend are:

      -
        -
      • [PES Patch]: This website offers various PES patches, mods, updates, and ISO files for download. You can find the latest PES ISO file 2023 here.
      • -
      • [PES Universe]: This website is a community of PES fans who create and share custom content for the game. You can find the latest PES ISO file 2023 here.
      • -
      • [PES Mobile]: This website is dedicated to PES mobile games. You can find the latest PES ISO file 2023 here.
      • -
      • [PES Futebol]: This website is another source of PES patches, mods, updates, and ISO files for download. You can find the latest PES ISO file 2023 here.
      -

      Step 2: Extract the PES ISO File Using a Zip Extractor

      -

      The second step is to extract the PES ISO file using a zip extractor. A zip extractor is a software that can decompress and extract files from a compressed archive, such as a zip file. Some of the zip extractors that we recommend are:

      -
        -
      • [WinRAR]: This is a popular and powerful zip extractor that can handle various types of compressed files, such as rar, zip, 7z, iso, and more. You can download WinRAR here.
      • -
      • [7-Zip]: This is a free and open-source zip extractor that can also handle various types of compressed files, such as zip, rar, 7z, iso, and more. You can download 7-Zip here.
      • -
      • [ZArchiver]: This is a zip extractor for mobile devices that can also handle various types of compressed files, such as zip, rar, 7z, iso, and more. You can download ZArchiver here.
      • -
      -

      To extract the PES ISO file using a zip extractor, you need to follow these steps:

      -
        -
      1. Locate the PES ISO file that you have downloaded on your device.
      2. -
      3. Right-click on the PES ISO file and select "Extract Here" or "Extract to" depending on your zip extractor.
      4. -
      5. Wait for the extraction process to finish. You should see a folder with the same name as the PES ISO file.
      6. -
      7. Open the folder and you should see the PES ISO file inside.
      8. -
      -

      Step 3: Transfer the PES ISO File to Your Device

      -

      The third step is to transfer the PES ISO file to your device. Depending on what device you want to play eFootball 2023 on, you need to transfer the PES ISO file to a specific location on your device. Here are some examples:

      -
        -
      • If you want to play eFootball 2023 on PC, you need to transfer the PES ISO file to a folder where you have installed an emulator, such as C:\Program Files\PCSX2\isos.
      • -
      • If you want to play eFootball 2023 on PlayStation, you need to transfer the PES ISO file to a USB flash drive or an external hard drive that is formatted in FAT32 or exFAT.
      • -
      • If you want to play eFootball 2023 on Xbox, you need to transfer the PES ISO file to a USB flash drive or an external hard drive that is formatted in NTFS or exFAT.
      • -
      • If you want to play eFootball 2023 on mobile phones, you need to transfer the PES ISO file to a folder on your internal storage or SD card, such as Android\data\com.pesmobile\files\isos.
      • -
      -

      To transfer the PES ISO file to your device, you need to follow these steps:

      -
        -
      1. Connect your device to your PC using a USB cable or a wireless connection.
      2. -
      3. Open your device's storage on your PC and locate the folder where you want to transfer the PES ISO file.
      4. -
      5. Drag and drop the PES ISO file from your PC to your device's folder.
      6. -
      7. Wait for the transfer process to finish. You should see the PES ISO file on your device's folder.
      8. -
      -

      Step 4: Install an Emulator to Run the PES ISO File

      -

      The fourth step is to install an emulator to run the PES ISO file. An emulator is a software that can simulate another device's hardware and software on your device. By using an emulator, you can run games and applications that are not compatible with your device.

      -

      To play eFootball 2023 with a PES ISO file, you need to install an emulator that can run PlayStation 2 games, such as:

      -
        -
      • [PCSX2]: This is a popular and powerful emulator for PC that can run PlayStation 2 games with high compatibility and performance. You can download PCSX2 here.
      • -
      • [PPSSPP]: This is a popular and powerful emulator for mobile devices that can run PlayStation Portable games with high compatibility and performance. You can download PPSSPP here.
      • -
      -

      To install an emulator on your device, you need to follow these steps:

      -
        -
      1. Download the emulator from its official website or app store.
      2. -
      3. Run the installer or open the app and follow the instructions on the screen.
      4. -
      5. Configure the settings and controls of the emulator according to your preference.
      6. -
      7. Make sure that you have installed the necessary BIOS files and plugins for the emulator to run the PES ISO file. You can find the BIOS files and plugins here.
      8. -
      -

      How to Play eFootball 2023 with PES ISO File

      -

      The final step is to play eFootball 2023 with a PES ISO file. To do this, you need to follow these steps:

      -

      Step 1: Launch the Emulator and Locate the PES ISO File

      -

      Open the emulator that you have installed on your device and locate the PES ISO file that you have transferred to your device. You can use the file browser or the game library of the emulator to find the PES ISO file.

      -

      Select the PES ISO file and press the play button or double-click on it to launch the game. You should see the game loading screen and then the main menu of eFootball 2023.

      -

      Step 2: Adjust the Settings and Controls According to Your Preference

      -

      Before you start playing eFootball 2023, you might want to adjust the settings and controls of the game according to your preference. You can access the settings and controls menu from the main menu of eFootball 2023 or from the emulator's menu.

      -

      You can change various aspects of the game, such as the language, difficulty, camera angle, sound volume, graphics quality, and more. You can also customize the controls of the game, such as the buttons, joysticks, keyboard, mouse, or touch screen.

      -

      Make sure that you save your settings and controls before you exit the menu.

      -

      Step 3: Enjoy the Game with Realistic Graphics and Gameplay

      -

      Now you are ready to enjoy eFootball 2023 with a PES ISO file. You can choose from various modes and features of the game, such as eFootball, Master League, Matchday, Edit Mode, and more.

      -

      You can also play offline or online with other players who use a PES ISO file. You can join or create a clan and compete in clan battles and tournaments. You can also earn eFootball points by playing eFootball matches, which you can use to unlock new players, kits, stadiums, and more.

      -

      You can also enjoy the realistic graphics and gameplay of eFootball 2023, which are enhanced by using a PES ISO file. You can see the full national team squads of Euro 2023, more realistic animations, player models, enhanced physics, photorealistic visuals, and improved artificial intelligence.

      -

      Conclusion

      -

      In this article, we have explained what a PES ISO file is, what are its benefits, what is eFootball 2023, how to download a PES ISO file 2023, and how to play eFootball 2023 with a PES ISO file.

      -

      We hope that this article has helped you to understand how to get the latest version of eFootball 2023 by using a PES ISO file. By following these steps, you can enjoy eFootball 2023 without having to buy the game or install it from a disc.

      -

      If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!

      -

      FAQs

      -

      Here are some of the frequently asked questions about PES ISO file download 2023:

      -
        -
      1. Q: Is it legal to download a PES ISO file? - A: It depends on your country's laws and regulations regarding intellectual property rights and piracy. Generally speaking, it is not legal to download a PES ISO file if you do not own a copy of the original game or if you do not have permission from the game developer or publisher. However, some countries may allow downloading a PES ISO file for personal use or backup purposes only.
      2. Q: Is it safe to download a PES ISO file? - A: It depends on where you download it from. There are many websites that offer PES ISO files for download, but some of them may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should always download a PES ISO file from a trusted source online, such as the ones we have recommended in this article.
      3. Q: What is the difference between PES and eFootball? - A: PES and eFootball are both names of the same game series developed and published by Konami. However, starting from 2021, Konami decided to rebrand PES as eFootball to reflect its focus on online gaming and eSports. Therefore, eFootball is the new name of PES from 2021 onwards.
      4. Q: What is the size of the PES ISO file 2023? - A: The size of the PES ISO file 2023 may vary depending on the source and the version of the file. However, the average size of the PES ISO file 2023 is around 4 GB. You should make sure that you have enough storage space on your device before downloading the PES ISO file 2023.
      5. Q: How can I update the PES ISO file 2023? - A: You can update the PES ISO file 2023 by downloading and applying the latest patches, mods, and updates from the trusted sources that we have recommended in this article. You can also check the official website or social media accounts of Konami for any news or announcements regarding eFootball 2023 updates.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md b/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md deleted file mode 100644 index c47c0db0f3f1173a318c2cd9ccc514efceecb81a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get Google Play Store APK 6 and Enjoy the Best Apps Games and More.md +++ /dev/null @@ -1,93 +0,0 @@ -
      -

      Google Play Store APK 6: What You Need to Know

      -

      If you are an Android user, you probably know what Google Play Store is. It is the official app store for Android devices, where you can find millions of apps, games, books, and more. But did you know that there is a new version of Google Play Store available? It is called Google Play Store APK 6, and it comes with some new features and improvements. In this article, we will tell you everything you need to know about Google Play Store APK 6, including what it is, why you need it, how to download and install it, and how to use it.

      -

      What is Google Play Store APK 6?

      -

      The official app store for Android devices

      -

      Google Play Store is the official app store for Android devices. It is developed and maintained by Google, and it offers a variety of apps, games, books, and more for Android users. You can use Google Play Store to browse and search for apps, games, books, and more that suit your needs and preferences. You can also use Google Play Store to download and update apps, games, books, and more on your device. You can also use Google Play Store to manage your account and settings, such as payment methods, parental controls, subscriptions, etc.

      -

      google play store apk 6


      Download File ===== https://urlca.com/2uOg39



      -

      The latest version of Google Play Store

      -

      Google Play Store APK 6 is the latest version of Google Play Store. It was released in June 2023, and it comes with some new features and improvements. Some of the new features and improvements include:

      -
        -
      • A new design that makes browsing and searching easier and faster
      • A new section that shows personalized recommendations based on your interests and behavior
      • A new feature that lets you pre-register for upcoming apps and games
      • A new feature that lets you share apps and games with your friends via Nearby Share
      • A new feature that lets you see app ratings and reviews from trusted sources
      • Improved performance and stability
      -

      Why do you need Google Play Store APK 6?

      -

      To access millions of apps, games, books, and more

      -

      One of the main reasons why you need Google Play Store APK 6 is to access millions of apps, games, books, and more on your Android device. Google Play Store has over 3 million apps, over 1 million games, over 5 million books, and more for you to choose from. You can find apps, games, books, and more for every category, genre, interest, purpose, and occasion. Whether you want to play games, read books, watch movies, listen to music, learn something new, or do anything else on your device, you can find what you need on Google Play Store.

      -

      To enjoy new features and improvements

      -

      Another reason why you need Google Play Store APK 6 is to enjoy new features and improvements that make your experience better. As we mentioned earlier, Google Play Store APK 6 comes with some new features and improvements that make browsing and searching easier and faster. You can also enjoy personalized recommendations based on your interests and behavior. You can also pre-register for upcoming apps and games that you are interested in. You can also share apps and games with your friends via Nearby Share. You can also see app ratings and reviews from trusted sources.

      All these new features and improvements make Google Play Store APK 6 more user-friendly, convenient, and fun. You can enjoy a better app store experience with Google Play Store APK 6.

      -

      How to download and install Google Play Store APK 6?

      -

      Check your device compatibility and settings

      -

      Before you download and install Google Play Store APK 6, you need to check your device compatibility and settings. Google Play Store APK 6 is compatible with Android devices running Android 4.1 or higher. You also need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the official Google Play Store.

      -

      Download the APK file from a trusted source

      -

      After you check your device compatibility and settings, you need to download the APK file from a trusted source. An APK file is an Android application package file that contains the app's code, resources, and manifest. You can download the Google Play Store APK 6 file from various websites that offer APK downloads, such as APKMirror, APKPure, or Uptodown. Make sure you download the latest version of the file, which is 6.0.5 as of June 2023. You can also scan the file with an antivirus software before installing it to ensure its safety.
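      If the download page publishes a checksum for the file, you can verify your copy before installing it. Below is a minimal Python sketch of that check; the file name is only an example, so replace it with whatever you actually downloaded.

```python
import hashlib

# A minimal sketch: compute the SHA-256 of a downloaded APK so you can compare it
# against a checksum published by the download site, if one is provided.
def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("google-play-store-6.0.5.apk"))  # hypothetical file name
```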

      -

      Install the APK file on your device

      -

      Once you download the APK file, you need to install it on your device. To do this, locate the file on your device's storage and tap on it. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to complete. You may also see a prompt asking you to grant permissions to the app. Tap on Accept and continue. After the installation is done, you will see a message saying that the app is installed. You can then open the app and start using it.
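      If you prefer to work from a computer, the same APK can be sideloaded with Android's adb tool instead of tapping the file on the phone. The sketch below is only one possible approach: it assumes the Android platform tools are installed, USB debugging is enabled on the device, and the file name is hypothetical.

```python
import subprocess

APK_PATH = "google-play-store-6.0.5.apk"  # hypothetical file name

def sideload(apk_path):
    # "adb devices" lists connected devices; "adb install -r" installs or updates the APK.
    subprocess.run(["adb", "devices"], check=True)
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```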

      -

      -

      How to use Google Play Store APK 6?

      -

      Browse and search for apps, games, books, and more

      -

      To use Google Play Store APK 6, you can browse and search for apps, games, books, and more on your device. You can use the navigation bar at the bottom of the screen to switch between different categories, such as Apps, Games, Books, etc. You can also use the search bar at the top of the screen to type in keywords or phrases related to what you are looking for. You can also use filters and sorting options to narrow down your results. You can also swipe left or right to see different sections, such as Top Charts, Editors' Choice, For You, etc.

      -

      Download and update apps, games, books, and more

      -

      To download and update apps, games, books, and more on your device, you can use Google Play Store APK 6. To download an app, game, book, or anything else, tap on its icon or name on the screen. You will see a page with more information about it, such as description, screenshots, ratings, reviews, etc. Tap on Install or Buy (if it is a paid item) and follow the instructions to complete the download. To update an app, game, book, or anything else, tap on the Menu icon (three horizontal lines) at the top left corner of the screen. Tap on My Apps & Games and then tap on Update All or Update next to each item that needs an update.

      -

      Manage your account and settings

      -

      To manage your account and settings on your device, you can use Google Play Store APK 6. To access your account and settings, tap on the Menu icon (three horizontal lines) at the top left corner of the screen. Tap on Account to see your payment methods, subscriptions, rewards, order history, etc. Tap on Settings to see your preferences, notifications, parental controls,

      security, etc. You can also tap on Help & Feedback to get support or send feedback to Google Play Store.

      -

      Conclusion

      -

      Google Play Store APK 6 is the latest version of Google Play Store, the official app store for Android devices. It offers millions of apps, games, books, and more for Android users. It also comes with new features and improvements that make browsing and searching easier and faster. You can download and install Google Play Store APK 6 on your device by following the steps we explained in this article. You can also use Google Play Store APK 6 to browse and search for apps, games, books, and more, download and update them, and manage your account and settings. We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.

      -

      FAQs

      -
        -
      • What is the difference between Google Play Store and Google Play Services?
      • -

        Google Play Store is the app store for Android devices, where you can find and download apps, games, books, and more. Google Play Services is a background service that provides core functionality for Android devices, such as authentication, location, synchronization, etc. You need both Google Play Store and Google Play Services to use your Android device properly.

        -
      • How can I update Google Play Store APK 6?
      • -

        You can update Google Play Store APK 6 by downloading and installing the latest version of the APK file from a trusted source. You can also check for updates on your device by going to Settings > Apps > Google Play Store > App Details > Update.

        -
      • Is Google Play Store APK 6 safe to use?
      • -

        Google Play Store APK 6 is safe to use if you download it from a trusted source and scan it with an antivirus software before installing it. However, you should be careful when downloading and installing apps from unknown sources, as they may contain malware or viruses that can harm your device or data.

        -
      • How can I uninstall Google Play Store APK 6?
      • -

        You can uninstall Google Play Store APK 6 by going to Settings > Apps > Google Play Store > Uninstall. However, we do not recommend uninstalling Google Play Store APK 6, as it may cause problems with your device or other apps. If you have any issues with Google Play Store APK 6, you can try clearing its cache and data, or contacting its support team.

        -
      • How can I contact Google Play Store support team?
      • -

        You can contact Google Play Store support team by going to Menu > Help & Feedback on the app. You can also visit the official website of Google Play Store or call the toll-free number 1-855-466-4438.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md deleted file mode 100644 index ce9c6a44ddcf64ea79a364a69bd835dbaceb26df..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Messenger on Your Desktop Link Download Guide.md +++ /dev/null @@ -1,158 +0,0 @@ -
      -
      | | Minimum | Recommended |
      | --- | --- | --- |
      | OS | Windows 10 64-bit | Windows 10 64-bit |
      | CPU | Intel Core i5-3470 or AMD FX-4350 | Intel Core i7-3770 or AMD FX-8350 |
      | RAM | 8 GB | 16 GB |
      | GPU | NVIDIA GeForce GTX 670 or AMD Radeon HD 7870 | NVIDIA GeForce GTX 760 or AMD Radeon R9 270X |
      | DirectX | Version 11 | Version 11 |
      | Storage | 40 GB available space | 40 GB available space |
      | Network | Broadband Internet connection | Broadband Internet connection |
      | Sound Card | DirectX compatible soundcard or onboard chipset | DirectX compatible soundcard or onboard chipset |
      -

      How to Download Messenger on Your PC or Mac

      -

      Do you want to stay connected with your friends and family on Facebook without using your phone or browser? If so, you might be interested in downloading Messenger on your PC or Mac. Messenger is a free all-in-one communication app that lets you send text, voice and video messages, make group calls, share files and photos, watch videos together, and more. In this article, we will show you how to download Messenger on your desktop device, how to use it, what are its benefits and drawbacks, and what are some alternatives you can try.

      -

      link download messenger


      DOWNLOAD ☆☆☆☆☆ https://urlca.com/2uOezh



      -

      Features of Messenger Desktop App

      -

      Messenger desktop app has many features that make it a great choice for staying in touch with your loved ones. Here are some of them:

      -
        -
      • Text, audio and video calls: You can send unlimited text messages, make high-quality voice and video calls, and even record and send voice and video messages.
      • -
      • Group chats: You can create group chats with up to 250 people, add group admins, change group names and photos, and use @mentions to get someone's attention.
      • -
      • Privacy settings: You can control who can contact you, block unwanted messages and calls, report abusive behavior, and manage your active status.
      • -
      • Custom reactions: You can express yourself with more than just a thumbs up. You can choose from a variety of emojis and stickers to react to messages.
      • -
      • Chat themes: You can customize your chat background with different colors, gradients, and images to suit your mood or personality.
      • -
      • Watch together: You can watch videos from Facebook Watch, IGTV, Reels, TV shows, movies, and more with your friends and family in real time.
      • -
      • Stickers, GIFs and emojis: You can spice up your conversations with thousands of stickers, GIFs and emojis from the Messenger library or create your own.
      • -
      • Files, photos and videos: You can share files, photos and videos of any size and format with your contacts. You can also use the built-in camera to take selfies or capture moments.
      • -
      • Plans and polls: You can create plans and polls to organize events, get opinions, or make decisions with your group.
      • -
      • Location sharing: You can share your live location with your friends and family for a specified period of time or request their location.
      • -
      • Money transfer: You can send and receive money securely and easily with Facebook Pay (available in select countries).
      • -
      • Business chat: You can connect with businesses to get customer support, make reservations, shop online, and more.
      • -
      -

      How to Download Messenger Desktop App from Messenger.com

      -

      If you want to download Messenger desktop app from the official website, here are the steps you need to follow:

      -
        -
      1. Go to Messenger.com/download.
      2. Click on Download for Windows or Download for Mac depending on your device.
      3. Open the installer file and follow the instructions.
      -

      You will need to have Windows 10 or macOS 10.10 or higher to run the app. The app will automatically update itself when a new version is available.
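      If you are not sure which OS version your computer is running, a quick check such as the following Python sketch (standard library only) can tell you before you download the app.

```python
import platform

# A minimal sketch: check whether the computer meets the stated OS requirement
# (Windows 10 or macOS 10.10 or higher) before downloading the desktop app.
system = platform.system()
if system == "Windows":
    print("Windows release:", platform.release())   # expect "10" or newer
elif system == "Darwin":
    print("macOS version:", platform.mac_ver()[0])   # expect "10.10" or newer
else:
    print(f"{system} is not supported by the desktop app")
```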

      -

      How to Download Messenger Desktop App from Microsoft Store or App Store

      -

      If you prefer to download Messenger desktop app from the Microsoft Store or the App Store, here are the steps you need to follow:

      -

      -
        -
      1. Go to Microsoft Store or App Store on your device.
      2. Search for Messenger in the search bar.
      3. Click on Get or Install and wait for the app to download.
      -

      You will need to have Windows 10 or macOS 10.12 or higher to run the app. The app will automatically update itself when a new version is available.

      -

      How to Use Messenger Desktop App

      -

      Once you have downloaded Messenger desktop app on your PC or Mac, you can start using it right away. Here are the steps you need to follow:

      -
        -
      1. Launch the app from your desktop.
      2. Log in with your Facebook account or create a new one if you don't have one already.
      3. Start chatting with your friends and family by clicking on their names or searching for them in the search bar.
      -

      You can also access other features of the app by clicking on the icons at the top or bottom of the screen. For example, you can click on the video camera icon to start a video call, the phone icon to start a voice call, the plus icon to create a group chat, the settings icon to change your preferences, and so on.

      -

      Benefits of Using Messenger Desktop App

      -

      Messenger desktop app has many benefits that make it a convenient and enjoyable way to communicate with your loved ones. Here are some of them:

      -
        -
      • Larger screen: You can enjoy a bigger and clearer view of your conversations, photos, videos, and other content on your desktop screen. You can also resize the app window according to your preference.
      • -
      • Keyboard shortcuts: You can use your keyboard to perform various actions on the app, such as sending messages, starting calls, switching chats, and more. You can find the list of keyboard shortcuts by clicking on the settings icon and then on Keyboard Shortcuts.
      • -
      • Notifications: You can get notified of new messages and calls on your desktop, even when the app is minimized or closed. You can also customize your notification settings by clicking on the settings icon and then on Notifications.
      • -
      • Synced messages: You can access all your messages and chats across your devices, whether you use Messenger on your phone, tablet, browser, or desktop. You can also sync your contacts and preferences across your devices.
      • -
      • Dark mode: You can switch to dark mode to reduce eye strain and save battery life. You can toggle dark mode on or off by clicking on the settings icon and then on Dark Mode.
      • -
      -

      Drawbacks of Using Messenger Desktop App

      -

      Messenger desktop app also has some drawbacks that you should be aware of before using it. Here are some of them:

      -
        -
      • Requires internet connection: You need to have a stable internet connection to use the app. If you lose connection or have a slow network, you might experience delays, glitches, or errors.
      • -
      • Limited features compared to mobile app: The desktop app does not have some features that are available on the mobile app, such as stories, camera effects, games, and discover tab. You also cannot make group video calls with more than 50 people on the desktop app.
      • -
      • Data usage: The app uses data to send and receive messages and calls. Depending on your data plan and usage, you might incur additional charges from your internet service provider or carrier.
      • -
      -

      Tips and Tricks for Using Messenger Desktop App

      -

      To make the most out of Messenger desktop app, here are some tips and tricks you can try:

      -
        -
      • Change chat settings: You can change various chat settings by clicking on the info icon at the top right corner of any chat. For example, you can change the chat name, photo, color, emoji, or theme. You can also mute notifications, ignore messages, or block contacts from there.
      • -
      • Mute notifications: If you want to silence all notifications from the app, you can click on the settings icon and then on Notifications. You can choose to mute notifications for a specific period of time or until you turn them back on.
      • -
      • Archive or delete conversations: If you want to clean up your chat list, you can archive or delete conversations by right-clicking on them. Archiving a conversation will hide it from your chat list until you search for it or receive a new message from it. Deleting a conversation will remove it from your chat list permanently.
      • -
      • Use keyboard shortcuts: As mentioned earlier, you can use keyboard shortcuts to perform various actions on the app faster and easier. You can find the list of keyboard shortcuts by clicking on the settings icon and then on Keyboard Shortcuts.
      • -
      -

      Alternatives to Messenger Desktop App

      -

      If you are looking for other options to communicate with your friends and family on your desktop device, here are some alternatives you can try:

      -
        -
      • WhatsApp Desktop: WhatsApp is another popular messaging app owned by Facebook that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download WhatsApp Desktop from WhatsApp.com/download.
      • -
      • Skype: Skype is a well-known video calling app that also lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Skype from Skype.com/en/get-skype.
      • -
      • Telegram Desktop: Telegram is a secure and fast messaging app that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Telegram Desktop from Desktop.Telegram.org.
      • -
      • Signal Desktop: Signal is a privacy-focused messaging app that lets you send text, voice and video messages, make group calls, share files and photos, and more. You can download Signal Desktop from Signal > Signal Desktop from Signal.org/download.
      • -
      -

      Conclusion

      -

      Messenger desktop app is a great way to communicate with your friends and family on your PC or Mac. It has many features that make it fun and easy to use, such as text, audio and video calls, group chats, custom reactions, chat themes, watch together, stickers, GIFs and emojis, files, photos and videos, plans and polls, location sharing, money transfer, and business chat. It also has some benefits over the mobile app, such as larger screen, keyboard shortcuts, notifications, synced messages, and dark mode. However, it also has some drawbacks, such as requiring internet connection, having limited features compared to the mobile app, and using data. You can download Messenger desktop app from Messenger.com, Microsoft Store, or App Store. You can also try some alternatives to Messenger desktop app, such as WhatsApp Desktop, Skype, Telegram Desktop, or Signal Desktop.

      -

      We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

      -

      FAQs

      -

      Here are some frequently asked questions about Messenger desktop app:

      -
        -
      1. What are the system requirements for Messenger Desktop App?

        The system requirements for Messenger desktop app are:

        -
          -
        • Windows 10 or macOS 10.10 or higher
        • -
        • At least 512 MB of RAM
        • -
        • At least 150 MB of free disk space
        • -
        • A stable internet connection
        • -
        -
      2. How can I update Messenger Desktop App?

        Messenger desktop app will automatically update itself when a new version is available. You can also check for updates manually by clicking on the settings icon and then on About Messenger. If there is an update available, you will see a button to download and install it. -

      3. How can I log out of Messenger Desktop App?

        To log out of Messenger desktop app, you can click on the settings icon and then on Log Out. You can also switch accounts by clicking on the settings icon and then on Switch Account. -

      4. How can I report a problem with Messenger Desktop App?

        To report a problem with Messenger desktop app, you can click on the settings icon and then on Report a Problem. You can describe the issue you are facing and attach screenshots if possible. You can also send feedback or suggestions by clicking on the settings icon and then on Send Feedback. -

      5. How can I delete Messenger Desktop App?

        To delete Messenger desktop app from your device, you can follow these steps:

        -
          -
        • For Windows: Go to Control Panel > Programs > Uninstall a Program. Find Messenger in the list and click on Uninstall.
        • -
        • For Mac: Go to Finder > Applications. Find Messenger in the list and drag it to the Trash.
        • -
        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md b/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md deleted file mode 100644 index 82fe33c4685bf5f685de690af2dc73978a2cd675..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Learn to Throw Knives Like a Pro with Knife Hit - Shooting Master APK.md +++ /dev/null @@ -1,124 +0,0 @@ -
      -

      Knife Hit - Shooting Master: A Fun and Addictive Game for Android

      -

      If you are looking for a simple yet exciting game to play on your Android device, you might want to try Knife Hit - Shooting Master. This is a game where you have to tap to throw knives and hit the target. The more you hit, the more points you can get. But be careful, don't hit the other knives or you will lose. Sounds easy, right? Well, not so fast. The target will rotate, move, and change shape, making it harder to hit. And there are also boss levels where you have to defeat a giant fruit or a monster with your knives. Are you ready to test your skills and reflexes in this game? Let's find out more about it.

      -

      What is Knife Hit - Shooting Master?

      -

      Knife Hit - Shooting Master is a game developed by BlueGame Studio, a small indie team based in Vietnam. The game was released in 2022 and has been downloaded over 10 million times on Google Play Store. The game is rated 4.3 out of 5 stars by more than 100 thousand users who have enjoyed its gameplay, graphics, and sound effects.

      -

      knife hit shooting master apk


      Download Zip ··· https://urlca.com/2uOfE7



      -

      How to play Knife Hit - Shooting Master

      -

      The gameplay of Knife Hit - Shooting Master is very simple and intuitive. You just have to tap the screen to throw a knife at the target. The target can be a wooden log, a fruit, a cake, a pizza, or even a dinosaur. You have to hit the target as many times as possible without hitting the other knives that are already stuck on it. If you hit another knife, you will lose one life and have to start over. You have three lives in total, so be careful.

      -

      As you progress through the game, the target will rotate faster, move around, or change shape, making it harder to hit. You will also encounter boss levels where you have to defeat a giant fruit or a monster with your knives. These levels are more challenging and require more accuracy and timing. You will also get bonus points for hitting the center of the target or for hitting multiple targets in a row.

      -

      Features of Knife Hit - Shooting Master

      -

      Knife Hit - Shooting Master is not just a simple tapping game. It also has many features that make it more fun and addictive. Here are some of them:

      -

      Different knives and targets

      -

      The game has over 100 different knives that you can collect and use in the game. Each knife has its own design, color, and shape. Some of them are realistic, like kitchen knives or daggers, while others are more creative, like pencils, scissors, or swords. You can unlock new knives by completing levels, earning coins, or watching ads.

      -

      The game also has over 50 different targets that you can hit with your knives. Each target has its own theme, like food, animals, or objects. Some of them are easy to hit, while others are tricky and require more skill. You can unlock new targets by completing levels or earning coins.

      -

      -

      Boss levels and challenges

      -

      The game has 10 boss levels where you have to defeat a giant fruit or a monster with your knives. These levels are more difficult than the regular ones and require more knives to complete. You have to hit the boss multiple times until its health bar is empty. But be careful, the boss will also attack you with its own weapons or abilities. For example, the pineapple boss will shoot spikes at you, while the dragon boss will breathe fire at you. You have to dodge these attacks and hit the boss as fast as possible.

      -

      The game also has daily challenges where you can earn coins and rewards by completing various tasks, such as hitting a certain number of targets, hitting the center of the target, or hitting multiple targets in a row. These challenges are updated every day and give you more reasons to play the game.

      -

      Rewards and achievements

      -

      The game has many rewards and achievements that you can earn by playing the game. You can get coins, gems, stars, and chests by hitting the target, completing levels, or watching ads. You can use these items to unlock new knives, targets, or skins for your game. You can also get trophies by completing achievements, such as hitting 1000 targets, defeating 10 bosses, or collecting 50 knives. These trophies will show your progress and skills in the game.

      -

      Leaderboards and rankings

      -

      The game has leaderboards and rankings where you can compare your score and performance with other players around the world. You can see your rank in different categories, such as total score, highest level, most coins, or most knives. You can also see the top players in each category and try to beat their scores. You can also share your score and achievements with your friends on social media platforms, such as Facebook, Twitter, or Instagram.

      -

      Graphics and sound effects

      -

      The game has colorful and cartoonish graphics that make it appealing and enjoyable to play. The game has a variety of themes and backgrounds for each target, such as forest, desert, ocean, or space. The game also has smooth animations and transitions that make it look realistic and dynamic. The game has catchy and upbeat sound effects that match the gameplay and mood of the game. The game has a cheerful and energetic music that plays in the background and changes according to the level and situation. The game also has voice-overs and sound effects that add more fun and humor to the game.

      -

      How to download and install Knife Hit - Shooting Master APK

      -

      If you want to play Knife Hit - Shooting Master on your Android device, you have two options to download and install it. You can either download it from Google Play Store or from APKCombo or ApkOnline websites. Here are the steps for each option:

      -

      Download from Google Play Store

      -

      This is the easiest and safest way to download and install Knife Hit - Shooting Master on your device. You just have to follow these steps:

      -
        -
      1. Open Google Play Store on your device.
      2. Search for Knife Hit - Shooting Master in the search bar.
      3. Select the game from the list of results.
      4. Tap on Install button to download and install the game.
      5. Wait for the installation to finish.
      6. Tap on Open button to launch the game.
      -

      Download from APKCombo or ApkOnline

      -

      This is another way to download and install Knife Hit - Shooting Master on your device. You can use this option if you don't have access to Google Play Store or if you want to get the latest version of the game. You just have to follow these steps:

      -
        -
      1. Open your browser on your device.
      2. Go to APKCombo or ApkOnline website.
      3. Search for Knife Hit - Shooting Master in the search bar.
      4. Select the game from the list of results.
      5. Tap on Download APK button to download the APK file of the game.
      6. Wait for the download to finish.
      -

      Install the APK file on your device

      -

      After downloading the APK file of Knife Hit - Shooting Master from APKCombo or ApkOnline website, you have to install it on your device. You just have to follow these steps:

      -
        -
      1. Go to Settings on your device.
      2. Go to Security or Privacy section.
      3. Enable Unknown Sources option to allow installation of apps from sources other than Google Play Store.
      4. Go to Downloads or File Manager on your device.
      5. Find and tap on the APK file of Knife Hit - Shooting Master that you downloaded earlier.
      6. Tap on Install button to install the game.
      7. Wait for the installation to finish.
      8. Tap on Open button to launch the game. (A quick way to confirm the install from a computer is sketched just after this list.)
      -
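      For readers comfortable with the command line, the sketch below uses adb from a computer to confirm that the game actually installed. It assumes the Android platform tools are installed and USB debugging is enabled; the package-name search string is only a guess, so adjust it to whatever the installed app reports.

```python
import subprocess

# A minimal sketch: list installed packages over adb and look for the game.
result = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
)
matches = [line for line in result.stdout.splitlines() if "knife" in line.lower()]
print(matches or "No matching package found")
```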

      Conclusion

      -

      Knife Hit - Shooting Master is a fun and addictive game for Android devices that will test your skills and reflexes in throwing knives at various targets. The game has many features that make it more enjoyable and challenging, such as different knives and targets, boss levels and challenges, rewards and achievements, leaderboards and rankings, graphics and sound effects. The game is easy to play and hard to master. You can download and install it from Google Play Store or from APKCombo or ApkOnline websites. If you are looking for a game that will keep you entertained and challenged, you should give Knife Hit - Shooting Master a try.

      -

      FAQs

      -

      Here are some frequently asked questions about Knife Hit - Shooting Master:

      -
        -
      • Q: How many levels are there in Knife Hit - Shooting Master?
      • -
      • A: There are 100 levels in Knife Hit - Shooting Master, plus 10 boss levels. You can replay any level you have completed to improve your score and earn more coins.
      • -
      • Q: How can I get more coins and gems in Knife Hit - Shooting Master?
      • -
      • A: You can get more coins and gems by hitting the target, completing levels, watching ads, or opening chests. You can also buy coins and gems with real money if you want to support the developers.
      • -
      • Q: How can I change the skin of my game in Knife Hit - Shooting Master?
      • -
      • A: You can change the skin of your game by tapping on the settings icon on the top right corner of the screen. You can choose from different themes, such as dark, light, neon, or rainbow. You can also unlock new skins by earning stars or buying them with gems.
      • -
      • Q: What are the benefits of logging in with Facebook in Knife Hit - Shooting Master?
      • -
      • A: Logging in with Facebook will allow you to save your progress and sync it across different devices. You will also be able to see your friends' scores and challenge them to beat your score. You will also get 100 gems as a bonus for logging in with Facebook.
      • -
      • Q: Is Knife Hit - Shooting Master safe to play for children?
      • -
      • A: Knife Hit - Shooting Master is a game that is suitable for all ages. The game does not contain any violence, blood, or gore. The game is also free to play and does not require any personal information or permissions. However, the game does contain ads and in-app purchases that may require parental supervision.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md b/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md deleted file mode 100644 index 8520a8ea318b26f626d00ddd101edd79de1f9e97..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Solitario Premium APK El clsico juego de cartas con ms opciones y diversin.md +++ /dev/null @@ -1,115 +0,0 @@ -
      -

      Download Solitario Premium APK: How to Enjoy the Classic Card Game on Your Android Device

      -

      Do you like solitaire? Do you want to play the classic card game on your Android phone or tablet? Do you want access to exclusive features and options you won't find in other versions? Then you'll want to download Solitario Premium APK, an app that lets you play solitaire for free, without ads, with high-quality graphics, relaxing music, daily challenges, custom themes, and much more. In this article, we cover everything you need to know about Solitario Premium APK, from the origin and history of solitaire and its mental-health benefits to the app's features, requirements, installation steps, and tips for winning. Read on and get ready to enjoy the best solitaire on your Android device!

      -

      What is solitaire and why is it so popular?

      -

      Solitaire, also known as patience or cabale, is a family of card games that can be played by a single person. The most common solitaire is Klondike, which consists of building four piles of cards, one per suit, in ascending order (from ace to king). These piles are called foundations. To do this, you move cards between seven columns, or tableau piles, formed when the cards are dealt face down. Only the face-up card in each column can be moved, and it must be placed on another card of a different color and a value one lower. For example, a five of spades or a five of clubs can be placed on a six of hearts. If no more moves are possible, you can draw a card from the stock, a reserve pile kept to one side. The game ends when the four foundations are complete or when no more moves are possible.

      -

      descargar solitario premium apk


      DOWNLOAD ★★★★★ https://urlca.com/2uOd7y



      -

      Solitaire is popular for several reasons. First, it is very easy to learn: all you need is a deck of cards and a few simple rules. Second, it is entertaining and challenging, since it takes skill, strategy, and patience to solve. Third, it is relaxing and therapeutic, helping to calm the mind, reach a meditative state, and improve memory and concentration. On top of that, solitaire has a long and fascinating history that makes it even more interesting.

      -

      The origin and history of solitaire

      -

      Solitaire has no definitive date of invention, but records of it can be traced to the late 18th century in northern Europe and Scandinavia. The term "Patiencespiel" first appeared in a German book published in 1788, and there are also references to solitaire in French literature. Solitaire is believed to have originated as entertainment for the nobility and royalty, and it became popular in the 19th century with the appearance of the first solitaire books. Famous solitaire enthusiasts include Napoleon Bonaparte, Winston Churchill, Franklin D. Roosevelt, and Marcel Proust.

      -

      The mental-health benefits of playing solitaire

      -

      Playing solitaire is not only fun but also good for your mental health. Some of the benefits include:

      -
        -
      • It reduces stress and anxiety. Solitaire demands concentration and attention, which helps take your mind off problems and worries. It also has a calming, relaxing effect, accompanied by soft music and pleasant graphics.
      • It improves memory and mental agility. Solitaire involves remembering cards and positions, which stimulates short- and long-term memory, and it requires thinking ahead and planning moves, which sharpens reasoning and problem-solving.
      • It boosts self-esteem and confidence. Solitaire offers a personal challenge and the satisfaction of completing it. Solving a game brings a sense of accomplishment and pride, which builds self-esteem and self-confidence.
      • It encourages patience and perseverance. Solitaire cannot always be solved on the first try; sometimes it takes several attempts to find the solution. This teaches patience and perseverance, two important virtues in life.
      -

      What is Solitario Premium APK and what advantages does it have?

      -

      Solitario Premium APK is an app for Android devices that lets you play solitaire for free, without ads, with high-quality graphics, relaxing music, daily challenges, custom themes, and much more. It is an improved version of classic solitaire that offers a unique gaming experience. Some of its advantages are described below.

      -

      Features and functions of Solitario Premium APK

      -

      Solitario Premium APK has a number of features and functions that set it apart from other versions of solitaire. Here are some of them:

      -
        -
      • High-quality graphics, with realistic, detailed visual effects. The cards have an elegant, classic design with several styles to choose from, and the game background can also be changed to taste.
      • Relaxing music, with soft, calm melodies that accompany the game. The music can be adjusted or muted as you prefer.
      • Daily challenges, with varied difficulty levels that test your skills. A new challenge is available every day, with special rewards for completing it.
      • Custom themes, with different colors and backgrounds for each season or holiday. You can pick the theme you like best or change it to match your mood.
      • Alternative game modes, such as timed mode, Vegas mode, and expert mode. These modes add variety and fun, with different rules and scoring.
      • Statistics and achievements, with data on your performance. You can check your records, wins, losses, average time, success rate, and more, and see which achievements you have earned and which are still missing.
      • Customization options, with settings to adapt the game to your preferences. You can change the card size, the movement type, the sound, notifications, the language, and more.
      • Support and updates, with a development team that handles any problems or questions. Solitario Premium APK is also updated regularly with new features and improvements.
      -

      Requirements and steps to download and install Solitario Premium APK

      -

      Solitario Premium APK can be downloaded and installed easily on any Android device. These are the requirements and the steps to follow:

      -
        -
      1. The main requirement is an Android device running version 4.4 (KitKat) or higher. You also need an internet connection and enough storage space.
      2. The first step is to download the Solitario Premium APK file from a safe and trustworthy website. You can use the following link: [Descargar solitario premium apk].
      3. The second step is to enable the "Unknown sources" option on the Android device. This option allows installing apps that do not come from the official Google Play store. To do this, go to Settings > Security > Unknown sources and turn it on.
      4. The third step is to locate the downloaded APK file on the Android device. It is usually in the Downloads or Files folder. Once you find it, tap on it to start the installation.
      5. The fourth step is to follow the on-screen instructions to complete the installation. You must accept the app's permissions and terms of use.
      6. The fifth step is to open the app and enjoy Solitario Premium APK on your Android device.
      -

      How do you play Solitario Premium APK, and what tips help you win?

      -

      Solitario Premium APK is very easy to play but hard to win. That is why it is important to know the objective, the rules, and the strategies of the game. Here are some tips to help you improve your solitaire game:

      -

      -

      The objective and rules of solitaire

      -

      The objective of solitaire is to build four piles of cards, one per suit, in ascending order (from ace to king). These piles are called the foundations. To achieve this, you move cards between the seven columns, or tableau piles, that are formed when the cards are dealt face down. Only the face-up card in each column can be moved, and it must be placed on a card of a different color and the next higher value. For example, a five of spades or a five of clubs can be placed on a six of hearts. If no more moves are possible, you can draw a card from the stock, the separate reserve pile. The game ends when the four foundations are complete or when no more moves are possible.
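As a concrete illustration of the tableau rule just described, here is a minimal Python sketch (not code from the app) that checks whether one card may be placed on another in a column: the colors must differ and the moved card must be exactly one rank lower than the card it lands on.

```python
RED_SUITS = {"hearts", "diamonds"}
RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]

def is_red(suit):
    # Hearts and diamonds are red; clubs and spades are black.
    return suit in RED_SUITS

def can_stack(moving_rank, moving_suit, target_rank, target_suit):
    """True if the moving card may be placed on the target card in a tableau column."""
    different_color = is_red(moving_suit) != is_red(target_suit)
    one_lower = RANKS.index(moving_rank) + 1 == RANKS.index(target_rank)
    return different_color and one_lower

# The example from the text: a five of spades may be placed on a six of hearts.
print(can_stack("5", "spades", "6", "hearts"))  # True
```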

      -

      Strategies and tricks to improve your solitaire game

      -

      Although solitaire is a game that depends heavily on luck, there are also some strategies and tricks that can increase your chances of winning. Here are some of them:

      -
        -
      • Play cards from the stock or reserve pile first. This gives you more options and more ways to move the cards in the columns.
      • -
      • Do not fill empty spaces with kings. This limits the possible moves and blocks the columns. It is better to wait for a low card or an ace before filling an empty space.
      • -
      • Do not move cards to the foundations too soon. This can prevent you from moving other cards that lie underneath or that are needed to build sequences. It is better to wait until a good number of cards are ordered in the columns before moving them to the foundations.
      • -
      • Keep track of the suits and values of the cards. This helps you plan moves ahead and avoid mistakes or dead ends. It is useful to know which cards have not yet appeared and which cards can or cannot be moved.
      • -
      • Use the undo button when necessary. This lets you correct moves that were made by mistake or that did not work out. Solitario premium apk has an unlimited undo button, which makes the game easier.
      • -
      -

      Conclusion

      -

      Solitaire is a classic, popular and fun card game that can be played on any Android device thanks to solitario premium apk. This application offers an improved version of solitaire, with high-quality graphics, relaxing music, daily challenges, custom themes and many more features and options. Solitaire also has benefits for mental health, such as reducing stress, improving memory, boosting self-esteem and encouraging patience. To play solitario premium apk, you only need to download and install the apk file from a safe and trusted website and follow the rules and strategies of the game. If you like solitaire, do not hesitate to download solitario premium apk and enjoy the best card game on your Android device.

      -

      Frequently asked questions

      -

      Below are answers to some of the most frequently asked questions about solitario premium apk:

      -

      Is it safe to download and install solitario premium apk?

      -

      Yes, it is safe as long as the apk file is downloaded and installed from a safe and trusted website. Solitario premium apk does not contain viruses or malware that could damage the Android device or compromise the user's privacy.

      -

      Is it legal to download and install solitario premium apk?

      -

      Yes, it is legal as long as the copyright and the terms of use of the application are respected. Solitario premium apk is a free application that does not infringe any current law or regulation.

      -

      What is the difference between solitario premium apk and classic solitaire?

      -

      The main difference is that solitario premium apk offers an improved version of classic solitaire, with high-quality graphics, relaxing music, daily challenges, custom themes and many more features and options. Classic solitaire is a simpler, more basic version of the card game.

      -

      What other card games can be played with solitario premium apk?

      -

      Solitario premium apk includes other card games that can be played with the same 52-card deck. Some of these games are: Spider Solitaire, FreeCell Solitaire, Pyramid Solitaire, TriPeaks Solitaire and Golf Solitaire.

      -

      How can the solitario premium apk development team be contacted?

      -

      The solitario premium apk development team can be reached by email at [email protected] or through the social networks Facebook, Twitter and Instagram. The team is available to resolve any problem or question the user may have about the application.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md b/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md deleted file mode 100644 index 298dbbd77e93d1410b4658eaa2ef8ba6b5275621..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Test Your Skills and Patience on Truck Driver Crazy Road.md +++ /dev/null @@ -1,122 +0,0 @@ - -

      Truck Driver Crazy Road APKPure: A Challenging and Fun Driving Game

      -

      Do you love driving trucks and trailers on rough and bumpy roads? Do you want to experience the thrill and excitement of transporting cargo across different locations? If so, you should try Truck Driver Crazy Road APKPure, a realistic and fun truck driving game that will push you to your limits. In this article, we will tell you everything you need to know about this game: what it is, what its features are, how to play it, and why you should download it.

      -

      truck driver crazy road apkpure


      DOWNLOADhttps://urlca.com/2uOfCE



      -

      What is Truck Driver Crazy Road APKPure?

      -

      A realistic and thrilling truck driving game

      -

      Truck Driver Crazy Road APKPure is a truck driving game that will test your balancing skills as well as your patience. You will have to drive uphill on roads strewn with rocks and debris. You will also have to face different weather conditions, such as rain, snow, fog, and night driving. You will have to deliver your cargo safely and on time, without losing or damaging it, while dealing with traffic, narrow bridges, sharp turns, steep slopes, and other obstacles along the way.

      -

      A free and easy-to-download app from APKPure

      -

      Truck Driver Crazy Road APKPure is a free app that you can download from APKPure, a website that offers safe and fast downloads of Android apps and games. You don't need to register or sign up to download this app. You just need to click on the download button and install it on your device. The app has a size of about 100 MB and requires Android 4.1 or higher to run. The app is updated regularly with bug fixes and improvements.

      -

      What are the features of Truck Driver Crazy Road APKPure?

      -

      Four different game modes

      -

      Truck Driver Crazy Road APKPure has four different game modes that you can choose from, depending on your preference and mood. They are:

      -

      truck driver crazy road game online
      -truck driver crazy road 2 download
      -truck driver crazy road mod apk
      -truck driver crazy road y8
      -truck driver crazy road gameplay
      -truck driver crazy road 3d
      -truck driver crazy road android
      -truck driver crazy road free play
      -truck driver crazy road cheats
      -truck driver crazy road hack
      -truck driver crazy road simulator
      -truck driver crazy road review
      -truck driver crazy road tips
      -truck driver crazy road unblocked
      -truck driver crazy road pc
      -truck driver crazy road app
      -truck driver crazy road best score
      -truck driver crazy road levels
      -truck driver crazy road trailer
      -truck driver crazy road walkthrough
      -truck driver crazy road challenges
      -truck driver crazy road update
      -truck driver crazy road offline
      -truck driver crazy road install
      -truck driver crazy road guide
      -truck driver crazy road controls
      -truck driver crazy road missions
      -truck driver crazy road apk mod
      -truck driver crazy road fun games
      -truck driver crazy road new version
      -truck driver crazy road full screen
      -truck driver crazy road apk download
      -truck driver crazy road 2 game online
      -truck driver crazy road 2 mod apk
      -truck driver crazy road 2 y8
      -truck driver crazy road 2 gameplay
      -truck driver crazy road 2 android
      -truck driver crazy road 2 free play
      -truck driver crazy road 2 cheats
      -truck driver crazy road 2 hack
      -truck driver crazy road 2 simulator
      -truck driver crazy road 2 review
      -truck driver crazy road 2 tips
      -truck driver crazy road 2 unblocked
      -truck driver crazy road 2 pc
      -truck driver crazy road 2 app

      -
        -
      • Delivery mode: In this mode, you have to deliver your cargo from one point to another within a given time limit. You have to be careful not to lose or damage your cargo on the way.
      • -
      • Parking mode: In this mode, you have to park your truck and trailer in a designated spot without hitting anything. You have to be precise and accurate in your movements.
      • -
      • Garage mode: In this mode, you can customize your truck and trailer with different colors, wheels, lights, horns, and stickers. You can also upgrade your engine, brakes, suspension, tires, and fuel tank.
      • -
      • Free mode: In this mode, you can drive freely on any map without any time limit or task. You can explore the environment and enjoy the scenery.
      • -
      -

      Various trucks and trailers to choose from

      -

      Truck Driver Crazy Road APKPure has a variety of trucks and trailers that you can choose from, each with its own characteristics and performance. You can unlock more trucks and trailers by completing tasks and earning coins. Some of the trucks and trailers that you can drive are:

      -
      Truck | Trailer
      Red truck | Wooden trailer
      Blue truck | Metal trailer
      Green truck | Oil tanker
      Yellow truck | Cement mixer
      Black truck | Container trailer
      -

      Stunning graphics and sound effects

      -

      Truck Driver Crazy Road APKPure has stunning graphics and sound effects that will make you feel like you are driving a real truck. The game has realistic 3D models of trucks and trailers, as well as detailed environments and landscapes. You can see the mountains, forests, rivers, bridges, buildings, and roads on your way. You can also hear the engine sound, the horn, the brakes, the tires, and the cargo noise. The game also has dynamic lighting and shadows, as well as weather effects such as rain, snow, fog, and night.

      -

      Realistic physics and weather conditions

      -

      Truck Driver Crazy Road APKPure has realistic physics and weather conditions that will affect your driving experience. The game has a realistic simulation of gravity, inertia, friction, and collision. You will have to balance your truck and trailer on the uneven and slippery roads. You will also have to adjust your speed and direction according to the wind, rain, snow, fog, and night. You will have to be careful not to tip over or crash your truck and trailer.

      -

      How to play Truck Driver Crazy Road APKPure?

      -

      Use the on-screen controls to steer, accelerate, brake, and sound the horn

      -

      To play Truck Driver Crazy Road APKPure, you use the on-screen controls to steer, accelerate, brake, and sound the horn. You can choose between two control schemes: tilt or buttons. You can also adjust the sensitivity and position of the controls in the settings menu. The controls are easy to use and responsive.

      -

      Follow the arrow to reach your destination

      -

      To complete your task in Truck Driver Crazy Road APKPure, you have to follow the arrow that shows you the direction to your destination. You have to drive carefully and avoid getting lost or stuck on the way. You have to reach your destination within the time limit and without losing or damaging your cargo.
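The guidance arrow described above is essentially a bearing from the truck's current position to the destination. The game's internals are not published, so the following is only a generic 2D heading calculation in Python that illustrates the idea; the function name and coordinates are invented for this example.

```python
import math

def heading_to_destination(truck_x, truck_y, dest_x, dest_y):
    """Angle in degrees from the truck to the destination (0 = east, counter-clockwise)."""
    return math.degrees(math.atan2(dest_y - truck_y, dest_x - truck_x))

# Example: a destination to the north-east of the truck gives a 45 degree heading.
print(heading_to_destination(0.0, 0.0, 100.0, 100.0))  # 45.0
```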

      -

      Avoid obstacles and collisions on the road

      -

      To drive safely in Truck Driver Crazy Road APKPure, you have to avoid obstacles and collisions on the road. You have to watch out for other vehicles, pedestrians, animals, rocks, trees, poles, signs, barriers, and other objects that can block or damage your truck and trailer. You have to keep a safe distance from them and use your horn to warn them. You also have to obey the traffic rules and signals.

      -

      Complete the tasks and earn coins

      -

      To progress in Truck Driver Crazy Road APKPure, you have to complete the tasks and earn coins. You have to deliver your cargo from one point to another or park your truck and trailer in a designated spot. You have to do it within the time limit and without losing or damaging your cargo. You will earn coins based on your performance and speed. You can use the coins to unlock more trucks and trailers or customize them in the garage mode.

      -

      Why should you download Truck Driver Crazy Road APKPure?

      -

      Test your driving skills and patience

      -

      If you want to test your driving skills and patience, you should download Truck Driver Crazy Road APKPure. This game will challenge you with its realistic and difficult driving scenarios. You will have to master the art of balancing your truck and trailer on the rough and bumpy roads. You will also have to cope with the changing weather conditions and traffic situations. You will have to be careful not to lose or damage your cargo on the way.

      -

      Enjoy the scenic views and challenging terrains

      -

      If you want to enjoy the scenic views and challenging terrains, you should download Truck Driver Crazy Road APKPure. This game will take you to different locations with beautiful landscapes and environments. You will see the mountains, forests, rivers, bridges, buildings, and roads on your way. You will also face different terrains such as hills, valleys, plains, deserts, snowfields, swamps, and more.

      -

      Have fun and relax with this addictive game

      -

      If you want to have fun and relax with this addictive game, you should download Truck Driver Crazy Road APKPure. This game will keep you entertained for hours with its four different game modes and various trucks and trailers. You can play this game anytime and anywhere without any internet connection. You can also share your scores and achievements with your friends and family on social media. You can also rate and review this game on APKPure and give your feedback to the developers.

      -

      Conclusion

      -

      Truck Driver Crazy Road APKPure is a challenging and fun driving game that will make you feel like a real truck driver. You will have to drive through different locations and weather conditions, deliver your cargo safely and on time, avoid obstacles and collisions, and customize your truck and trailer. You will also enjoy the stunning graphics and sound effects, the realistic physics and simulation, and the four different game modes. You can download this game for free from APKPure and have fun and relax with this addictive game.

      -

      FAQs

      -

      What are the minimum requirements to play Truck Driver Crazy Road APKPure?

      -

      To play Truck Driver Crazy Road APKPure, you need an Android device with version 4.1 or higher, a storage space of about 100 MB, and an internet connection to download the app.

      -

      How can I change the language of Truck Driver Crazy Road APKPure?

      -

      To change the language of Truck Driver Crazy Road APKPure, you can go to the settings menu and select the language option. You can choose from English, Russian, Turkish, German, Spanish, French, Italian, Portuguese, Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, and Vietnamese.

      -

      How can I contact the developers of Truck Driver Crazy Road APKPure?

      -

      To contact the developers of Truck Driver Crazy Road APKPure, you can visit their website at http://games89.com/ or their Facebook page at https://www.facebook.com/Games89com-100900695181173/. You can also email them at games89com@gmail.com.

      -

      What are some tips and tricks to play Truck Driver Crazy Road APKPure?

      -

      Some tips and tricks to play Truck Driver Crazy Road APKPure are:

      -
        -
      • Use the brake wisely to avoid skidding or sliding on the slippery roads.
      • -
      • Use the horn to warn other vehicles or pedestrians on your way.
      • -
      • Use the camera button to change the view angle and see your surroundings better.
      • -
      • Use the map button to see your location and destination.
      • -
      • Use the pause button to pause or resume the game.
      • -
      -

      What are some similar games to Truck Driver Crazy Road APKPure?

      -

      Some similar games to Truck Driver Crazy Road APKPure are:

      -
        -
      • Truck Simulator 2018: Europe by Zuuks Games
      • -
      • Truck Simulator USA by Ovidiu Pop
      • -
      • Euro Truck Driver 2018 by Ovidiu Pop
      • -
      • Offroad Cargo Transport Simulator by Game Pickle
      • -
      • Cargo Transport Simulator by SkisoSoft
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md b/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md deleted file mode 100644 index 776e7bd8db791466de56de0732673298b2d01b7f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Clean Master License Key ((LINK)).md +++ /dev/null @@ -1,52 +0,0 @@ -

      Clean master license key


      Downloadhttps://ssurll.com/2uzw6W



      -
      It can maintain and free up your RAM and ease the load on your CPU. To use this cleaner, just set up its license key and you can begin using it immediately.
      -
      Clean Master License Key is a program that lets you examine your PC's health. Below we look at the different features of Clean Master License Key.
      -
      Thanks for visiting our site. Here you will find the most recent version of Clean Master License Key.
      -
      Clean Master License Key 2020 Crack
      -
      Clean Master License Key is a utility that lets you view your PC and RAM. It is a complete tool with numerous features. Clean Master License Key has an algorithm that identifies and removes abnormal files, and you can also clean your programs, RAM and files. To do this, launch Clean Master License Key and run the functions you want. The program has an uncomplicated interface, so users can control it on their own. It lets you select the files and groups you want to delete, and you can remove anything that is likely to cause a problem for your PC's health.
      -
      Clean Master License Key can also locate the junk files occupying your PC's internal storage, which makes cleaning your PC rather easy. You can restore your computer to its best state.
      -
      Clean Master License Key Features:
      -
      Clean Master License Key contains various tools to clean the things that can spoil the health of your PC (a generic sketch of this kind of junk-file scan appears after this article). Some of these tools are:
      -
      Uninstaller: a tool that allows you to easily uninstall useless programs.
      -
      Optimizer: a program that can improve the performance of your computer.
      -
      System Cleaner: identifies and removes junk files.
      -
      Real-time Scanner: looks after the PC by scanning its performance in real time.
      -
      System Guard: protects your computer from all kinds of threats.
      -
      System Resurrector: fixes many problems on your PC.
      -
      System Memory Booster: lets you boost the performance of your computer.
      -
      What's New In The Latest Version?
      -
      The latest version of Clean Master License Key enables you to quickly and efficiently clean your computer. The Clean Master License Key 4fefd39f24
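The junk-file scan mentioned above can be pictured with a short, generic Python sketch. This is not Clean Master's code; it simply walks a directory tree and reports files whose extensions mark them as temporary, which is an assumption made only for the illustration.

```python
import os

# Extensions treated as junk in this illustration only (an assumption, not Clean Master's rules).
JUNK_EXTENSIONS = {".tmp", ".log", ".cache"}

def scan_junk(root):
    """Walk a directory tree and collect files whose extension marks them as junk."""
    junk_files = []
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in JUNK_EXTENSIONS:
                path = os.path.join(dirpath, name)
                try:
                    total_bytes += os.path.getsize(path)
                    junk_files.append(path)
                except OSError:
                    pass  # unreadable or vanished file; skip it
    return junk_files, total_bytes

if __name__ == "__main__":
    files, size = scan_junk(os.path.expanduser("~"))
    print(f"Found {len(files)} junk files, {size / 1024 / 1024:.1f} MB reclaimable")
```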
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md b/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md deleted file mode 100644 index 6adbbb1e110047c28c5ff570db7e4bd948807912..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Da Vinci Code Ebook Free Download Epub The Best Way to Enjoy the Thrilling Mystery Novel.md +++ /dev/null @@ -1,7 +0,0 @@ - -

      Project Gutenberg eBooks may be freely used in the United States because most are not protected by U.S. copyright law. They may not be free of copyright in other countries. Readers outside of the United States must check the copyright terms of their countries before accessing, downloading or redistributing eBooks. We also have a number of copyrighted titles, for which the copyright holder has given permission for unlimited non-commercial worldwide use.

      -

      da vinci code ebook free download epub


      Download ✒ ✒ ✒ https://ssurll.com/2uzz1X



      -

      This is where you can pick up your free download of Crafting Unforgettable Characters, as well as the bonus books (the Complete Outline Transcript of Storming and 5 Secrets of Story Structure) and my free Scrivener template.

      -

      Visit the Overdrive website or our online catalog for Overdrive ebooks available for Kindle devices. Libby will offer you the option to read in the Libby app or, if you prefer, to have the book downloaded to your Kindle device.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md b/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md deleted file mode 100644 index 07b80e4aa45ac3db70f6911b98cdd41e54e121e6..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download HD Movie of Terror Strike A Gripping Story of Courage and Survival.md +++ /dev/null @@ -1,7 +0,0 @@ - -

      The SIT formed to probe the Dinanagar attack believes that the strike was carried out by the Lashkar-e-Toiba, while the attack on the Pathankot Air Force Station was the handiwork of Jaish-e-Mohammad. Both are Pakistan-based terror outfits, and both had used the Mastgarh village route to enter India.

      -

      Terror Strike download HD movie


      DOWNLOADhttps://ssurll.com/2uzxWT



      -

      A counterpart of the principle of least action in Nature is that attackers in human conflict follow the path of least resistance. Thus, Sun Tzu notes: "Now an army may be likened to water, for just as water avoids heights and hastens to the lowlands, so an army avoids strength and strikes weakness." For attacks by terrorists, cyber hackers or warring states, quantitative risk modelling is unified by the principles of adversarial conflict, such as those laid out by Sun Tzu. The well-defined principles underlying quantitative terrorism risk modelling minimize the need to resort to expert judgement (Woo 2011, 2015). Within the bounds defined by the Western counter-terrorism environment, terrorists maximize their operational utility by abiding by the classic principles of terrorist modus operandi: substituting hardened targets, following the path of least resistance in weapon selection, and leveraging their scarce resources to achieve the greatest impact. The metric for impact includes not just the loss inflicted but also the media attention gained. An insightful ISIS slogan is that "media is half Jihad". Media coverage is essential for terrorist recruitment and funding, as well as for propaganda. This is so important that in 2002, Osama bin Laden wrote that "the media war may reach 90% of the preparation for battles" (Awan 2016).

      -

      CM Terrorism Crisis Protocols are now not only necessary for airlines, government buildings and mass transit organizations, but for businesses as a whole, from college campuses to nightclubs to movie theaters. The unfortunate and tragic reality is this: Terrorism and mass calamity can strike anywhere at any time.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/pidinet/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/pidinet/__init__.py deleted file mode 100644 index 4f3e3d9038c068f69c56a1fbb6e51b6b11faa0fd..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/pidinet/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -# Pidinet -# https://github.com/hellozhuo/pidinet - -import os -import torch -import numpy as np -from einops import rearrange -from annotator.pidinet.model import pidinet -from annotator.util import annotator_ckpts_path, safe_step - - -class PidiNetDetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/table5_pidinet.pth" - modelpath = os.path.join(annotator_ckpts_path, "table5_pidinet.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - self.netNetwork = pidinet() -# self.netNetwork.load_state_dict({k.replace('module.', ''): v for k, v in torch.load(modelpath)['state_dict'].items()}) - self.netNetwork.load_state_dict({k.replace('module.', ''): v for k, v in torch.load(modelpath, map_location=torch.device('cpu'))['state_dict'].items()}) -# self.netNetwork = self.netNetwork.cuda() - self.netNetwork = self.netNetwork.cpu() - self.netNetwork.eval() - - def __call__(self, input_image, safe=False): - assert input_image.ndim == 3 - input_image = input_image[:, :, ::-1].copy() - with torch.no_grad(): -# image_pidi = torch.from_numpy(input_image).float().cuda() - image_pidi = torch.from_numpy(input_image).float().cpu() - image_pidi = image_pidi / 255.0 - image_pidi = rearrange(image_pidi, 'h w c -> 1 c h w') - edge = self.netNetwork(image_pidi)[-1] - edge = edge.cpu().numpy() - if safe: - edge = safe_step(edge) - edge = (edge * 255.0).clip(0, 255).astype(np.uint8) - return edge[0][0] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/layers/patch_transformer.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/layers/patch_transformer.py deleted file mode 100644 index 99d9e51a06b981bae45ce7dd64eaef19a4121991..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/layers/patch_transformer.py +++ /dev/null @@ -1,91 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -# File author: Shariq Farooq Bhat - -import torch -import torch.nn as nn - - -class PatchTransformerEncoder(nn.Module): - def __init__(self, in_channels, patch_size=10, embedding_dim=128, num_heads=4, use_class_token=False): - """ViT-like transformer block - - Args: - in_channels (int): Input channels - patch_size (int, optional): patch size. Defaults to 10. - embedding_dim (int, optional): Embedding dimension in transformer model. Defaults to 128. - num_heads (int, optional): number of attention heads. Defaults to 4. - use_class_token (bool, optional): Whether to use extra token at the start for global accumulation (called as "class token"). Defaults to False. - """ - super(PatchTransformerEncoder, self).__init__() - self.use_class_token = use_class_token - encoder_layers = nn.TransformerEncoderLayer( - embedding_dim, num_heads, dim_feedforward=1024) - self.transformer_encoder = nn.TransformerEncoder( - encoder_layers, num_layers=4) # takes shape S,N,E - - self.embedding_convPxP = nn.Conv2d(in_channels, embedding_dim, - kernel_size=patch_size, stride=patch_size, padding=0) - - def positional_encoding_1d(self, sequence_length, batch_size, embedding_dim, device='cpu'): - """Generate positional encodings - - Args: - sequence_length (int): Sequence length - embedding_dim (int): Embedding dimension - - Returns: - torch.Tensor SBE: Positional encodings - """ - position = torch.arange( - 0, sequence_length, dtype=torch.float32, device=device).unsqueeze(1) - index = torch.arange( - 0, embedding_dim, 2, dtype=torch.float32, device=device).unsqueeze(0) - div_term = torch.exp(index * (-torch.log(torch.tensor(10000.0, device=device)) / embedding_dim)) - pos_encoding = position * div_term - pos_encoding = torch.cat([torch.sin(pos_encoding), torch.cos(pos_encoding)], dim=1) - pos_encoding = pos_encoding.unsqueeze(1).repeat(1, batch_size, 1) - return pos_encoding - - - def forward(self, x): - """Forward pass - - Args: - x (torch.Tensor - NCHW): Input feature tensor - - Returns: - torch.Tensor - SNE: Transformer output embeddings. S - sequence length (=HW/patch_size^2), N - batch size, E - embedding dim - """ - embeddings = self.embedding_convPxP(x).flatten( - 2) # .shape = n,c,s = n, embedding_dim, s - if self.use_class_token: - # extra special token at start ? 
- embeddings = nn.functional.pad(embeddings, (1, 0)) - - # change to S,N,E format required by transformer - embeddings = embeddings.permute(2, 0, 1) - S, N, E = embeddings.shape - embeddings = embeddings + self.positional_encoding_1d(S, N, E, device=embeddings.device) - x = self.transformer_encoder(embeddings) # .shape = S, N, E - return x diff --git a/spaces/crashedice/signify/nbs/styles.css b/spaces/crashedice/signify/nbs/styles.css deleted file mode 100644 index 66ccc49ee8f0e73901dac02dc4e9224b7d1b2c78..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/nbs/styles.css +++ /dev/null @@ -1,37 +0,0 @@ -.cell { - margin-bottom: 1rem; -} - -.cell > .sourceCode { - margin-bottom: 0; -} - -.cell-output > pre { - margin-bottom: 0; -} - -.cell-output > pre, .cell-output > .sourceCode > pre, .cell-output-stdout > pre { - margin-left: 0.8rem; - margin-top: 0; - background: none; - border-left: 2px solid lightsalmon; - border-top-left-radius: 0; - border-top-right-radius: 0; -} - -.cell-output > .sourceCode { - border: none; -} - -.cell-output > .sourceCode { - background: none; - margin-top: 0; -} - -div.description { - padding-left: 2px; - padding-top: 5px; - font-style: italic; - font-size: 135%; - opacity: 70%; -} diff --git a/spaces/dachenchen/HiWantJoin/ChuanhuChatbot.py b/spaces/dachenchen/HiWantJoin/ChuanhuChatbot.py deleted file mode 100644 index c58896527ff5fc15650a6b1d9bbc1506988efb4b..0000000000000000000000000000000000000000 --- a/spaces/dachenchen/HiWantJoin/ChuanhuChatbot.py +++ /dev/null @@ -1,470 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.models import get_model - - -gr.Chatbot._postprocess_chat_messages = postprocess_chat_messages -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -def create_new_model(): - return get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0] - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_question = gr.State("") - user_api_key = gr.State(my_api_key) - current_model = gr.State(create_new_model) - - topic = gr.State(i18n("未命名对话历史记录")) - - with gr.Row(): - gr.HTML(CHUANHU_TITLE, elem_id="app_title") - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - with gr.Row(elem_id="float_display"): - user_info = gr.Markdown(value="getting user info...", elem_id="user_info") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(min_width=225, scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, 
placeholder=i18n("在这里输入") - ).style(container=False) - with gr.Column(min_width=42, scale=1): - submitBtn = gr.Button(value="", variant="primary", elem_id="submit_btn") - cancelBtn = gr.Button(value="", variant="secondary", visible=False, elem_id="cancel_btn") - with gr.Row(): - emptyBtn = gr.Button( - i18n("🧹 新的对话"), - ) - retryBtn = gr.Button(i18n("🔄 重新生成")) - delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话")) - delLastBtn = gr.Button(i18n("🗑️ 删除最新对话")) - with gr.Row(visible=False) as like_dislike_area: - with gr.Column(min_width=20, scale=1): - likeBtn = gr.Button(i18n("👍")) - with gr.Column(min_width=20, scale=1): - dislikeBtn = gr.Button(i18n("👎")) - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label=i18n("模型")): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"Your API-key...", - value=hide_middle_chars(user_api_key.value), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown(i18n("多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage_display", elem_classes="insert_block") - else: - usageTxt = gr.Markdown(i18n("**发送消息** 或 **提交key** 以显示额度"), elem_id="usage_display", elem_classes="insert_block") - model_select_dropdown = gr.Dropdown( - label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True - ) - lora_select_dropdown = gr.Dropdown( - label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False - ) - with gr.Row(): - use_streaming_checkbox = gr.Checkbox( - label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION - ) - single_turn_checkbox = gr.Checkbox(label=i18n("单轮对话"), value=False) - use_websearch_checkbox = gr.Checkbox(label=i18n("使用在线搜索"), value=False) - language_select_dropdown = gr.Dropdown( - label=i18n("选择回复语言(针对搜索&索引功能)"), - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label=i18n("上传"), type="file") - two_column = gr.Checkbox(label=i18n("双栏pdf"), value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入System Prompt..."), - label="System prompt", - value=INITIAL_SYSTEM_PROMPT, - lines=10, - ).style(container=False) - with gr.Accordion(label=i18n("加载Prompt模板"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label=i18n("选择Prompt模板集合文件"), - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label=i18n("从Prompt模板中加载"), - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label=i18n("保存/加载")): - with gr.Accordion(label=i18n("保存/加载对话历史记录"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label=i18n("从列表中加载对话"), - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - 
placeholder=i18n("设置文件名: 默认为.json,可选为.md"), - label=i18n("设置保存文件名"), - value=i18n("对话历史记录"), - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button(i18n("💾 保存对话")) - exportMarkdownBtn = gr.Button(i18n("📝 导出为Markdown")) - gr.Markdown(i18n("默认保存于history文件夹")) - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label=i18n("高级")): - gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置")) - gr.HTML(APPEARANCE_SWITCHER, elem_classes="insert_block") - with gr.Accordion(i18n("参数"), open=False): - temperature_slider = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="temperature", - ) - top_p_slider = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="top-p", - ) - n_choices_slider = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - interactive=True, - label="n choices", - ) - stop_sequence_txt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入停止符,用英文逗号隔开..."), - label="stop", - value="", - lines=1, - ) - max_context_length_slider = gr.Slider( - minimum=1, - maximum=32768, - value=2000, - step=1, - interactive=True, - label="max context", - ) - max_generation_slider = gr.Slider( - minimum=1, - maximum=32768, - value=1000, - step=1, - interactive=True, - label="max generations", - ) - presence_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="presence penalty", - ) - frequency_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="frequency penalty", - ) - logit_bias_txt = gr.Textbox( - show_label=True, - placeholder=f"word:likelihood", - label="logit bias", - value="", - lines=1, - ) - user_identifier_txt = gr.Textbox( - show_label=True, - placeholder=i18n("用于定位滥用行为"), - label=i18n("用户名"), - value=user_name.value, - lines=1, - ) - - with gr.Accordion(i18n("网络设置"), open=False, visible=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入API-Host..."), - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址")) - proxyTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入代理地址..."), - label=i18n("代理地址(示例:http://127.0.0.1:10809)"), - value="", - lines=2, - ) - changeProxyBtn = gr.Button(i18n("🔄 设置代理地址")) - default_btn = gr.Button(i18n("🔙 恢复默认设置")) - - gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description") - gr.HTML(FOOTER.format(versions=versions_html()), elem_id="footer") - demo.load(refresh_ui_elements_on_load, [current_model, model_select_dropdown], [like_dislike_area], show_progress=False) - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - current_model, - user_question, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, status_display], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=billing_info, inputs=[current_model], 
outputs=[usageTxt], show_progress=False - ) - - load_history_from_file_args = dict( - fn=load_chat_history, - inputs=[current_model, historyFileSelectDropdown, chatbot, user_name], - outputs=[saveFileName, systemPromptTxt, chatbot] - ) - - - # Chatbot - cancelBtn.click(interrupt, [current_model], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - index_files.change(handle_file_upload, [current_model, index_files, chatbot], [index_files, chatbot, status_display]) - - emptyBtn.click( - reset, - inputs=[current_model], - outputs=[chatbot, status_display], - show_progress=True, - ) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - current_model, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - [chatbot, status_display], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [current_model], - [status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [current_model, chatbot], - [chatbot, status_display], - show_progress=False - ) - - likeBtn.click( - like, - [current_model], - [status_display], - show_progress=False - ) - - dislikeBtn.click( - dislike, - [current_model], - [status_display], - show_progress=False - ) - - two_column.change(update_doc_config, [two_column], None) - - # LLM Models - keyTxt.change(set_key, [current_model, keyTxt], [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - single_turn_checkbox.change(set_single_turn, [current_model, single_turn_checkbox], None) - model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display, lora_select_dropdown], show_progress=True) - model_select_dropdown.change(toggle_like_btn_visibility, [model_select_dropdown], [like_dislike_area], show_progress=False) - lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display], show_progress=True) - - # Template - systemPromptTxt.change(set_system_prompt, [current_model, systemPromptTxt], None) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - historyFileSelectDropdown.change(**load_history_from_file_args) - downloadFile.change(**load_history_from_file_args) - - # Advanced - 
max_context_length_slider.change(set_token_upper_limit, [current_model, max_context_length_slider], None) - temperature_slider.change(set_temperature, [current_model, temperature_slider], None) - top_p_slider.change(set_top_p, [current_model, top_p_slider], None) - n_choices_slider.change(set_n_choices, [current_model, n_choices_slider], None) - stop_sequence_txt.change(set_stop_sequence, [current_model, stop_sequence_txt], None) - max_generation_slider.change(set_max_tokens, [current_model, max_generation_slider], None) - presence_penalty_slider.change(set_presence_penalty, [current_model, presence_penalty_slider], None) - frequency_penalty_slider.change(set_frequency_penalty, [current_model, frequency_penalty_slider], None) - logit_bias_txt.change(set_logit_bias, [current_model, logit_bias_txt], None) - user_identifier_txt.change(set_user_identifier, [current_model, user_identifier_txt], None) - - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_host, - [apihostTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = i18n("川虎Chat 🚀") - -if __name__ == "__main__": - reload_javascript() - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - favicon_path="./assets/favicon.ico", - ) - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/io.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/io.py deleted file mode 100644 index ea1575a2db5a8a45b60aece1e64f7ff3307714e8..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/io.py +++ /dev/null @@ -1,118 +0,0 @@ -import os - -import requests -import torch.distributed as dist -import torchvision.utils - -from util.distributed import is_master - - -def save_pilimage_in_jpeg(fullname, output_img): - r"""Save PIL Image to JPEG. - - Args: - fullname (str): Full save path. - output_img (PIL Image): Image to be saved. - """ - dirname = os.path.dirname(fullname) - os.makedirs(dirname, exist_ok=True) - output_img.save(fullname, 'JPEG', quality=99) - - -def save_intermediate_training_results( - visualization_images, logdir, current_epoch, current_iteration): - r"""Save intermediate training results for debugging purpose. - - Args: - visualization_images (tensor): Image where pixel values are in [-1, 1]. - logdir (str): Where to save the image. - current_epoch (int): Current training epoch. - current_iteration (int): Current training iteration. 
- """ - visualization_images = (visualization_images + 1) / 2 - output_filename = os.path.join( - logdir, 'images', - 'epoch_{:05}iteration{:09}.jpg'.format( - current_epoch, current_iteration)) - print('Save output images to {}'.format(output_filename)) - os.makedirs(os.path.dirname(output_filename), exist_ok=True) - image_grid = torchvision.utils.make_grid( - visualization_images.data, nrow=1, padding=0, normalize=False) - torchvision.utils.save_image(image_grid, output_filename, nrow=1) - - -def download_file_from_google_drive(file_id, destination): - r"""Download a file from the google drive by using the file ID. - - Args: - file_id: Google drive file ID - destination: Path to save the file. - - Returns: - - """ - URL = "https://docs.google.com/uc?export=download" - session = requests.Session() - response = session.get(URL, params={'id': file_id}, stream=True) - token = get_confirm_token(response) - if token: - params = {'id': file_id, 'confirm': token} - response = session.get(URL, params=params, stream=True) - save_response_content(response, destination) - - -def get_confirm_token(response): - r"""Get confirm token - - Args: - response: Check if the file exists. - - Returns: - - """ - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - return None - - -def save_response_content(response, destination): - r"""Save response content - - Args: - response: - destination: Path to save the file. - - Returns: - - """ - chunk_size = 32768 - with open(destination, "wb") as f: - for chunk in response.iter_content(chunk_size): - if chunk: - f.write(chunk) - - -def get_checkpoint(checkpoint_path, url=''): - r"""Get the checkpoint path. If it does not exist yet, download it from - the url. - - Args: - checkpoint_path (str): Checkpoint path. - url (str): URL to download checkpoint. - Returns: - (str): Full checkpoint path. 
- """ - if 'TORCH_HOME' not in os.environ: - os.environ['TORCH_HOME'] = os.getcwd() - save_dir = os.path.join(os.environ['TORCH_HOME'], 'checkpoints') - os.makedirs(save_dir, exist_ok=True) - full_checkpoint_path = os.path.join(save_dir, checkpoint_path) - if not os.path.exists(full_checkpoint_path): - os.makedirs(os.path.dirname(full_checkpoint_path), exist_ok=True) - if is_master(): - print('Download {}'.format(url)) - download_file_from_google_drive(url, full_checkpoint_path) - if dist.is_available() and dist.is_initialized(): - dist.barrier() - return full_checkpoint_path diff --git a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/frames_dataset.py b/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/frames_dataset.py deleted file mode 100644 index 3dac34aa7a48422a5c241d8708893fabe69d17de..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/frames_dataset.py +++ /dev/null @@ -1,451 +0,0 @@ -import os -from skimage import io, img_as_float32, transform -from skimage.color import gray2rgb -from sklearn.model_selection import train_test_split -from imageio import mimread - -import numpy as np -from torch.utils.data import Dataset -import pandas as pd -from augmentation import AllAugmentationTransform -import glob -import pickle -import random -def read_video(name, frame_shape): - """ - Read video which can be: - - an image of concatenated frames - - '.mp4' and'.gif' - - folder with videos - """ - - if os.path.isdir(name): - frames = sorted(os.listdir(name)) - num_frames = len(frames) - video_array = np.array( - [img_as_float32(io.imread(os.path.join(name, frames[idx]))) for idx in range(num_frames)]) - elif name.lower().endswith('.png') or name.lower().endswith('.jpg'): - image = io.imread(name) - - if len(image.shape) == 2 or image.shape[2] == 1: - image = gray2rgb(image) - - if image.shape[2] == 4: - image = image[..., :3] - - image = img_as_float32(image) - - video_array = np.moveaxis(image, 1, 0) - - video_array = video_array.reshape((-1,) + frame_shape) - video_array = np.moveaxis(video_array, 1, 2) - elif name.lower().endswith('.gif') or name.lower().endswith('.mp4') or name.lower().endswith('.mov'): - video = np.array(mimread(name)) - if len(video.shape) == 3: - video = np.array([gray2rgb(frame) for frame in video]) - if video.shape[-1] == 4: - video = video[..., :3] - video_array = img_as_float32(video) - else: - raise Exception("Unknown file extensions %s" % name) - - return video_array - -def get_list(ipath,base_name): -#ipath = '/mnt/lustre/share/jixinya/LRW/pose/train_fo/' - ipath = os.path.join(ipath,base_name) - name_list = os.listdir(ipath) - image_path = os.path.join('/mnt/lustre/share/jixinya/LRW/Image/',base_name) - all = [] - for k in range(len(name_list)): - name = name_list[k] - path_ = os.path.join(ipath,name) - Dir = os.listdir(path_) - for i in range(len(Dir)): - word = Dir[i] - path = os.path.join(path_, word) - if os.path.exists(os.path.join(image_path,name,word.split('.')[0])): - all.append(name+'/'+word.split('.')[0]) - #print(k,name,i,word) - print('get list '+os.path.basename(ipath)) - return all - - -class AudioDataset(Dataset): - """ - Dataset of videos, each video can be represented as: - - an image of concatenated frames - - '.mp4' or '.gif' - - folder with all frames - """ - - def __init__(self, root_dir, frame_shape=(256, 256, 3), id_sampling=False, is_train=True, - random_seed=0, pairs_list=None, augmentation_params=None): - self.root_dir = root_dir - self.audio_dir = os.path.join(root_dir,'MFCC') - 
self.image_dir = os.path.join(root_dir,'Image') - self.landmark_dir = os.path.join(root_dir,'Landmark') - self.pose_dir = os.path.join(root_dir,'pose') - # assert len(os.listdir(self.audio_dir)) == len(os.listdir(self.image_dir)), 'audio and image length not equal' - - - df=open('../LRW/list/test_fo.txt','rb') - self.videos=pickle.load(df) - df.close() - # self.videos=np.load('../LRW/list/train_fo.npy') - # self.videos = os.listdir(self.landmark_dir) - self.frame_shape = tuple(frame_shape) - self.pairs_list = pairs_list - self.id_sampling = id_sampling - self.pca = np.load('../LRW/list/U_106.npy')[:, :16] - self.mean = np.load('../LRW/list/mean_106.npy') - - if os.path.exists(os.path.join(self.pose_dir, 'train_fo')): - assert os.path.exists(os.path.join(self.pose_dir, 'test_fo')) - print("Use predefined train-test split.") - if id_sampling: - train_videos = {os.path.basename(video).split('#')[0] for video in - os.listdir(os.path.join(self.image_dir, 'train'))} - train_videos = list(train_videos) - else: - train_videos = np.load('../LRW/list/train_fo.npy')# get_list(self.pose_dir, 'train_fo') - df=open('../LRW/list/test_fo.txt','rb') - test_videos=pickle.load(df) - df.close() - # test_videos = np.load('../LRW/list/train_fo.npy') - #get_list(self.pose_dir, 'test_fo') - # self.root_dir = os.path.join(self.root_dir, 'train' if is_train else 'test') - self.landmark_dir = os.path.join(self.landmark_dir, 'train_fo' if is_train else 'test_fo') - self.image_dir = os.path.join(self.image_dir, 'train_fo' if is_train else 'test_fo') - self.audio_dir = os.path.join(self.audio_dir, 'train' if is_train else 'test') - self.pose_dir = os.path.join(self.pose_dir, 'train_fo' if is_train else 'test_fo') - else: - print("Use random train-test split.") - train_videos, test_videos = train_test_split(self.videos, random_state=random_seed, test_size=0.2) - - if is_train: - self.videos = train_videos - else: - self.videos = test_videos - - self.is_train = is_train - - if self.is_train: - self.transform = AllAugmentationTransform(**augmentation_params) - else: - self.transform = None - - def __len__(self): - return len(self.videos) - - def __getitem__(self, idx): - if self.is_train and self.id_sampling: - name = self.videos[idx].split('.')[0] - path = np.random.choice(glob.glob(os.path.join(self.root_dir, name + '*.mp4'))) - else: - name = self.videos[idx].split('.')[0] - landmark_path = os.path.join(self.landmark_dir, name+'.npy') - - audio_path = os.path.join(self.audio_dir, name) - pose_path = os.path.join(self.pose_dir,name) - path = os.path.join(self.image_dir, name) - - video_name = os.path.basename(path) - - if os.path.isdir(path): - # if self.is_train and os.path.isdir(path): - - lmark = np.load(landmark_path).reshape(-1,212)/255 - if np.isnan(lmark).sum() or np.isinf(lmark).sum(): - print('Wrong lmark '+ video_name, file=open('log/wrong.txt', 'a')) - lmark = np.zeros((29,212)) - lmark = lmark - self.mean - lmark = np.dot(lmark, self.pca) - - # mfcc loading - - r = random.choice([x for x in range(3, 8)]) - example_landmark = lmark[r, :] - example_image = img_as_float32(io.imread(os.path.join(path, str(r)+'.png'))) - # example_mfcc = mfcc[(r - 3) * 4: (r + 4) * 4, 1:] - - mfccs = [] - for ind in range(1, 17): - # t_mfcc = mfcc[(r + ind - 3) * 4: (r + ind + 4) * 4, 1:] - try: - t_mfcc = np.load(os.path.join(audio_path,str(r + ind)+'.npy'),allow_pickle=True)[:, 1:] - if np.isnan(t_mfcc).sum() or np.isinf(t_mfcc).sum(): - print('Wrong mfcc '+ video_name+str(r+ind), file=open('log/wrong.txt', 'a')) - t_mfcc = 
np.zeros((28,13))[:,1:] - except: - t_mfcc = np.zeros((28,13))[:,1:] - mfccs.append(t_mfcc) - mfccs = np.array(mfccs) - if not self.is_train: - poses = [] - video_array = [] - for ind in range(1, 17): - # t_mfcc = mfcc[(r + ind - 3) * 4: (r + ind + 4) * 4, 1:] - t_pose = np.load(os.path.join(pose_path,str(r + ind)+'.npy'))[:-1] - poses.append(t_pose) - image = img_as_float32(io.imread(os.path.join(path, str(r + ind)+'.png'))) - video_array.append(image) - poses = np.array(poses) - video_array = np.array(video_array) - else: - poses = [] - video_array = [] - for ind in range(1, 17): - # t_mfcc = mfcc[(r + ind - 3) * 4: (r + ind + 4) * 4, 1:] - t_pose = np.load(os.path.join(self.pose_dir,name+'.npy'))[r+ind,:-1] - if np.isnan(t_pose).sum() or np.isinf(t_pose).sum(): - print('Wrong pose '+ video_name, file=open('log/wrong.txt', 'a')) - t_pose = np.zeros((6,)) - poses.append(t_pose) - image = img_as_float32(io.imread(os.path.join(path, str(r + ind)+'.png'))) - video_array.append(image) - poses = np.array(poses) - video_array = np.array(video_array) - - #mfccs = torch.FloatTensor(mfccs) - landmark = lmark[r + 1: r + 17, :] - index_32 = [0,4,8,12,16,20,24,28,32,33,35,67,68,40,42,52,55,72,73,58,61,75,76,46,47,51,84,87,90,93,98,102] - driving_landmark = np.load(landmark_path)[r + 1: r + 17, :][:,index_32] - source_landmark = np.load(landmark_path)[r, :][index_32] - else: - video_array = read_video(path, frame_shape=self.frame_shape) - num_frames = len(video_array) - frame_idx = np.sort(np.random.choice(num_frames, replace=True, size=2)) if self.is_train else range( - num_frames) - video_array = video_array[frame_idx] - - if self.transform is not None: - video_array = self.transform(video_array) - - out = {} - if True:#self.is_train: - # a = img_as_float32(io.imread('/media/thea/Data/first-order-model/images_512/102.jpg')) - # source = np.array(a, dtype='float32') - - driving = np.array(video_array, dtype='float32') - - spatial_size = np.array(driving.shape[1:3][::-1])[np.newaxis] - # example_landmark = np.array(2*example_landmark / spatial_size -1, dtype='float32') - driving_landmark = np.array(2*driving_landmark / spatial_size -1, dtype='float32') - source_landmark = np.array(2*source_landmark / spatial_size -1, dtype='float32') - driving_pose = np.array(poses, dtype='float32') - example_landmark = np.array(example_landmark, dtype='float32') - example_image = np.array(example_image, dtype='float32') - # source_cube = np.array(transform.resize(cube_array[0], (64,64)), dtype='float32') - # driving_cube = np.array(transform.resize(cube_array[1], (64,64)), dtype='float32') - # source_heatmap = np.array(heatmap_array[0] , dtype='float32') - # driving_heatmap = np.array(heatmap_array[1] , dtype='float32') - # out['source_cube'] = source_cube - # out['driving_cube'] = driving_cube - out['example_landmark'] = example_landmark - out['example_image'] = example_image.transpose((2, 0, 1)) - out['driving_landmark'] = driving_landmark - out['source_landmark'] = source_landmark - out['driving_pose'] = driving_pose - # out['source_heatmap'] = source_heatmap - # out['driving_heatmap'] = driving_heatmap - out['driving'] = driving.transpose((0, 3, 1, 2)) - # out['source'] = source.transpose((2, 0, 1)) - - # out['source_audio'] = np.array(audio_array[0], dtype='float32') - out['driving_audio'] = np.array(mfccs, dtype='float32') - out['gt_landmark'] = np.array(landmark, dtype='float32') - out['pca'] = np.array(self.pca, dtype='float32') - out['mean'] = np.array(self.mean, dtype='float32') - - - out['name'] = 
video_name - - return out - -class FramesDataset(Dataset): - """ - Dataset of videos, each video can be represented as: - - an image of concatenated frames - - '.mp4' or '.gif' - - folder with all frames - """ - - def __init__(self, root_dir, frame_shape=(256, 256, 3), id_sampling=False, is_train=True, - random_seed=0, pairs_list=None, augmentation_params=None): - self.root_dir = root_dir - self.audio_dir = os.path.join(root_dir,'audio/') - self.image_dir = os.path.join(root_dir,'image/') - self.landmark_dir = os.path.join(root_dir,'cube/') - # assert len(os.listdir(self.audio_dir)) == len(os.listdir(self.image_dir)), 'audio and image length not equal' - - - df=open('/media/thea/新加卷/MEAD/neutral/train.txt','rb') - self.videos=pickle.load(df) - df.close() - # self.videos = os.listdir(self.landmark_dir) - self.frame_shape = tuple(frame_shape) - self.pairs_list = pairs_list - self.id_sampling = id_sampling - if os.path.exists(os.path.join(self.image_dir, 'train')): - assert os.path.exists(os.path.join(self.image_dir, 'test')) - print("Use predefined train-test split.") - if id_sampling: - train_videos = {os.path.basename(video).split('#')[0] for video in - os.listdir(os.path.join(self.image_dir, 'train'))} - train_videos = list(train_videos) - else: - train_videos = os.listdir(os.path.join(self.image_dir, 'train')) - test_videos = os.listdir(os.path.join(self.image_dir, 'test')) - self.root_dir = os.path.join(self.root_dir, 'train' if is_train else 'test') - self.landmark_dir = os.path.join(self.landmark_dir, 'train' if is_train else 'test') - self.image_dir = os.path.join(self.image_dir, 'train' if is_train else 'test') - self.audio_dir = os.path.join(self.audio_dir, 'train' if is_train else 'test') - - else: - print("Use random train-test split.") - train_videos, test_videos = train_test_split(self.videos, random_state=random_seed, test_size=0.2) - - if is_train: - self.videos = train_videos - else: - self.videos = test_videos - - self.is_train = is_train - - if self.is_train: - self.transform = AllAugmentationTransform(**augmentation_params) - else: - self.transform = None - - def __len__(self): - return len(self.videos) - - def __getitem__(self, idx): - if self.is_train and self.id_sampling: - name = self.videos[idx].split('.')[0] - path = np.random.choice(glob.glob(os.path.join(self.root_dir, name + '*.mp4'))) - else: - name = self.videos[idx].split('.')[0] - landmark_path = os.path.join(self.landmark_dir, name) - - audio_path = os.path.join(self.audio_dir, name) - path = os.path.join(self.image_dir, name) - - video_name = os.path.basename(path) - - if self.is_train and os.path.isdir(path): - frames = os.listdir(audio_path) - num_frames = len(frames) - frame_idx = np.sort(np.random.choice(num_frames-1, replace=True, size=2)) - # landmark = np.load(landmark_path)#+'.npy' - # assert len(os.listdir(path)) == len(landmark), video_name+' length not equal' - video_array = [img_as_float32(io.imread(os.path.join(path, str(idx)+'.png'))) for idx in frame_idx] - cube_array = [img_as_float32(io.imread(os.path.join(landmark_path, str(idx)+'.jpg'))) for idx in frame_idx] - audio_array = [np.load(os.path.join(audio_path, str(idx)+'.npy'))[:,1:] for idx in frame_idx] - index_20 = [0,16,32,35,40,52,55,58,61,46,72,73,75,76,84,87,90,93,98,102] - index_32 = [0,4,8,12,16,20,24,28,32,33,35,67,68,40,42,52,55,72,73,58,61,75,76,46,47,51,84,87,90,93,98,102] - # landmark_array = [landmark[idx] for idx in frame_idx] - # landmark_array = [landmark[idx][index_32] for idx in frame_idx] - else: - video_array = 
read_video(path, frame_shape=self.frame_shape) - num_frames = len(video_array) - frame_idx = np.sort(np.random.choice(num_frames, replace=True, size=2)) if self.is_train else range( - num_frames) - video_array = video_array[frame_idx] - - if self.transform is not None: - video_array = self.transform(video_array) - - out = {} - if self.is_train: - # a = img_as_float32(io.imread('/media/thea/Data/first-order-model/images_512/102.jpg')) - # source = np.array(a, dtype='float32') - source = np.array(video_array[0], dtype='float32') - driving = np.array(video_array[1], dtype='float32') - - spatial_size = np.array(source.shape[:2][::-1])[np.newaxis] - # source_landmark = np.array(2*landmark_array[0] / spatial_size -1, dtype='float32') - # driving_landmark = np.array(2*landmark_array[1] / spatial_size -1, dtype='float32') - source_cube = np.array(transform.resize(cube_array[0], (64,64)), dtype='float32') - driving_cube = np.array(transform.resize(cube_array[1], (64,64)), dtype='float32') - # source_heatmap = np.array(heatmap_array[0] , dtype='float32') - # driving_heatmap = np.array(heatmap_array[1] , dtype='float32') - out['source_cube'] = source_cube - out['driving_cube'] = driving_cube - # out['source_landmark'] = source_landmark - # out['driving_landmark'] = driving_landmark - # out['source_heatmap'] = source_heatmap - # out['driving_heatmap'] = driving_heatmap - out['driving'] = driving.transpose((2, 0, 1)) - out['source'] = source.transpose((2, 0, 1)) - - out['source_audio'] = np.array(audio_array[0], dtype='float32') - out['driving_audio'] = np.array(audio_array[1], dtype='float32') - - else: - video = np.array(video_array, dtype='float32') - out['video'] = video.transpose((3, 0, 1, 2)) - - out['name'] = video_name - - return out - - -class DatasetRepeater(Dataset): - """ - Pass several times over the same dataset for better i/o performance - """ - - def __init__(self, dataset, num_repeats=100): - self.dataset = dataset - self.num_repeats = num_repeats - - def __len__(self): - return self.num_repeats * self.dataset.__len__() - - def __getitem__(self, idx): - return self.dataset[idx % self.dataset.__len__()]#% self.dataset.__len__() - - -class PairedDataset(Dataset): - """ - Dataset of pairs for animation. 
- """ - - def __init__(self, initial_dataset, number_of_pairs, seed=0): - self.initial_dataset = initial_dataset - pairs_list = self.initial_dataset.pairs_list - - np.random.seed(seed) - - if pairs_list is None: - max_idx = min(number_of_pairs, len(initial_dataset)) - nx, ny = max_idx, max_idx - xy = np.mgrid[:nx, :ny].reshape(2, -1).T - number_of_pairs = min(xy.shape[0], number_of_pairs) - self.pairs = xy.take(np.random.choice(xy.shape[0], number_of_pairs, replace=False), axis=0) - else: - videos = self.initial_dataset.videos - name_to_index = {name: index for index, name in enumerate(videos)} - pairs = pd.read_csv(pairs_list) - pairs = pairs[np.logical_and(pairs['source'].isin(videos), pairs['driving'].isin(videos))] - - number_of_pairs = min(pairs.shape[0], number_of_pairs) - self.pairs = [] - self.start_frames = [] - for ind in range(number_of_pairs): - self.pairs.append( - (name_to_index[pairs['driving'].iloc[ind]], name_to_index[pairs['source'].iloc[ind]])) - - def __len__(self): - return len(self.pairs) - - def __getitem__(self, idx): - pair = self.pairs[idx] - first = self.initial_dataset[pair[0]] - second = self.initial_dataset[pair[1]] - first = {'driving_' + key: value for key, value in first.items()} - second = {'source_' + key: value for key, value in second.items()} - - return {**first, **second} diff --git a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/README.md b/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/README.md deleted file mode 100644 index 39e180545f99d9e01d8d888a8f450e3012ad382d..0000000000000000000000000000000000000000 --- a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama2 70b Chatmodle Demo -emoji: 👀 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/danielcodex/first-prod/README.md b/spaces/danielcodex/first-prod/README.md deleted file mode 100644 index adc20c06432ab0e1edfa1b9a181966dc906a9d33..0000000000000000000000000000000000000000 --- a/spaces/danielcodex/first-prod/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: First Prod -emoji: 💻 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - -just making sure we are on the right repo \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py deleted file mode 100644 index 0ecab56a824fd3917067fd4b05c530f4abce75a3..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py +++ /dev/null @@ -1,178 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# WMF stub codec -# -# history: -# 1996-12-14 fl Created -# 2004-02-22 fl Turned into a stub driver -# 2004-02-23 fl Added EMF support -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. 
-# -# WMF/EMF reference documentation: -# https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-WMF/[MS-WMF].pdf -# http://wvware.sourceforge.net/caolan/index.html -# http://wvware.sourceforge.net/caolan/ora-wmf.html - -from . import Image, ImageFile -from ._binary import i16le as word -from ._binary import si16le as short -from ._binary import si32le as _long - -_handler = None - - -def register_handler(handler): - """ - Install application-specific WMF image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -if hasattr(Image.core, "drawwmf"): - # install default handler (windows only) - - class WmfHandler: - def open(self, im): - im.mode = "RGB" - self.bbox = im.info["wmf_bbox"] - - def load(self, im): - im.fp.seek(0) # rewind - return Image.frombytes( - "RGB", - im.size, - Image.core.drawwmf(im.fp.read(), im.size, self.bbox), - "raw", - "BGR", - (im.size[0] * 3 + 3) & -4, - -1, - ) - - register_handler(WmfHandler()) - -# -# -------------------------------------------------------------------- -# Read WMF file - - -def _accept(prefix): - return ( - prefix[:6] == b"\xd7\xcd\xc6\x9a\x00\x00" or prefix[:4] == b"\x01\x00\x00\x00" - ) - - -## -# Image plugin for Windows metafiles. - - -class WmfStubImageFile(ImageFile.StubImageFile): - format = "WMF" - format_description = "Windows Metafile" - - def _open(self): - self._inch = None - - # check placable header - s = self.fp.read(80) - - if s[:6] == b"\xd7\xcd\xc6\x9a\x00\x00": - # placeable windows metafile - - # get units per inch - self._inch = word(s, 14) - - # get bounding box - x0 = short(s, 6) - y0 = short(s, 8) - x1 = short(s, 10) - y1 = short(s, 12) - - # normalize size to 72 dots per inch - self.info["dpi"] = 72 - size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - # sanity check (standard metafile header) - if s[22:26] != b"\x01\x00\t\x00": - msg = "Unsupported WMF file format" - raise SyntaxError(msg) - - elif s[:4] == b"\x01\x00\x00\x00" and s[40:44] == b" EMF": - # enhanced metafile - - # get bounding box - x0 = _long(s, 8) - y0 = _long(s, 12) - x1 = _long(s, 16) - y1 = _long(s, 20) - - # get frame (in 0.01 millimeter units) - frame = _long(s, 24), _long(s, 28), _long(s, 32), _long(s, 36) - - size = x1 - x0, y1 - y0 - - # calculate dots per inch from bbox and frame - xdpi = 2540.0 * (x1 - y0) / (frame[2] - frame[0]) - ydpi = 2540.0 * (y1 - y0) / (frame[3] - frame[1]) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - if xdpi == ydpi: - self.info["dpi"] = xdpi - else: - self.info["dpi"] = xdpi, ydpi - - else: - msg = "Unsupported file format" - raise SyntaxError(msg) - - self.mode = "RGB" - self._size = size - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - def load(self, dpi=None): - if dpi is not None and self._inch is not None: - self.info["dpi"] = dpi - x0, y0, x1, y1 = self.info["wmf_bbox"] - self._size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - return super().load() - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "WMF save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -# -------------------------------------------------------------------- -# Registry stuff - - -Image.register_open(WmfStubImageFile.format, WmfStubImageFile, _accept) 
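The stub plugin above does not rasterize metafiles itself: `WmfStubImageFile` only parses the placeable WMF / EMF headers, and actual pixel data comes from whatever object has been installed through `register_handler` (on Windows a default handler backed by `Image.core.drawwmf` is installed automatically). A minimal sketch of how an application might plug in its own renderer is shown below; the `SolidColorWmfHandler` class and its flat-gray output are purely illustrative assumptions, not part of the deleted module.

```python
from PIL import Image, WmfImagePlugin


class SolidColorWmfHandler:
    """Hypothetical handler: renders every WMF/EMF file as a flat gray image."""

    def open(self, im):
        # Called when the stub file is opened; mode and size were already
        # filled in from the metafile header, so nothing extra is needed.
        pass

    def load(self, im):
        # Must return a PIL Image carrying the pixel data for `im`.
        return Image.new("RGB", im.size, (128, 128, 128))

    def save(self, im, fp, filename):
        # The stub's _save() delegates here; this sketch does not implement it.
        raise NotImplementedError("saving WMF/EMF is not supported by this handler")


# Install the handler so WmfStubImageFile.load() can delegate to it.
WmfImagePlugin.register_handler(SolidColorWmfHandler())

# With a handler installed, a metafile opens like any other image, e.g.:
#   img = Image.open("drawing.wmf")
#   img.load()
```

Registering a handler this way replaces any previously installed one (including the Windows default), so an application would normally do this only when it ships its own WMF renderer.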
-Image.register_save(WmfStubImageFile.format, _save) - -Image.register_extensions(WmfStubImageFile.format, [".wmf", ".emf"]) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/param_functions.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/param_functions.py deleted file mode 100644 index a43afaf311798ebde5fb265e1d47d584d807152d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/param_functions.py +++ /dev/null @@ -1,564 +0,0 @@ -from typing import Any, Callable, Dict, List, Optional, Sequence, Union - -from fastapi import params -from fastapi._compat import Undefined -from typing_extensions import Annotated, deprecated - -_Unset: Any = Undefined - - -def Path( # noqa: N802 - default: Any = ..., - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Path( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Query( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Query( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Header( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - convert_underscores: bool = True, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Header( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - convert_underscores=convert_underscores, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Cookie( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Cookie( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Body( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - embed: bool = False, - media_type: str = "application/json", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Body( - default=default, - default_factory=default_factory, - embed=embed, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Form( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - media_type: str = "application/x-www-form-urlencoded", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Form( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def File( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - media_type: str = "multipart/form-data", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.File( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Depends( # noqa: N802 - dependency: Optional[Callable[..., Any]] = None, *, use_cache: bool = True -) -> Any: - return params.Depends(dependency=dependency, use_cache=use_cache) - - -def Security( # noqa: N802 - dependency: Optional[Callable[..., Any]] = None, - *, - scopes: Optional[Sequence[str]] = None, - use_cache: bool = True, -) -> Any: - return params.Security(dependency=dependency, scopes=scopes, use_cache=use_cache) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css deleted file mode 100644 index aa7186b19dcf31452295d0d5d4dbb3b5aadb3dea..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css +++ /dev/null @@ -1 +0,0 @@ -.gallery.svelte-1ayixqk,.gallery.svelte-1viwdyg{padding:var(--size-1) var(--size-2)}div.svelte-1viwdyg{overflow:hidden;min-width:var(--local-text-width);white-space:nowrap}video.svelte-1tntsc1{flex:none;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);max-width:none}video.svelte-1tntsc1:hover,video.selected.svelte-1tntsc1{border-color:var(--border-color-accent)}.table.svelte-1tntsc1{margin:0 auto;width:var(--size-20);height:var(--size-20);object-fit:cover}.gallery.svelte-1tntsc1{max-height:var(--size-20);object-fit:cover}div.svelte-rgtszb{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.gallery.svelte-rgtszb{display:flex;align-items:center;cursor:pointer;padding:var(--size-1) var(--size-2);text-align:left}table.svelte-1cib1xd.svelte-1cib1xd{position:relative}td.svelte-1cib1xd.svelte-1cib1xd{border:1px solid var(--table-border-color);padding:var(--size-2);font-size:var(--text-sm);font-family:var(--font-mono)}.selected.svelte-1cib1xd td.svelte-1cib1xd{border-color:var(--border-color-accent)}.table.svelte-1cib1xd.svelte-1cib1xd{display:inline-block;margin:0 auto}.gallery.svelte-1cib1xd td.svelte-1cib1xd:first-child{border-left:none}.gallery.svelte-1cib1xd tr:first-child td.svelte-1cib1xd{border-top:none}.gallery.svelte-1cib1xd td.svelte-1cib1xd:last-child{border-right:none}.gallery.svelte-1cib1xd tr:last-child td.svelte-1cib1xd{border-bottom:none}.overlay.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:transparent;position:absolute;bottom:0;background:linear-gradient(to 
bottom,transparent,var(--gradient-to));width:var(--size-full);height:50%}.odd.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--table-even-background-fill)}.even.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--table-odd-background-fill)}.button.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--background-fill-primary)}div.svelte-h6ogpl{width:var(--size-10);height:var(--size-10)}.table.svelte-h6ogpl{margin:0 auto}.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)}.gallery.svelte-zvfedn{padding:var(--size-2)}pre.svelte-agpzo2{text-align:left}.gallery.svelte-agpzo2{padding:var(--size-1) var(--size-2)}.wrap.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:inline-block;width:var(--size-full);max-width:var(--size-full);color:var(--body-text-color)}.hide.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:none}.label.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;align-items:center;margin-bottom:var(--size-2);color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}svg.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{margin-right:var(--size-1)}.gallery.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;flex-wrap:wrap;gap:var(--spacing-lg)}.gallery-item.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{border:1px solid var(--border-color-primary);border-radius:var(--button-large-radius);overflow:hidden}.gallery-item.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:hover{border-color:var(--border-color-accent);background:var(--table-row-focus)}.table-wrap.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{border:1px solid var(--border-color-primary);border-radius:var(--table-radius);width:var(--size-full);table-layout:auto;overflow-x:auto;line-height:var(--line-sm)}table.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{width:var(--size-full)}.tr-head.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{box-shadow:var(--shadow-drop-lg);border-bottom:1px solid var(--border-color-primary)}.tr-head.svelte-13hsdno>.svelte-13hsdno+.svelte-13hsdno{border-right-width:0px;border-left-width:1px;border-color:var(--border-color-primary)}th.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{padding:var(--size-2);white-space:nowrap}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{cursor:pointer;border-bottom:1px solid var(--border-color-primary);background:var(--table-even-background-fill)}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:last-child{border:none}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:nth-child(odd){background:var(--table-odd-background-fill)}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:hover{background:var(--table-row-focus)}.tr-body.svelte-13hsdno>.svelte-13hsdno+.svelte-13hsdno{border-right-width:0px;border-left-width:1px;border-color:var(--border-color-primary)}.tr-body.svelte-13hsdno:hover>.svelte-13hsdno+.svelte-13hsdno{border-color:var(--border-color-accent)}td.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{padding:var(--size-2);text-align:center}.paginate.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;justify-content:center;align-items:center;gap:var(--spacing-sm);margin-top:var(--size-2);color:var(--block-label-text-color);font-size:var(--text-sm)}button.current-page.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{font-weight:var(--weight-bold)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aee9714f.js 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aee9714f.js deleted file mode 100644 index 4e4daa4a94120a216a060347ae4b84207fd001b4..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aee9714f.js +++ /dev/null @@ -1,14 +0,0 @@ -import{C as R,E as m,L as C,a as u}from"./index-6a7e443e.js";import{s as z,t as e,y as n,h as W,L as I,i as E,w as Y,z as A,d as J,f as L,a as N,A as k,b as D,B,C as H,v as K,E as b,I as M,m as F,x as OO}from"./index-7045bfe3.js";import"./index-9e76ffee.js";import"./Button-30a08c0b.js";import"./Copy-92242405.js";import"./Download-e6704cf2.js";import"./BlockLabel-9545c6da.js";import"./Empty-8e3485c0.js";const y=301,j=1,QO=2,d=302,eO=304,aO=305,iO=3,$O=4,tO=[9,10,11,12,13,32,133,160,5760,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8232,8233,8239,8287,12288],_=125,rO=59,x=47,SO=42,PO=43,nO=45,oO=new R({start:!1,shift(O,Q){return Q==iO||Q==$O||Q==eO?O:Q==aO},strict:!1}),ZO=new m((O,Q)=>{let{next:i}=O;(i==_||i==-1||Q.context)&&Q.canShift(d)&&O.acceptToken(d)},{contextual:!0,fallback:!0}),lO=new m((O,Q)=>{let{next:i}=O,a;tO.indexOf(i)>-1||i==x&&((a=O.peek(1))==x||a==SO)||i!=_&&i!=rO&&i!=-1&&!Q.context&&Q.canShift(y)&&O.acceptToken(y)},{contextual:!0}),XO=new m((O,Q)=>{let{next:i}=O;if((i==PO||i==nO)&&(O.advance(),i==O.next)){O.advance();let a=!Q.context&&Q.canShift(j);O.acceptToken(a?j:QO)}},{contextual:!0}),cO=z({"get set async static":e.modifier,"for while do if else switch try catch finally return throw break continue default case":e.controlKeyword,"in of await yield void typeof delete instanceof":e.operatorKeyword,"let var const function class extends":e.definitionKeyword,"import export from":e.moduleKeyword,"with debugger as new":e.keyword,TemplateString:e.special(e.string),super:e.atom,BooleanLiteral:e.bool,this:e.self,null:e.null,Star:e.modifier,VariableName:e.variableName,"CallExpression/VariableName TaggedTemplateExpression/VariableName":e.function(e.variableName),VariableDefinition:e.definition(e.variableName),Label:e.labelName,PropertyName:e.propertyName,PrivatePropertyName:e.special(e.propertyName),"CallExpression/MemberExpression/PropertyName":e.function(e.propertyName),"FunctionDeclaration/VariableDefinition":e.function(e.definition(e.variableName)),"ClassDeclaration/VariableDefinition":e.definition(e.className),PropertyDefinition:e.definition(e.propertyName),PrivatePropertyDefinition:e.definition(e.special(e.propertyName)),UpdateOp:e.updateOperator,LineComment:e.lineComment,BlockComment:e.blockComment,Number:e.number,String:e.string,Escape:e.escape,ArithOp:e.arithmeticOperator,LogicOp:e.logicOperator,BitOp:e.bitwiseOperator,CompareOp:e.compareOperator,RegExp:e.regexp,Equals:e.definitionOperator,Arrow:e.function(e.punctuation),": Spread":e.punctuation,"( )":e.paren,"[ ]":e.squareBracket,"{ }":e.brace,"InterpolationStart InterpolationEnd":e.special(e.brace),".":e.derefOperator,", ;":e.separator,"@":e.meta,TypeName:e.typeName,TypeDefinition:e.definition(e.typeName),"type enum interface implements namespace module declare":e.definitionKeyword,"abstract global Privacy readonly override":e.modifier,"is keyof unique infer":e.operatorKeyword,JSXAttributeValue:e.attributeValue,JSXText:e.content,"JSXStartTag JSXStartCloseTag JSXSelfCloseEndTag JSXEndTag":e.angleBracket,"JSXIdentifier JSXNameSpacedName":e.tagName,"JSXAttribute/JSXIdentifier 
JSXAttribute/JSXNameSpacedName":e.attributeName,"JSXBuiltin/JSXIdentifier":e.standard(e.tagName)}),sO={__proto__:null,export:14,as:19,from:27,default:30,async:35,function:36,extends:46,this:50,true:58,false:58,null:70,void:74,typeof:78,super:96,new:130,delete:146,yield:155,await:159,class:164,public:219,private:219,protected:219,readonly:221,instanceof:240,satisfies:243,in:244,const:246,import:278,keyof:333,unique:337,infer:343,is:379,abstract:399,implements:401,type:403,let:406,var:408,interface:415,enum:419,namespace:425,module:427,declare:431,global:435,for:456,of:465,while:468,with:472,do:476,if:480,else:482,switch:486,case:492,try:498,catch:502,finally:506,return:510,throw:514,break:518,continue:522,debugger:526},pO={__proto__:null,async:117,get:119,set:121,public:181,private:181,protected:181,static:183,abstract:185,override:187,readonly:193,accessor:195,new:383},gO={__proto__:null,"<":137},YO=C.deserialize({version:14,states:"$BhO`QUOOO%QQUOOO'TQWOOP(_OSOOO*mQ(CjO'#CfO*tOpO'#CgO+SO!bO'#CgO+bO07`O'#DZO-sQUO'#DaO.TQUO'#DlO%QQUO'#DvO0[QUO'#EOOOQ(CY'#EW'#EWO0rQSO'#ETOOQO'#I_'#I_O0zQSO'#GjOOQO'#Eh'#EhO1VQSO'#EgO1[QSO'#EgO3^Q(CjO'#JbO5}Q(CjO'#JcO6kQSO'#FVO6pQ#tO'#FnOOQ(CY'#F_'#F_O6{O&jO'#F_O7ZQ,UO'#FuO8qQSO'#FtOOQ(CY'#Jc'#JcOOQ(CW'#Jb'#JbOOQQ'#J|'#J|O8vQSO'#IOO8{Q(C[O'#IPOOQQ'#JO'#JOOOQQ'#IT'#ITQ`QUOOO%QQUO'#DnO9TQUO'#DzO%QQUO'#D|O9[QSO'#GjO9aQ,UO'#ClO9oQSO'#EfO9zQSO'#EqO:PQ,UO'#F^O:nQSO'#GjO:sQSO'#GnO;OQSO'#GnO;^QSO'#GqO;^QSO'#GrO;^QSO'#GtO9[QSO'#GwO;}QSO'#GzO=`QSO'#CbO=pQSO'#HXO=xQSO'#H_O=xQSO'#HaO`QUO'#HcO=xQSO'#HeO=xQSO'#HhO=}QSO'#HnO>SQ(C]O'#HtO%QQUO'#HvO>_Q(C]O'#HxO>jQ(C]O'#HzO8{Q(C[O'#H|O>uQ(CjO'#CfO?wQWO'#DfQOQSOOO@_QSO'#EPO9aQ,UO'#EfO@jQSO'#EfO@uQ`O'#F^OOQQ'#Cd'#CdOOQ(CW'#Dk'#DkOOQ(CW'#Jf'#JfO%QQUO'#JfOBOQWO'#E_OOQ(CW'#E^'#E^OBYQ(C`O'#E_OBtQWO'#ESOOQO'#Ji'#JiOCYQWO'#ESOCgQWO'#E_OC}QWO'#EeODQQWO'#E_O@}QWO'#E_OBtQWO'#E_PDkO?MpO'#C`POOO)CDm)CDmOOOO'#IU'#IUODvOpO,59ROOQ(CY,59R,59ROOOO'#IV'#IVOEUO!bO,59RO%QQUO'#D]OOOO'#IX'#IXOEdO07`O,59uOOQ(CY,59u,59uOErQUO'#IYOFVQSO'#JdOHXQbO'#JdO+pQUO'#JdOH`QSO,59{OHvQSO'#EhOITQSO'#JqOI`QSO'#JpOI`QSO'#JpOIhQSO,5;UOImQSO'#JoOOQ(CY,5:W,5:WOItQUO,5:WOKuQ(CjO,5:bOLfQSO,5:jOLkQSO'#JmOMeQ(C[O'#JnO:sQSO'#JmOMlQSO'#JmOMtQSO,5;TOMyQSO'#JmOOQ(CY'#Cf'#CfO%QQUO'#EOONmQ`O,5:oOOQO'#Jj'#JjOOQO-E<]-E<]O9[QSO,5=UO! TQSO,5=UO! 
YQUO,5;RO!#]Q,UO'#EcO!$pQSO,5;RO!&YQ,UO'#DpO!&aQUO'#DuO!&kQWO,5;[O!&sQWO,5;[O%QQUO,5;[OOQQ'#E}'#E}OOQQ'#FP'#FPO%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]O%QQUO,5;]OOQQ'#FT'#FTO!'RQUO,5;nOOQ(CY,5;s,5;sOOQ(CY,5;t,5;tO!)UQSO,5;tOOQ(CY,5;u,5;uO%QQUO'#IeO!)^Q(C[O,5jOOQQ'#JW'#JWOOQQ,5>k,5>kOOQQ-EgQWO'#EkOOQ(CW'#Jo'#JoO!>nQ(C[O'#J}O8{Q(C[O,5=YO;^QSO,5=`OOQO'#Cr'#CrO!>yQWO,5=]O!?RQ,UO,5=^O!?^QSO,5=`O!?cQ`O,5=cO=}QSO'#G|O9[QSO'#HOO!?kQSO'#HOO9aQ,UO'#HRO!?pQSO'#HROOQQ,5=f,5=fO!?uQSO'#HSO!?}QSO'#ClO!@SQSO,58|O!@^QSO,58|O!BfQUO,58|OOQQ,58|,58|O!BsQ(C[O,58|O%QQUO,58|O!COQUO'#HZOOQQ'#H['#H[OOQQ'#H]'#H]O`QUO,5=sO!C`QSO,5=sO`QUO,5=yO`QUO,5={O!CeQSO,5=}O`QUO,5>PO!CjQSO,5>SO!CoQUO,5>YOOQQ,5>`,5>`O%QQUO,5>`O8{Q(C[O,5>bOOQQ,5>d,5>dO!GvQSO,5>dOOQQ,5>f,5>fO!GvQSO,5>fOOQQ,5>h,5>hO!G{QWO'#DXO%QQUO'#JfO!HjQWO'#JfO!IXQWO'#DgO!IjQWO'#DgO!K{QUO'#DgO!LSQSO'#JeO!L[QSO,5:QO!LaQSO'#ElO!LoQSO'#JrO!LwQSO,5;VO!L|QWO'#DgO!MZQWO'#EROOQ(CY,5:k,5:kO%QQUO,5:kO!MbQSO,5:kO=}QSO,5;QO!;xQWO,5;QO!tO+pQUO,5>tOOQO,5>z,5>zO#$vQUO'#IYOOQO-EtO$8XQSO1G5jO$8aQSO1G5vO$8iQbO1G5wO:sQSO,5>zO$8sQSO1G5sO$8sQSO1G5sO:sQSO1G5sO$8{Q(CjO1G5tO%QQUO1G5tO$9]Q(C[O1G5tO$9nQSO,5>|O:sQSO,5>|OOQO,5>|,5>|O$:SQSO,5>|OOQO-E<`-E<`OOQO1G0]1G0]OOQO1G0_1G0_O!)XQSO1G0_OOQQ7+([7+([O!#]Q,UO7+([O%QQUO7+([O$:bQSO7+([O$:mQ,UO7+([O$:{Q(CjO,59nO$=TQ(CjO,5UOOQQ,5>U,5>UO%QQUO'#HkO%&qQSO'#HmOOQQ,5>[,5>[O:sQSO,5>[OOQQ,5>^,5>^OOQQ7+)`7+)`OOQQ7+)f7+)fOOQQ7+)j7+)jOOQQ7+)l7+)lO%&vQWO1G5lO%'[Q$IUO1G0rO%'fQSO1G0rOOQO1G/m1G/mO%'qQ$IUO1G/mO=}QSO1G/mO!'RQUO'#DgOOQO,5>u,5>uOOQO-E{,5>{OOQO-E<_-E<_O!;xQWO1G/mOOQO-E<[-E<[OOQ(CY1G0X1G0XOOQ(CY7+%q7+%qO!MeQSO7+%qOOQ(CY7+&W7+&WO=}QSO7+&WO!;xQWO7+&WOOQO7+%t7+%tO$7kQ(CjO7+&POOQO7+&P7+&PO%QQUO7+&PO%'{Q(C[O7+&PO=}QSO7+%tO!;xQWO7+%tO%(WQ(C[O7+&POBtQWO7+%tO%(fQ(C[O7+&PO%(zQ(C`O7+&PO%)UQWO7+%tOBtQWO7+&PO%)cQWO7+&PO%)yQSO7++_O%)yQSO7++_O%*RQ(CjO7++`O%QQUO7++`OOQO1G4h1G4hO:sQSO1G4hO%*cQSO1G4hOOQO7+%y7+%yO!MeQSO<vOOQO-EwO%QQUO,5>wOOQO-ESQ$IUO1G0wO%>ZQ$IUO1G0wO%@RQ$IUO1G0wO%@fQ(CjO<VOOQQ,5>X,5>XO&#WQSO1G3vO:sQSO7+&^O!'RQUO7+&^OOQO7+%X7+%XO&#]Q$IUO1G5wO=}QSO7+%XOOQ(CY<zAN>zO%QQUOAN?VO=}QSOAN>zO&<^Q(C[OAN?VO!;xQWOAN>zO&zO&RO!V+iO^(qX'j(qX~O#W+mO'|%OO~Og+pO!X$yO'|%OO~O!X+rO~Oy+tO!XXO~O!t+yO~Ob,OO~O's#jO!W(sP~Ob%lO~O%a!OO's%|O~PRO!V,yO!W(fa~O!W2SO~P'TO^%^O#W2]O'j%^O~O^%^O!a#rO#W2]O'j%^O~O^%^O!a#rO!h%ZO!l2aO#W2]O'j%^O'|%OO(`'dO~O!]2bO!^2bO't!iO~PBtO![2eO!]2bO!^2bO#S2fO#T2fO't!iO~PBtO![2eO!]2bO!^2bO#P2gO#S2fO#T2fO't!iO~PBtO^%^O!a#rO!l2aO#W2]O'j%^O(`'dO~O^%^O'j%^O~P!3jO!V$^Oo$ja~O!S&|i!V&|i~P!3jO!V'xO!S(Wi~O!V(PO!S(di~O!S(ei!V(ei~P!3jO!V(]O!g(ai~O!V(bi!g(bi^(bi'j(bi~P!3jO#W2kO!V(bi!g(bi^(bi'j(bi~O|%vO!X%wO!x]O#a2nO#b2mO's%eO~O|%vO!X%wO#b2mO's%eO~Og2uO!X'QO%`2tO~Og2uO!X'QO%`2tO'|%OO~O#cvaPvaXva^vakva!eva!fva!hva!lva#fva#gva#hva#iva#jva#kva#lva#mva#nva#pva#rva#tva#uva'jva(Qva(`va!gva!Sva'hvaova!Xva%`va!ava~P#M{O#c$kaP$kaX$ka^$kak$kaz$ka!e$ka!f$ka!h$ka!l$ka#f$ka#g$ka#h$ka#i$ka#j$ka#k$ka#l$ka#m$ka#n$ka#p$ka#r$ka#t$ka#u$ka'j$ka(Q$ka(`$ka!g$ka!S$ka'h$kao$ka!X$ka%`$ka!a$ka~P#NqO#c$maP$maX$ma^$mak$maz$ma!e$ma!f$ma!h$ma!l$ma#f$ma#g$ma#h$ma#i$ma#j$ma#k$ma#l$ma#m$ma#n$ma#p$ma#r$ma#t$ma#u$ma'j$ma(Q$ma(`$ma!g$ma!S$ma'h$mao$ma!X$ma%`$ma!a$ma~P$ 
dO#c${aP${aX${a^${ak${az${a!V${a!e${a!f${a!h${a!l${a#f${a#g${a#h${a#i${a#j${a#k${a#l${a#m${a#n${a#p${a#r${a#t${a#u${a'j${a(Q${a(`${a!g${a!S${a'h${a#W${ao${a!X${a%`${a!a${a~P#(yO^#Zq!V#Zq'j#Zq'h#Zq!S#Zq!g#Zqo#Zq!X#Zq%`#Zq!a#Zq~P!3jOd'OX!V'OX~P!$uO!V._Od(Za~O!U2}O!V'PX!g'PX~P%QO!V.bO!g([a~O!V.bO!g([a~P!3jO!S3QO~O#x!ja!W!ja~PI{O#x!ba!V!ba!W!ba~P#?dO#x!na!W!na~P!6TO#x!pa!W!pa~P!8nO!X3dO$TfO$^3eO~O!W3iO~Oo3jO~P#(yO^$gq!V$gq'j$gq'h$gq!S$gq!g$gqo$gq!X$gq%`$gq!a$gq~P!3jO!S3kO~Ol.}O'uTO'xUO~Oy)sO|)tO(h)xOg%Wi(g%Wi!V%Wi#W%Wi~Od%Wi#x%Wi~P$HbOy)sO|)tOg%Yi(g%Yi(h%Yi!V%Yi#W%Yi~Od%Yi#x%Yi~P$ITO(`$WO~P#(yO!U3nO's%eO!V'YX!g'YX~O!V/VO!g(ma~O!V/VO!a#rO!g(ma~O!V/VO!a#rO(`'dO!g(ma~Od$ti!V$ti#W$ti#x$ti~P!-jO!U3vO's*UO!S'[X!V'[X~P!.XO!V/_O!S(na~O!V/_O!S(na~P#(yO!a#rO~O!a#rO#n4OO~Ok4RO!a#rO(`'dO~Od(Oi!V(Oi~P!-jO#W4UOd(Oi!V(Oi~P!-jO!g4XO~O^$hq!V$hq'j$hq'h$hq!S$hq!g$hqo$hq!X$hq%`$hq!a$hq~P!3jO!V4]O!X(oX~P#(yO!f#tO~P3zO!X$rX%TYX^$rX!V$rX'j$rX~P!,aO%T4_OghXyhX|hX!XhX(ghX(hhX^hX!VhX'jhX~O%T4_O~O%a4fO's+WO'uTO'xUO!V'eX!W'eX~O!V0_O!W(ua~OX4jO~O]4kO~O!S4oO~O^%^O'j%^O~P#(yO!X$yO~P#(yO!V4tO#W4vO!W(rX~O!W4wO~Ol!kO|4yO![5WO!]4}O!^4}O!x;oO!|5VO!}5UO#O5UO#P5TO#S5SO#T!wO't!iO'uTO'xUO(T!jO(_!nO~O!W5RO~P%#XOg5]O!X0zO%`5[O~Og5]O!X0zO%`5[O'|%OO~O's#jO!V'dX!W'dX~O!V1VO!W(sa~O'uTO'xUO(T5fO~O]5jO~O!g5mO~P%QO^5oO~O^5oO~P%QO#n5qO&Q5rO~PMPO_1mO!W5vO&`1lO~P`O!a5xO~O!a5zO!V(Yi!W(Yi!a(Yi!h(Yi'|(Yi~O!V#`i!W#`i~P#?dO#W5{O!V#`i!W#`i~O!V!Zi!W!Zi~P#?dO^%^O#W6UO'j%^O~O^%^O!a#rO#W6UO'j%^O~O^%^O!a#rO!l6ZO#W6UO'j%^O(`'dO~O!h%ZO'|%OO~P%(fO!]6[O!^6[O't!iO~PBtO![6_O!]6[O!^6[O#S6`O#T6`O't!iO~PBtO!V(]O!g(aq~O!V(bq!g(bq^(bq'j(bq~P!3jO|%vO!X%wO#b6dO's%eO~O!X'QO%`6gO~Og6jO!X'QO%`6gO~O#c%WiP%WiX%Wi^%Wik%Wiz%Wi!e%Wi!f%Wi!h%Wi!l%Wi#f%Wi#g%Wi#h%Wi#i%Wi#j%Wi#k%Wi#l%Wi#m%Wi#n%Wi#p%Wi#r%Wi#t%Wi#u%Wi'j%Wi(Q%Wi(`%Wi!g%Wi!S%Wi'h%Wio%Wi!X%Wi%`%Wi!a%Wi~P$HbO#c%YiP%YiX%Yi^%Yik%Yiz%Yi!e%Yi!f%Yi!h%Yi!l%Yi#f%Yi#g%Yi#h%Yi#i%Yi#j%Yi#k%Yi#l%Yi#m%Yi#n%Yi#p%Yi#r%Yi#t%Yi#u%Yi'j%Yi(Q%Yi(`%Yi!g%Yi!S%Yi'h%Yio%Yi!X%Yi%`%Yi!a%Yi~P$ITO#c$tiP$tiX$ti^$tik$tiz$ti!V$ti!e$ti!f$ti!h$ti!l$ti#f$ti#g$ti#h$ti#i$ti#j$ti#k$ti#l$ti#m$ti#n$ti#p$ti#r$ti#t$ti#u$ti'j$ti(Q$ti(`$ti!g$ti!S$ti'h$ti#W$tio$ti!X$ti%`$ti!a$ti~P#(yOd'Oa!V'Oa~P!-jO!V'Pa!g'Pa~P!3jO!V.bO!g([i~O#x#Zi!V#Zi!W#Zi~P#?dOP$YOy#vOz#wO|#xO!f#tO!h#uO!l$YO(QVOX#eik#ei!e#ei#g#ei#h#ei#i#ei#j#ei#k#ei#l#ei#m#ei#n#ei#p#ei#r#ei#t#ei#u#ei#x#ei(`#ei(g#ei(h#ei!V#ei!W#ei~O#f#ei~P%2xO#f;wO~P%2xOP$YOy#vOz#wO|#xO!f#tO!h#uO!l$YO#f;wO#g;xO#h;xO#i;xO(QVOX#ei!e#ei#j#ei#k#ei#l#ei#m#ei#n#ei#p#ei#r#ei#t#ei#u#ei#x#ei(`#ei(g#ei(h#ei!V#ei!W#ei~Ok#ei~P%5TOk;yO~P%5TOP$YOk;yOy#vOz#wO|#xO!f#tO!h#uO!l$YO#f;wO#g;xO#h;xO#i;xO#j;zO(QVO#p#ei#r#ei#t#ei#u#ei#x#ei(`#ei(g#ei(h#ei!V#ei!W#ei~OX#ei!e#ei#k#ei#l#ei#m#ei#n#ei~P%7`OXbO^#vy!V#vy'j#vy'h#vy!S#vy!g#vyo#vy!X#vy%`#vy!a#vy~P!3jOg=jOy)sO|)tO(g)vO(h)xO~OP#eiX#eik#eiz#ei!e#ei!f#ei!h#ei!l#ei#f#ei#g#ei#h#ei#i#ei#j#ei#k#ei#l#ei#m#ei#n#ei#p#ei#r#ei#t#ei#u#ei#x#ei(Q#ei(`#ei!V#ei!W#ei~P%AYO!f#tOP(PXX(PXg(PXk(PXy(PXz(PX|(PX!e(PX!h(PX!l(PX#f(PX#g(PX#h(PX#i(PX#j(PX#k(PX#l(PX#m(PX#n(PX#p(PX#r(PX#t(PX#u(PX#x(PX(Q(PX(`(PX(g(PX(h(PX!V(PX!W(PX~O#x#yi!V#yi!W#yi~P#?dO#x!ni!W!ni~P$!qO!W6vO~O!V'Xa!W'Xa~P#?dO!a#rO(`'dO!V'Ya!g'Ya~O!V/VO!g(mi~O!V/VO!a#rO!g(mi~Od$tq!V$tq#W$tq#x$tq~P!-jO!S'[a!V'[a~P#(yO!a6}O~O!V/_O!S(ni~P#(yO!V/_O!S(ni~O!S7RO~O!a#rO#n7WO~Ok7XO!a#rO(`'dO~O!S7ZO~Od$vq!V$vq#W$vq#x$vq~P!-jO^$hy!V$hy'j$hy'h$hy!S$hy!g$hyo$hy!X$hy%`$hy!a$hy~P!3jO!V4]O!X(oa~O^#Zy!V#Zy'j#Zy'h#Zy!S#Zy!g#Zyo#Zy!X#Zy%`#Zy!a#Zy~P!3jOX7`O~O!V0_O!W(ui~O]7fO~O!a5zO~O(T(qO!V'aX!W'aX~O!V4tO!W(ra~O!h%ZO'|%OO^(YX!a(YX!l(YX#W(YX'j(YX(`(YX~O's7oO~P.[O!x;oO!|7rO!}7qO#O7qO
#P7pO#S'bO#T'bO~PBtO^%^O!a#rO!l'hO#W'fO'j%^O(`'dO~O!W7vO~P%#XOl!kO'uTO'xUO(T!jO(_!nO~O|7wO~P%MdO![7{O!]7zO!^7zO#P7pO#S'bO#T'bO't!iO~PBtO![7{O!]7zO!^7zO!}7|O#O7|O#P7pO#S'bO#T'bO't!iO~PBtO!]7zO!^7zO't!iO(T!jO(_!nO~O!X0zO~O!X0zO%`8OO~Og8RO!X0zO%`8OO~OX8WO!V'da!W'da~O!V1VO!W(si~O!g8[O~O!g8]O~O!g8^O~O!g8^O~P%QO^8`O~O!a8cO~O!g8dO~O!V(ei!W(ei~P#?dO^%^O#W8lO'j%^O~O^%^O!a#rO#W8lO'j%^O~O^%^O!a#rO!l8pO#W8lO'j%^O(`'dO~O!h%ZO'|%OO~P&$QO!]8qO!^8qO't!iO~PBtO!V(]O!g(ay~O!V(by!g(by^(by'j(by~P!3jO!X'QO%`8uO~O#c$tqP$tqX$tq^$tqk$tqz$tq!V$tq!e$tq!f$tq!h$tq!l$tq#f$tq#g$tq#h$tq#i$tq#j$tq#k$tq#l$tq#m$tq#n$tq#p$tq#r$tq#t$tq#u$tq'j$tq(Q$tq(`$tq!g$tq!S$tq'h$tq#W$tqo$tq!X$tq%`$tq!a$tq~P#(yO#c$vqP$vqX$vq^$vqk$vqz$vq!V$vq!e$vq!f$vq!h$vq!l$vq#f$vq#g$vq#h$vq#i$vq#j$vq#k$vq#l$vq#m$vq#n$vq#p$vq#r$vq#t$vq#u$vq'j$vq(Q$vq(`$vq!g$vq!S$vq'h$vq#W$vqo$vq!X$vq%`$vq!a$vq~P#(yO!V'Pi!g'Pi~P!3jO#x#Zq!V#Zq!W#Zq~P#?dOy/yOz/yO|/zOPvaXvagvakva!eva!fva!hva!lva#fva#gva#hva#iva#jva#kva#lva#mva#nva#pva#rva#tva#uva#xva(Qva(`va(gva(hva!Vva!Wva~Oy)sO|)tOP$kaX$kag$kak$kaz$ka!e$ka!f$ka!h$ka!l$ka#f$ka#g$ka#h$ka#i$ka#j$ka#k$ka#l$ka#m$ka#n$ka#p$ka#r$ka#t$ka#u$ka#x$ka(Q$ka(`$ka(g$ka(h$ka!V$ka!W$ka~Oy)sO|)tOP$maX$mag$mak$maz$ma!e$ma!f$ma!h$ma!l$ma#f$ma#g$ma#h$ma#i$ma#j$ma#k$ma#l$ma#m$ma#n$ma#p$ma#r$ma#t$ma#u$ma#x$ma(Q$ma(`$ma(g$ma(h$ma!V$ma!W$ma~OP${aX${ak${az${a!e${a!f${a!h${a!l${a#f${a#g${a#h${a#i${a#j${a#k${a#l${a#m${a#n${a#p${a#r${a#t${a#u${a#x${a(Q${a(`${a!V${a!W${a~P%AYO#x$gq!V$gq!W$gq~P#?dO#x$hq!V$hq!W$hq~P#?dO!W9PO~O#x9QO~P!-jO!a#rO!V'Yi!g'Yi~O!a#rO(`'dO!V'Yi!g'Yi~O!V/VO!g(mq~O!S'[i!V'[i~P#(yO!V/_O!S(nq~O!S9WO~P#(yO!S9WO~Od(Oy!V(Oy~P!-jO!V'_a!X'_a~P#(yO!X%Sq^%Sq!V%Sq'j%Sq~P#(yOX9]O~O!V0_O!W(uq~O#W9aO!V'aa!W'aa~O!V4tO!W(ri~P#?dOPYXXYXkYXyYXzYX|YX!SYX!VYX!eYX!fYX!hYX!lYX#WYX#ccX#fYX#gYX#hYX#iYX#jYX#kYX#lYX#mYX#nYX#pYX#rYX#tYX#uYX#zYX(QYX(`YX(gYX(hYX~O!a%QX#n%QX~P&6lO#S-cO#T-cO~PBtO#P9eO#S-cO#T-cO~PBtO!}9fO#O9fO#P9eO#S-cO#T-cO~PBtO!]9iO!^9iO't!iO(T!jO(_!nO~O![9lO!]9iO!^9iO#P9eO#S-cO#T-cO't!iO~PBtO!X0zO%`9oO~O'uTO'xUO(T9tO~O!V1VO!W(sq~O!g9wO~O!g9wO~P%QO!g9yO~O!g9zO~O#W9|O!V#`y!W#`y~O!V#`y!W#`y~P#?dO^%^O#W:QO'j%^O~O^%^O!a#rO#W:QO'j%^O~O^%^O!a#rO!l:UO#W:QO'j%^O(`'dO~O!X'QO%`:XO~O#x#vy!V#vy!W#vy~P#?dOP$tiX$tik$tiz$ti!e$ti!f$ti!h$ti!l$ti#f$ti#g$ti#h$ti#i$ti#j$ti#k$ti#l$ti#m$ti#n$ti#p$ti#r$ti#t$ti#u$ti#x$ti(Q$ti(`$ti!V$ti!W$ti~P%AYOy)sO|)tO(h)xOP%WiX%Wig%Wik%Wiz%Wi!e%Wi!f%Wi!h%Wi!l%Wi#f%Wi#g%Wi#h%Wi#i%Wi#j%Wi#k%Wi#l%Wi#m%Wi#n%Wi#p%Wi#r%Wi#t%Wi#u%Wi#x%Wi(Q%Wi(`%Wi(g%Wi!V%Wi!W%Wi~Oy)sO|)tOP%YiX%Yig%Yik%Yiz%Yi!e%Yi!f%Yi!h%Yi!l%Yi#f%Yi#g%Yi#h%Yi#i%Yi#j%Yi#k%Yi#l%Yi#m%Yi#n%Yi#p%Yi#r%Yi#t%Yi#u%Yi#x%Yi(Q%Yi(`%Yi(g%Yi(h%Yi!V%Yi!W%Yi~O#x$hy!V$hy!W$hy~P#?dO#x#Zy!V#Zy!W#Zy~P#?dO!a#rO!V'Yq!g'Yq~O!V/VO!g(my~O!S'[q!V'[q~P#(yO!S:`O~P#(yO!V0_O!W(uy~O!V4tO!W(rq~O#S2fO#T2fO~PBtO#P:gO#S2fO#T2fO~PBtO!]:kO!^:kO't!iO(T!jO(_!nO~O!X0zO%`:nO~O!g:qO~O^%^O#W:vO'j%^O~O^%^O!a#rO#W:vO'j%^O~O!X'QO%`:{O~OP$tqX$tqk$tqz$tq!e$tq!f$tq!h$tq!l$tq#f$tq#g$tq#h$tq#i$tq#j$tq#k$tq#l$tq#m$tq#n$tq#p$tq#r$tq#t$tq#u$tq#x$tq(Q$tq(`$tq!V$tq!W$tq~P%AYOP$vqX$vqk$vqz$vq!e$vq!f$vq!h$vq!l$vq#f$vq#g$vq#h$vq#i$vq#j$vq#k$vq#l$vq#m$vq#n$vq#p$vq#r$vq#t$vq#u$vq#x$vq(Q$vq(`$vq!V$vq!W$vq~P%AYOd%[!Z!V%[!Z#W%[!Z#x%[!Z~P!-jO!V'aq!W'aq~P#?dO#S6`O#T6`O~PBtO!V#`!Z!W#`!Z~P#?dO^%^O#W;ZO'j%^O~O#c%[!ZP%[!ZX%[!Z^%[!Zk%[!Zz%[!Z!V%[!Z!e%[!Z!f%[!Z!h%[!Z!l%[!Z#f%[!Z#g%[!Z#h%[!Z#i%[!Z#j%[!Z#k%[!Z#l%[!Z#m%[!Z#n%[!Z#p%[!Z#r%[!Z#t%[!Z#u%[!Z'j%[!Z(Q%[!Z(`%[!Z!g%[!Z!S%[!Z'h%[!Z#W%[!Zo%[!Z!X%[!Z%`%[!Z!a%[!Z~P#(yOP%[!ZX%[!Zk%[!Zz%[!Z!e%[!Z!f%[!Z!h%[!Z!l%[!Z#f%[!Z#g%[!Z#h%[!Z#i%[!Z#j%[!Z#k%[!Z#l%[!Z#m%[!Z#n%[!Z#p%[!Z#r%[!Z#t%[!Z#u%[!Z#x%[
!Z(Q%[!Z(`%[!Z!V%[!Z!W%[!Z~P%AYOo(UX~P1dO't!iO~P!'RO!ScX!VcX#WcX~P&6lOPYXXYXkYXyYXzYX|YX!VYX!VcX!eYX!fYX!hYX!lYX#WYX#WcX#ccX#fYX#gYX#hYX#iYX#jYX#kYX#lYX#mYX#nYX#pYX#rYX#tYX#uYX#zYX(QYX(`YX(gYX(hYX~O!acX!gYX!gcX(`cX~P'!sOP;nOQ;nOa=_Ob!fOikOk;nOlkOmkOskOu;nOw;nO|WO!QkO!RkO!XXO!c;qO!hZO!k;nO!l;nO!m;nO!o;rO!q;sO!t!eO$P!hO$TfO's)RO'uTO'xUO(QVO(_[O(l=]O~O!Vv!>v!BnPPP!BuHdPPPPPPPPPPP!FTP!GiPPHd!HyPHdPHdHdHdHdPHd!J`PP!MiP#!nP#!r#!|##Q##QP!MfP##U##UP#&ZP#&_HdHd#&e#)iAQPAQPAQAQP#*sAQAQ#,mAQ#.zAQ#0nAQAQ#1[#3W#3W#3[#3d#3W#3lP#3WPAQ#4hAQ#5pAQAQ6iPPP#6{PP#7e#7eP#7eP#7z#7ePP#8QP#7wP#7w#8d!1p#7w#9O#9U6f(}#9X(}P#9`#9`#9`P(}P(}P(}P(}PP(}P#9f#9iP#9i(}P#9mP#9pP(}P(}P(}P(}P(}P(}(}PP#9v#9|#:W#:^#:d#:j#:p#;O#;U#;[#;f#;l#b#?r#@Q#@W#@^#@d#@j#@t#@z#AQ#A[#An#AtPPPPPPPPPP#AzPPPPPPP#Bn#FYP#Gu#G|#HUPPPP#L`$ U$'t$'w$'z$)w$)z$)}$*UPP$*[$*`$+X$,X$,]$,qPP$,u$,{$-PP$-S$-W$-Z$.P$.g$.l$.o$.r$.x$.{$/P$/TR!yRmpOXr!X#a%]&d&f&g&i,^,c1g1jU!pQ'Q-OQ%ctQ%kwQ%rzQ&[!TS&x!c,vQ'W!f[']!m!r!s!t!u!vS*[$y*aQ+U%lQ+c%tQ+}&UQ,|'PQ-W'XW-`'^'_'`'aQ/p*cQ1U,OU2b-b-d-eS4}0z5QS6[2e2gU7z5U5V5WQ8q6_S9i7{7|Q:k9lR TypeParamList TypeDefinition extends ThisType this LiteralType ArithOp Number BooleanLiteral TemplateType InterpolationEnd Interpolation InterpolationStart NullType null VoidType void TypeofType typeof MemberExpression . ?. PropertyName [ TemplateString Escape Interpolation super RegExp ] ArrayExpression Spread , } { ObjectExpression Property async get set PropertyDefinition Block : NewExpression new TypeArgList CompareOp < ) ( ArgList UnaryExpression delete LogicOp BitOp YieldExpression yield AwaitExpression await ParenthesizedExpression ClassExpression class ClassBody MethodDeclaration Decorator @ MemberExpression PrivatePropertyName CallExpression Privacy static abstract override PrivatePropertyDefinition PropertyDeclaration readonly accessor Optional TypeAnnotation Equals StaticBlock FunctionExpression ArrowFunction ParamList ParamList ArrayPattern ObjectPattern PatternProperty Privacy readonly Arrow MemberExpression BinaryExpression ArithOp ArithOp ArithOp ArithOp BitOp CompareOp instanceof satisfies in const CompareOp BitOp BitOp BitOp LogicOp LogicOp ConditionalExpression LogicOp LogicOp AssignmentExpression UpdateOp PostfixExpression CallExpression TaggedTemplateExpression DynamicImport import ImportMeta JSXElement JSXSelfCloseEndTag JSXStartTag JSXSelfClosingTag JSXIdentifier JSXBuiltin JSXIdentifier JSXNamespacedName JSXMemberExpression JSXSpreadAttribute JSXAttribute JSXAttributeValue JSXEscape JSXEndTag JSXOpenTag JSXFragmentTag JSXText JSXEscape JSXStartCloseTag JSXCloseTag PrefixCast ArrowFunction TypeParamList SequenceExpression KeyofType keyof UniqueType unique ImportType InferredType infer TypeName ParenthesizedType FunctionSignature ParamList NewSignature IndexedType TupleType Label ArrayType ReadonlyType ObjectType MethodType PropertyType IndexSignature PropertyDefinition CallSignature TypePredicate is NewSignature new UnionType LogicOp IntersectionType LogicOp ConditionalType ParameterizedType ClassDeclaration abstract implements type VariableDeclaration let var TypeAliasDeclaration InterfaceDeclaration interface EnumDeclaration enum EnumBody NamespaceDeclaration namespace module AmbientDeclaration declare GlobalDeclaration global ClassDeclaration ClassBody MethodDeclaration AmbientFunctionDeclaration ExportGroup VariableName VariableName ImportDeclaration ImportGroup ForStatement for ForSpec ForInSpec ForOfSpec of WhileStatement while WithStatement with DoStatement do IfStatement if else SwitchStatement switch SwitchBody 
CaseLabel case DefaultLabel TryStatement try CatchClause catch FinallyClause finally ReturnStatement return ThrowStatement throw BreakStatement break ContinueStatement continue DebuggerStatement debugger LabeledStatement ExpressionStatement SingleExpression SingleClassItem",maxTerm:362,context:oO,nodeProps:[["group",-26,6,14,16,62,198,202,205,206,208,211,214,225,227,233,235,237,239,242,248,254,256,258,260,262,264,265,"Statement",-32,10,11,25,28,29,35,45,48,49,51,56,64,72,76,78,80,81,102,103,112,113,130,133,135,136,137,138,140,141,161,162,164,"Expression",-23,24,26,30,34,36,38,165,167,169,170,172,173,174,176,177,178,180,181,182,192,194,196,197,"Type",-3,84,95,101,"ClassItem"],["openedBy",31,"InterpolationStart",50,"[",54,"{",69,"(",142,"JSXStartTag",154,"JSXStartTag JSXStartCloseTag"],["closedBy",33,"InterpolationEnd",44,"]",55,"}",70,")",143,"JSXSelfCloseEndTag JSXEndTag",159,"JSXEndTag"]],propSources:[cO],skippedNodes:[0,3,4,268],repeatNodeCount:32,tokenData:"$>y(CSR!bOX%ZXY+gYZ-yZ[+g[]%Z]^.c^p%Zpq+gqr/mrs3cst:_tu>PuvBavwDxwxGgxyMvyz! Qz{!![{|!%O|}!&]}!O!%O!O!P!'g!P!Q!1w!Q!R#0t!R![#3T![!]#@T!]!^#Aa!^!_#Bk!_!`#GS!`!a#In!a!b#N{!b!c$$z!c!}>P!}#O$&U#O#P$'`#P#Q$,w#Q#R$.R#R#S>P#S#T$/`#T#o$0j#o#p$4z#p#q$5p#q#r$7Q#r#s$8^#s$f%Z$f$g+g$g#BY>P#BY#BZ$9h#BZ$IS>P$IS$I_$9h$I_$I|>P$I|$I}$P$JT$JU$9h$JU$KV>P$KV$KW$9h$KW&FU>P&FU&FV$9h&FV;'S>P;'S;=`BZ<%l?HT>P?HT?HU$9h?HUO>P(n%d_$c&j'vp'y!bOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z&j&hT$c&jO!^&c!_#o&c#p;'S&c;'S;=`&w<%lO&c&j&zP;=`<%l&c'|'U]$c&j'y!bOY&}YZ&cZw&}wx&cx!^&}!^!_'}!_#O&}#O#P&c#P#o&}#o#p'}#p;'S&};'S;=`(l<%lO&}!b(SU'y!bOY'}Zw'}x#O'}#P;'S'};'S;=`(f<%lO'}!b(iP;=`<%l'}'|(oP;=`<%l&}'[(y]$c&j'vpOY(rYZ&cZr(rrs&cs!^(r!^!_)r!_#O(r#O#P&c#P#o(r#o#p)r#p;'S(r;'S;=`*a<%lO(rp)wU'vpOY)rZr)rs#O)r#P;'S)r;'S;=`*Z<%lO)rp*^P;=`<%l)r'[*dP;=`<%l(r#S*nX'vp'y!bOY*gZr*grs'}sw*gwx)rx#O*g#P;'S*g;'S;=`+Z<%lO*g#S+^P;=`<%l*g(n+dP;=`<%l%Z(CS+rq$c&j'vp'y!b'l(;dOX%ZXY+gYZ&cZ[+g[p%Zpq+gqr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_#O%Z#O#P&c#P#o%Z#o#p*g#p$f%Z$f$g+g$g#BY%Z#BY#BZ+g#BZ$IS%Z$IS$I_+g$I_$JT%Z$JT$JU+g$JU$KV%Z$KV$KW+g$KW&FU%Z&FU&FV+g&FV;'S%Z;'S;=`+a<%l?HT%Z?HT?HU+g?HUO%Z(CS.ST'w#S$c&j'm(;dO!^&c!_#o&c#p;'S&c;'S;=`&w<%lO&c(CS.n_$c&j'vp'y!b'm(;dOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%#`/x`$c&j!l$Ip'vp'y!bOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_!`0z!`#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%#S1V`#p$Id$c&j'vp'y!bOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_!`2X!`#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%#S2d_#p$Id$c&j'vp'y!bOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z$2b3l_'u$(n$c&j'y!bOY4kYZ5qZr4krs7nsw4kwx5qx!^4k!^!_8p!_#O4k#O#P5q#P#o4k#o#p8p#p;'S4k;'S;=`:X<%lO4k*r4r_$c&j'y!bOY4kYZ5qZr4krs7nsw4kwx5qx!^4k!^!_8p!_#O4k#O#P5q#P#o4k#o#p8p#p;'S4k;'S;=`:X<%lO4k)`5vX$c&jOr5qrs6cs!^5q!^!_6y!_#o5q#o#p6y#p;'S5q;'S;=`7h<%lO5q)`6jT$^#t$c&jO!^&c!_#o&c#p;'S&c;'S;=`&w<%lO&c#t6|TOr6yrs7]s;'S6y;'S;=`7b<%lO6y#t7bO$^#t#t7eP;=`<%l6y)`7kP;=`<%l5q*r7w]$^#t$c&j'y!bOY&}YZ&cZw&}wx&cx!^&}!^!_'}!_#O&}#O#P&c#P#o&}#o#p'}#p;'S&};'S;=`(l<%lO&}%W8uZ'y!bOY8pYZ6yZr8prs9hsw8pwx6yx#O8p#O#P6y#P;'S8p;'S;=`:R<%lO8p%W9oU$^#t'y!bOY'}Zw'}x#O'}#P;'S'};'S;=`(f<%lO'}%W:UP;=`<%l8p*r:[P;=`<%l4k#%|:hg$c&j'vp'y!bOY%ZYZ&cZr%Zrs&}st%Ztu`k$c&j'vp'y!b(T!LY's&;d$V#tOY%ZYZ&cZr%Zrs&}st%Ztu>Puw%Zwx(rx}%Z}!O@T!O!Q%Z!Q![>P![!^%Z!^!_*g!_!c%Z!c!}>P!}#O%Z#O#P&c#P#R%Z#R#S>P#S#T%Z#T#o>P#o#p*g#p$g%Z$g;'S>P;'S;=`BZ<%lO>P+d@`k$c&j'vp'y!b$V#tOY%ZYZ&cZr%Zrs&}st%Ztu@Tuw%Zwx(rx}%Z}!O@T!O!Q%Z!Q![@T![!^%Z!^!_*g!_!c%Z!c!}@T!}#O%Z#O#P&c#P#R%Z#R#S@T#S#T%Z#T#o@T#o#p*g
#p$g%Z$g;'S@T;'S;=`BT<%lO@T+dBWP;=`<%l@T(CSB^P;=`<%l>P%#SBl`$c&j'vp'y!b#h$IdOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_!`Cn!`#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%#SCy_$c&j#z$Id'vp'y!bOY%ZYZ&cZr%Zrs&}sw%Zwx(rx!^%Z!^!_*g!_#O%Z#O#P&c#P#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%DfETa(h%Z![!^%Z!^!_*g!_!c%Z!c!i#>Z!i#O%Z#O#P&c#P#R%Z#R#S#>Z#S#T%Z#T#Z#>Z#Z#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z$/l#>fi$c&j'vp'y!bl$'|OY%ZYZ&cZr%Zrs&}sw%Zwx(rx!Q%Z!Q![#>Z![!^%Z!^!_*g!_!c%Z!c!i#>Z!i#O%Z#O#P&c#P#R%Z#R#S#>Z#S#T%Z#T#Z#>Z#Z#b%Z#b#c#5T#c#o%Z#o#p*g#p;'S%Z;'S;=`+a<%lO%Z%Gh#@b_!a$b$c&j#x%Puw%Zwx(rx}%Z}!O@T!O!Q%Z!Q![>P![!^%Z!^!_*g!_!c%Z!c!}>P!}#O%Z#O#P&c#P#R%Z#R#S>P#S#T%Z#T#o>P#o#p*g#p$f%Z$f$g+g$g#BY>P#BY#BZ$9h#BZ$IS>P$IS$I_$9h$I_$JT>P$JT$JU$9h$JU$KV>P$KV$KW$9h$KW&FU>P&FU&FV$9h&FV;'S>P;'S;=`BZ<%l?HT>P?HT?HU$9h?HUO>P(CS$=Uk$c&j'vp'y!b'm(;d(T!LY's&;d$V#tOY%ZYZ&cZr%Zrs&}st%Ztu>Puw%Zwx(rx}%Z}!O@T!O!Q%Z!Q![>P![!^%Z!^!_*g!_!c%Z!c!}>P!}#O%Z#O#P&c#P#R%Z#R#S>P#S#T%Z#T#o>P#o#p*g#p$g%Z$g;'S>P;'S;=`BZ<%lO>P",tokenizers:[lO,XO,2,3,4,5,6,7,8,9,10,11,12,13,ZO,new u("$S~RRtu[#O#Pg#S#T#|~_P#o#pb~gOq~~jVO#i!P#i#j!U#j#l!P#l#m!q#m;'S!P;'S;=`#v<%lO!P~!UO!O~~!XS!Q![!e!c!i!e#T#Z!e#o#p#Z~!hR!Q![!q!c!i!q#T#Z!q~!tR!Q![!}!c!i!}#T#Z!}~#QR!Q![!P!c!i!P#T#Z!P~#^R!Q![#g!c!i#g#T#Z#g~#jS!Q![#g!c!i#g#T#Z#g#q#r!P~#yP;=`<%l!P~$RO(S~~",141,325),new u("j~RQYZXz{^~^O'p~~aP!P!Qd~iO'q~~",25,307)],topRules:{Script:[0,5],SingleExpression:[1,266],SingleClassItem:[2,267]},dialects:{jsx:13213,ts:13215},dynamicPrecedences:{76:1,78:1,162:1,190:1},specialized:[{term:311,get:O=>sO[O]||-1},{term:327,get:O=>pO[O]||-1},{term:67,get:O=>gO[O]||-1}],tokenPrec:13238}),bO=[n("function ${name}(${params}) {\n ${}\n}",{label:"function",detail:"definition",type:"keyword"}),n("for (let ${index} = 0; ${index} < ${bound}; ${index}++) {\n ${}\n}",{label:"for",detail:"loop",type:"keyword"}),n("for (let ${name} of ${collection}) {\n ${}\n}",{label:"for",detail:"of loop",type:"keyword"}),n("do {\n ${}\n} while (${})",{label:"do",detail:"loop",type:"keyword"}),n("while (${}) {\n ${}\n}",{label:"while",detail:"loop",type:"keyword"}),n(`try { - \${} -} catch (\${error}) { - \${} -}`,{label:"try",detail:"/ catch block",type:"keyword"}),n("if (${}) {\n ${}\n}",{label:"if",detail:"block",type:"keyword"}),n(`if (\${}) { - \${} -} else { - \${} -}`,{label:"if",detail:"/ else block",type:"keyword"}),n(`class \${name} { - constructor(\${params}) { - \${} - } -}`,{label:"class",detail:"definition",type:"keyword"}),n('import {${names}} from "${module}"\n${}',{label:"import",detail:"named",type:"keyword"}),n('import ${name} from "${module}"\n${}',{label:"import",detail:"default",type:"keyword"})],v=new OO,G=new Set(["Script","Block","FunctionExpression","FunctionDeclaration","ArrowFunction","MethodDeclaration","ForStatement"]);function c(O){return(Q,i)=>{let a=Q.node.getChild("VariableDefinition");return a&&i(a,O),!0}}const hO=["FunctionDeclaration"],mO={FunctionDeclaration:c("function"),ClassDeclaration:c("class"),ClassExpression:()=>!0,EnumDeclaration:c("constant"),TypeAliasDeclaration:c("type"),NamespaceDeclaration:c("namespace"),VariableDefinition(O,Q){O.matchContext(hO)||Q(O,"variable")},TypeDefinition(O,Q){Q(O,"type")},__proto__:null};function q(O,Q){let i=v.get(Q);if(i)return i;let a=[],$=!0;function t(r,S){let o=O.sliceString(r.from,r.to);a.push({label:o,type:S})}return Q.cursor(M.IncludeAnonymous).iterate(r=>{if($)$=!1;else if(r.name){let S=mO[r.name];if(S&&S(r,t)||G.has(r.name))return!1}else if(r.to-r.from>8192){for(let S of q(O,r.node))a.push(S);return!1}}),v.set(Q,a),a}const 
g=/^[\w$\xa1-\uffff][\w$\d\xa1-\uffff]*$/,U=["TemplateString","String","RegExp","LineComment","BlockComment","VariableDefinition","TypeDefinition","Label","PropertyDefinition","PropertyName","PrivatePropertyDefinition","PrivatePropertyName"];function WO(O){let Q=W(O.state).resolveInner(O.pos,-1);if(U.indexOf(Q.name)>-1)return null;let i=Q.name=="VariableName"||Q.to-Q.from<20&&g.test(O.state.sliceDoc(Q.from,Q.to));if(!i&&!O.explicit)return null;let a=[];for(let $=Q;$;$=$.parent)G.has($.name)&&(a=a.concat(q(O.state.doc,$)));return{options:a,from:i?Q.from:O.pos,validFor:g}}function h(O,Q,i){var a;let $=[];for(;;){let t=Q.firstChild,r;if(t?.name=="VariableName")return $.push(O(t)),{path:$.reverse(),name:i};if(t?.name=="MemberExpression"&&((a=r=t.lastChild)===null||a===void 0?void 0:a.name)=="PropertyName")$.push(O(r)),Q=t;else return null}}function UO(O){let Q=a=>O.state.doc.sliceString(a.from,a.to),i=W(O.state).resolveInner(O.pos,-1);return i.name=="PropertyName"?h(Q,i.parent,Q(i)):U.indexOf(i.name)>-1?null:i.name=="VariableName"||i.to-i.from<20&&g.test(Q(i))?{path:[],name:Q(i)}:(i.name=="."||i.name=="?.")&&i.parent.name=="MemberExpression"?h(Q,i.parent,""):i.name=="MemberExpression"?h(Q,i,""):O.explicit?{path:[],name:""}:null}function fO(O,Q){let i=[],a=new Set;for(let $=0;;$++){for(let r of(Object.getOwnPropertyNames||Object.keys)(O)){if(a.has(r))continue;a.add(r);let S;try{S=O[r]}catch{continue}i.push({label:r,type:typeof S=="function"?/^[A-Z]/.test(r)?"class":Q?"function":"method":Q?"variable":"property",boost:-$})}let t=Object.getPrototypeOf(O);if(!t)return i;O=t}}function IO(O){let Q=new Map;return i=>{let a=UO(i);if(!a)return null;let $=O;for(let r of a.path)if($=$[r],!$)return null;let t=Q.get($);return t||Q.set($,t=fO($,!a.path.length)),{from:i.pos-a.name.length,options:t,validFor:g}}}const X=I.define({name:"javascript",parser:YO.configure({props:[E.add({IfStatement:Y({except:/^\s*({|else\b)/}),TryStatement:Y({except:/^\s*({|catch\b|finally\b)/}),LabeledStatement:A,SwitchBody:O=>{let Q=O.textAfter,i=/^\s*\}/.test(Q),a=/^\s*(case|default)\b/.test(Q);return O.baseIndent+(i?0:a?1:2)*O.unit},Block:J({closing:"}"}),ArrowFunction:O=>O.baseIndent+O.unit,"TemplateString BlockComment":()=>null,"Statement Property":Y({except:/^{/}),JSXElement(O){let Q=/^\s*<\//.test(O.textAfter);return O.lineIndent(O.node.from)+(Q?0:O.unit)},JSXEscape(O){let Q=/\s*\}/.test(O.textAfter);return O.lineIndent(O.node.from)+(Q?0:O.unit)},"JSXOpenTag JSXSelfClosingTag"(O){return O.column(O.node.from)+O.unit}}),L.add({"Block ClassBody SwitchBody EnumBody ObjectExpression ArrayExpression":N,BlockComment(O){return{from:O.from+2,to:O.to-2}}})]}),languageData:{closeBrackets:{brackets:["(","[","{","'",'"',"`"]},commentTokens:{line:"//",block:{open:"/*",close:"*/"}},indentOnInput:/^\s*(?:case |default:|\{|\}|<\/)$/,wordChars:"$"}}),T={test:O=>/^JSX/.test(O.name),facet:F({commentTokens:{block:{open:"{/*",close:"*/}"}}})},uO=X.configure({dialect:"ts"},"typescript"),yO=X.configure({dialect:"jsx",props:[k.add(O=>O.isTop?[T]:void 0)]}),jO=X.configure({dialect:"jsx ts",props:[k.add(O=>O.isTop?[T]:void 0)]},"typescript"),dO="break case const continue default delete export extends false finally in instanceof let new return static super switch this throw true typeof var yield".split(" ").map(O=>({label:O,type:"keyword"}));function EO(O={}){let Q=O.jsx?O.typescript?jO:yO:O.typescript?uO:X;return new D(Q,[X.data.of({autocomplete:B(U,H(bO.concat(dO)))}),X.data.of({autocomplete:WO}),O.jsx?wO:[]])}function 
xO(O){for(;;){if(O.name=="JSXOpenTag"||O.name=="JSXSelfClosingTag"||O.name=="JSXFragmentTag")return O;if(!O.parent)return null;O=O.parent}}function w(O,Q,i=O.length){for(let a=Q?.firstChild;a;a=a.nextSibling)if(a.name=="JSXIdentifier"||a.name=="JSXBuiltin"||a.name=="JSXNamespacedName"||a.name=="JSXMemberExpression")return O.sliceString(a.from,Math.min(a.to,i));return""}const vO=typeof navigator=="object"&&/Android\b/.test(navigator.userAgent),wO=K.inputHandler.of((O,Q,i,a)=>{if((vO?O.composing:O.compositionStarted)||O.state.readOnly||Q!=i||a!=">"&&a!="/"||!X.isActiveAt(O.state,Q,-1))return!1;let{state:$}=O,t=$.changeByRange(r=>{var S,o;let{head:P}=r,Z=W($).resolveInner(P,-1),s;if(Z.name=="JSXStartTag"&&(Z=Z.parent),a==">"&&Z.name=="JSXFragmentTag")return{range:b.cursor(P+1),changes:{from:P,insert:">"}};if(a=="/"&&Z.name=="JSXFragmentTag"){let l=Z.parent,p=l?.parent;if(l.from==P-1&&((S=p.lastChild)===null||S===void 0?void 0:S.name)!="JSXEndTag"&&(s=w($.doc,p?.firstChild,P))){let f=`/${s}>`;return{range:b.cursor(P+f.length),changes:{from:P,insert:f}}}}else if(a==">"){let l=xO(Z);if(l&&((o=l.lastChild)===null||o===void 0?void 0:o.name)!="JSXEndTag"&&$.sliceDoc(P,P+2)!="`}}}return{range:r}});return t.changes.empty?!1:(O.dispatch(t,{userEvent:"input.type",scrollIntoView:!0}),!0)});function AO(O,Q){return Q||(Q={parserOptions:{ecmaVersion:2019,sourceType:"module"},env:{browser:!0,node:!0,es6:!0,es2015:!0,es2017:!0,es2020:!0},rules:{}},O.getRules().forEach((i,a)=>{i.meta.docs.recommended&&(Q.rules[a]=2)})),i=>{let{state:a}=i,$=[];for(let{from:t,to:r}of X.findRegions(a)){let S=a.doc.lineAt(t),o={line:S.number-1,col:t-S.from,pos:t};for(let P of O.verify(a.sliceDoc(t,r),Q))$.push(VO(P,a.doc,o))}return $}}function V(O,Q,i,a){return i.line(O+a.line).from+Q+(O==1?a.col-1:-1)}function VO(O,Q,i){let a=V(O.line,O.column,Q,i),$={from:a,to:O.endLine!=null&&O.endColumn!=1?V(O.endLine,O.endColumn,Q,i):a,message:O.message,source:O.ruleId?"eslint:"+O.ruleId:"eslint",severity:O.severity==1?"warning":"error"};if(O.fix){let{range:t,text:r}=O.fix,S=t[0]+i.pos-a,o=t[1]+i.pos-a;$.actions=[{name:"fix",apply(P,Z){P.dispatch({changes:{from:Z+S,to:Z+o,insert:r},scrollIntoView:!0})}}]}return $}export{wO as autoCloseTags,UO as completionPath,AO as esLint,EO as javascript,X as javascriptLanguage,yO as jsxLanguage,WO as localCompletionSource,IO as scopeCompletionSource,bO as snippets,jO as tsxLanguage,uO as typescriptLanguage}; -//# sourceMappingURL=index-aee9714f.js.map diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_code_review.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/write_code_review.py deleted file mode 100644 index 7f6a7a38e6a1ed81614364e3deaac37b7dc1f1a9..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_code_review.py +++ /dev/null @@ -1,81 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:45 -@Author : alexanderwu -@File : write_code_review.py -""" - -from metagpt.actions.action import Action -from metagpt.logs import logger -from metagpt.schema import Message -from metagpt.utils.common import CodeParser -from tenacity import retry, stop_after_attempt, wait_fixed - -PROMPT_TEMPLATE = """ -NOTICE -Role: You are a professional software engineer, and your main task is to review the code. You need to ensure that the code conforms to the PEP8 standards, is elegantly designed and modularized, easy to read and maintain, and is written in Python 3.9 (or in another programming language). 
-ATTENTION: Use '##' to SPLIT SECTIONS, not '#'. Output format carefully referenced "Format example". - -## Code Review: Based on the following context and code, and following the check list, Provide key, clear, concise, and specific code modification suggestions, up to 5. -``` -1. Check 0: Is the code implemented as per the requirements? -2. Check 1: Are there any issues with the code logic? -3. Check 2: Does the existing code follow the "Data structures and interface definitions"? -4. Check 3: Is there a function in the code that is omitted or not fully implemented that needs to be implemented? -5. Check 4: Does the code have unnecessary or lack dependencies? -``` - -## Rewrite Code: {filename} Base on "Code Review" and the source code, rewrite code with triple quotes. Do your utmost to optimize THIS SINGLE FILE. ------ -# Context -{context} - -## Code: {filename} -``` -{code} -``` ------ - -## Format example ------ -{format_example} ------ - -""" - -FORMAT_EXAMPLE = """ - -## Code Review -1. The code ... -2. ... -3. ... -4. ... -5. ... - -## Rewrite Code: {filename} -```python -## {filename} -... -``` -""" - - -class WriteCodeReview(Action): - def __init__(self, name="WriteCodeReview", context: list[Message] = None, llm=None): - super().__init__(name, context, llm) - - @retry(stop=stop_after_attempt(2), wait=wait_fixed(1)) - async def write_code(self, prompt): - code_rsp = await self._aask(prompt) - code = CodeParser.parse_code(block="", text=code_rsp) - return code - - async def run(self, context, code, filename): - format_example = FORMAT_EXAMPLE.format(filename=filename) - prompt = PROMPT_TEMPLATE.format(context=context, code=code, filename=filename, format_example=format_example) - logger.info(f'Code review {filename}..') - code = await self.write_code(prompt) - # code_rsp = await self._aask_v1(prompt, "code_rsp", OUTPUT_MAPPING) - # self._save(context, filename, code) - return code diff --git a/spaces/denisp1/ChemistryMoleculeModeler/app.py b/spaces/denisp1/ChemistryMoleculeModeler/app.py deleted file mode 100644 index 4f6c65a77cc84eb64fc29a9392a15cf9e402ea20..0000000000000000000000000000000000000000 --- a/spaces/denisp1/ChemistryMoleculeModeler/app.py +++ /dev/null @@ -1,175 +0,0 @@ -import streamlit as st -import ipywidgets -import py3Dmol - - -from rdkit import Chem -from rdkit.Chem import Draw -from PIL import Image -from rdkit import Chem -from rdkit.Chem import AllChem -from ipywidgets import interact,fixed,IntSlider -import streamlit as st -import streamlit.components.v1 as components -import py3Dmol -from rdkit import Chem -from rdkit.Chem import Draw -from rdkit.Chem import AllChem - - -def smi2conf(smiles): - '''Convert SMILES to rdkit.Mol with 3D coordinates''' - mol = Chem.MolFromSmiles(smiles) - if mol is not None: - mol = Chem.AddHs(mol) - AllChem.EmbedMolecule(mol) - AllChem.MMFFOptimizeMolecule(mol, maxIters=200) - return mol - else: - return None - -def MolTo3DView(mol, size=(300, 300), style="stick", surface=False, opacity=0.5): - """Draw molecule in 3D - - Args: - ---- - mol: rdMol, molecule to show - size: tuple(int, int), canvas size - style: str, type of drawing molecule - style can be 'line', 'stick', 'sphere', 'carton' - surface, bool, display SAS - opacity, float, opacity of surface, range 0.0-1.0 - Return: - ---- - viewer: py3Dmol.view, a class for constructing embedded 3Dmol.js views in ipython notebooks. 
- """ - assert style in ('line', 'stick', 'sphere', 'carton') - mblock = Chem.MolToMolBlock(mol) - viewer = py3Dmol.view(width=size[0], height=size[1]) - viewer.addModel(mblock, 'mol') - viewer.setStyle({style:{}}) - if surface: - viewer.addSurface(py3Dmol.SAS, {'opacity': opacity}) - viewer.zoomTo() - return viewer - -def MakeMolecule(name, ingredients): - st.write(name, ": ", ingredients) - m = Chem.MolFromSmiles(ingredients) - im=Draw.MolToImage(m) - st.image(im) - -def conf_viewer(idx): - mol = confs[idx] - return MolTo3DView(mol).show() - -def style_selector(idx, s): - conf = confs[idx] - return MolTo3DView(conf, style=s).show() - -@interact -def smi2viewer(smi='CC=O'): - try: - conf = smi2conf(smi) - return MolTo3DView(conf).show() - except: - return None - -smi = 'COc3nc(OCc2ccc(C#N)c(c1ccc(C(=O)O)cc1)c2P(=O)(O)O)ccc3C[NH2+]CC(I)NC(=O)C(F)(Cl)Br' -conf = smi2conf(smi) -viewer = MolTo3DView(conf, size=(600, 300), style='sphere') -viewer.show() - -#compound_smiles = 'c1cc(C(=O)O)c(OC(=O)C)cc1' -#m = Chem.MolFromSmiles(compound_smiles) -#im=Draw.MolToImage(m) -#st.image(im) - -viewer = MolTo3DView(conf, size=(600, 300), style='sphere') -viewer.show() - -smis = [ 'COc3nc(OCc2ccc(C#N)c(c1ccc(C(=O)O)cc1)c2P(=O)(O)O)ccc3C[NH2+]CC(I)NC(=O)C(F)(Cl)Br', - 'CC(NCCNCC1=CC=C(OCC2=C(C)C(C3=CC=CC=C3)=CC=C2)N=C1OC)=O', - 'Cc1c(COc2cc(OCc3cccc(c3)C#N)c(CN3C[C@H](O)C[C@H]3C(O)=O)cc2Cl)cccc1-c1ccc2OCCOc2c1', - 'CCCCC(=O)NCCCCC(=O)NCCCCCC(=O)[O-]', - "CC(NCCNCC1=CC=C(OCC2=C(C)C(C3=CC=CC=C3)=CC=C2)N=C1OC)=O"] - -confs = [smi2conf(s) for s in smis] - - -st.title('⚛️🧬Chemical Graph 3D Molecule Modeler🧬⚛️') -def show(smi, style='stick'): - mol = Chem.MolFromSmiles(smi) - mol = Chem.AddHs(mol) - AllChem.EmbedMolecule(mol) - AllChem.MMFFOptimizeMolecule(mol, maxIters=200) - mblock = Chem.MolToMolBlock(mol) - - view = py3Dmol.view(width=400, height=400) - view.addModel(mblock, 'mol') - view.setStyle({style:{}}) - view.zoomTo() - view.show() - view.render() - t =view.js() - f = open('viz.html', 'w') - f.write(t.startjs) - f.write(t.endjs) - f.close() - -compound_smiles=st.text_input('SMILES please','CCCCC(=O)NCCCCC(=O)NCCCCCC(=O)[O-]') -m = Chem.MolFromSmiles(compound_smiles) - -#Draw.MolToFile(m,'mol.png') - -show(compound_smiles) -HtmlFile = open("viz.html", 'r', encoding='utf-8') -source_code = HtmlFile.read() -c1,c2=st.columns(2) -with c1: - st.write('⚛️🧬Chemical Graph 3D Molecule🧬⚛️:') -with c2: - components.html(source_code, height = 400,width=400) - -st.write('Info about SMILES: https://archive.epa.gov/med/med_archive_03/web/html/smiles.html') -st.write('Learn about it at Wikipedia: https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system') -st.write('Search for any compound on PubChem at National Library of Medicine: https://pubchem.ncbi.nlm.nih.gov/#query=vitamin%20e') - - -MakeMolecule("COVID-19 Antiviral Remdesivir GS5734", "CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@@H]1[C@H]([C@H]([C@](O1)(C#N)C2=CC=C3N2N=CN=C3N)O)O)OC4=CC=CC=C4") -MakeMolecule("Ritonavir", "CC(C)C1=NC(=CS1)CN(C)C(=O)N[C@@H](C(C)C)C(=O)N[C@@H](CC2=CC=CC=C2)C[C@@H]([C@H](CC3=CC=CC=C3)NC(=O)OCC4=CN=CS4)O") -MakeMolecule("Chloroquine", "CCN(CC)CCCC(C)NC1=C2C=CC(=CC2=NC=C1)Cl") -MakeMolecule("Fingolimod", "CCCCCCCCC1=CC=C(C=C1)CCC(CO)(CO)N") -MakeMolecule("N4-Hydroxycytidine", "C1=CN(C(=O)N=C1NO)[C@H]2[C@@H]([C@@H]([C@H](O2)CO)O)O") -MakeMolecule("Favipiravir", "C1=C(N=C(C(=O)N1)C(=O)N)F") - -MakeMolecule("DNA", "C1C(C(OC1N)COP(=O)(O)OC2CC(OC2COP(=O)(O)OC3CC(OC3CO)N)N)O") -MakeMolecule("Trecovirsen DNA", 
"CC1=CN(C(=O)NC1=O)C2CC(C(O2)COP(=S)(O)OC3CC(OC3COP(=S)(O)OC4CC(OC4COP(=S)(O)OC5CC(OC5COP(=S)(O)OC6CC(OC6COP(=S)(O)OC7CC(OC7COP(=S)(O)OC8CC(OC8COP(=S)(O)OC9CC(OC9COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1CO)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=NC2=C1N=C(NC2=O)N)N1C=CC(=NC1=O)N)N1C=NC2=C(N=CN=C21)N)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=NC2=C(N=CN=C21)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)O") - - - -MakeMolecule("Ibuprofen", "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O") -MakeMolecule("LSD", "CCN(CC)C(=O)[C@H]1CN([C@@H]2CC3=CNC4=CC=CC(=C34)C2=C1)C") - -MakeMolecule("Ethanol", "CCO") -MakeMolecule("Acetic acid", "CC(=O)O") -MakeMolecule("Cyclohexane", "C1CCCCC1") -MakeMolecule("Pyridine", "c1cnccc1") -MakeMolecule("Nicotine", "CN1CCC[C@H]1c2cccnc2") - - -MakeMolecule("Helium", "[3He]") -MakeMolecule("Hydrogen", "[H]") -MakeMolecule("Caffeine", "CN1C=NC2=C1C(=O)N(C(=O)N2C)C") -MakeMolecule("Sugar", "C([C@@H]1[C@H]([C@@H]([C@H]([C@H](O1)O[C@]2([C@H]([C@@H]([C@H](O2)CO)O)O)CO)O)O)O)O") -MakeMolecule("Dinitrogen", "N#N") -MakeMolecule("Methyl isocyanate (MIC)", "CN=C=O") -MakeMolecule("Copper(II) sulfate", "[Cu+2].[O-]S(=O)(=O)[O-]") -MakeMolecule("Flavopereirin (C17H15N2)", "CCc(c1)ccc2[n+]1ccc3c2[nH]c4c3cccc4 CCc1c[n+]2ccc3c4ccccc4[nH]c3c2cc1") -MakeMolecule("Glucose (β-D-glucopyranose) (C6H12O6)", "OC[C@@H](O1)[C@@H](O)[C@H](O)[C@@H](O)[C@H](O)1") -MakeMolecule("Thiamine (vitamin B1, C12H17N4OS+)", "OCCc1c(C)[n+](cs1)Cc2cnc(C)nc2N") -MakeMolecule("cephalostatin-1", "CC(C)(O1)C[C@@H](O)[C@@]1(O2)[C@@H](C)[C@@H]3CC=C4[C@]3(C2)C(=O)C[C@H]5[C@H]4CC[C@@H](C6)[C@]5(C)Cc(n7)c6nc(C[C@@]89(C))c7C[C@@H]8CC[C@@H]%10[C@@H]9C[C@@H](O)[C@@]%11(C)C%10=C[C@H](O%12)[C@]%11(O)[C@H](C)[C@]%12(O%13)[C@H](O)C[C@@]%13(C)CO") -MakeMolecule("Vitamin E", "CC(C)CCC[C@@H](C)CCC[C@@H](C)CCC [C@]1(C)CCc2c(C)c(O)c(C)c(C)c2O1") -MakeMolecule("Vitamin K2", "CC1=C(C(=O)C2=CC=CC=C2C1=O)CC=C(C)CCC=C(C)CCC=C(C)CCC=C(C)C") -MakeMolecule("Vitamin K1", "CC(C)CCCC(C)CCCC(C)CCCC(=CCC12C(=O)C3=CC=CC=C3C(=O)C1(O2)C)C") -MakeMolecule("Vitamin D3", "C[C@@H]([C@@H]1C2([C@H](/C(=C/C=C/3\C(=C)CCC(C3)O)/CCC2)CC1)C)CCCC(C)C.C[C@@H]([C@@H]1C2([C@H](/C(=C/C=C/3\C(=C)CCC(C3)O)/CCC2)CC1)C)CCCC(C)C") \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/FriendsSeason2COMPLETE720pBRripsujaidrpimprg !EXCLUSIVE!.md b/spaces/diacanFperku/AutoGPT/FriendsSeason2COMPLETE720pBRripsujaidrpimprg !EXCLUSIVE!.md deleted file mode 100644 index 4bd4efa6d76c2c42aea43ae7615b9403d2b5f33f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/FriendsSeason2COMPLETE720pBRripsujaidrpimprg !EXCLUSIVE!.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

      https://zzquangjao.net/friendsseason2complete720pbrripsujaidrpimprg/ The official app for fanfiction! In Friends season 2 begins three episodes after the season 4 finale, a story. FriendsSeason2COMPLETE720pBRripsujaidrpimprg Favori-O laetarii primesti s2 version.

      -

      FriendsSeason2COMPLETE720pBRripsujaidrpimprg


      Download Ziphttps://gohhs.com/2uFTML



      -

      https://patito.me/miami-study-guide-2016-download/ https://img7.360mb.com/data-c/1/880/137/0/4/0/1138067_lmhvxo.png BTW this is a travel plan I did using a GPS system for a motorcycle trip. FriendsSeason2COMPLETE720pBRripsujaidrpimprg The second season is pretty much a continuation of season 4, which is why theyre not on youtube theyre on .

      -

      https://littlesis.com/friendsseason2complete720pbrripsujaidrpimprg/ https://justonce.org/friendsseason2complete720pbrripsujaidrpimprg/ https://ifccrime.org/hulu-hack-barack-obama-patriot-act-4-11-16-2017/

      -

      https://sachillenger.com/friendsseason2complete720pbrripsujaidrpimprg/ https://www.antobrien.com/friends-season-2-complete-episode-8-review/ https://www.thscookies.com/friends-season-2-complete-part-1/ FriendsSeason2COMPLETE720pBRripsujaidrpimprg The second season of Friends is much like the fourth (though it picks up right after the season 4 finale). .

      -

      -

      https://youshouldknow.com/friendsseason2complete720pbrripsujaidrpimprg/ https://www.thscookies.com/friends-season-2-complete-part-1/ FriendsSeason2COMPLETE720pBRripsujaidrpimprg Friends season 2 complete episode 8 review. .

      -

      https://northridgefamc.com/friendsseason2complete720pbrripsujaidrpimprg/ https://www.mkami.com/how-to-fix-my-calciurina-software-win-7-crack-error.rar. https://avcplus.info/the-full-droid-golden-thread-final-cheat-hack-no-hack-season-6-remake/ FriendsSeason2COMPLETE720pBRripsujaidrpimprg. https://www.zdz.co.uk/download/fountain-deep-friendsseason2complete720pbrripsujaidrpimprg-hack-diy-money.html. FriendsSeason2COMPLETE720pBRripsujaidrpimprg.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md b/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md deleted file mode 100644 index e87cda613216499dd1b4c29cfcf52e213553c10b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Fsdreamteam Gsx Fsx 1.9.0.9 LINK Crack.md +++ /dev/null @@ -1,93 +0,0 @@ - -

      Fsdreamteam Gsx Fsx 1.9.0.9 Crack: How to Download and Install

      -

    Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that allows you to add realistic ground services to your flight simulator. GSX stands for Ground Services X, and it is a product of Fsdreamteam, a company that specializes in developing add-ons for flight simulators. GSX works with both FSX and P3D, and it simulates various operations on the ground, such as marshalling, catering, boarding, refueling, pushback, and more. GSX also features many native FSX animations and believable human characters.
    

      -

      If you are a fan of flight simulation, you might want to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack to enhance your experience and immersion. However, downloading and installing Fsdreamteam Gsx Fsx 1.9.0.9 crack is not as easy as it sounds. You need to find a reliable and safe source, follow the instructions carefully, and avoid any errors or issues that may occur. In this article, we will show you how to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack step by step.

      -

      Fsdreamteam Gsx Fsx 1.9.0.9 crack


      Downloadhttps://gohhs.com/2uFUC0



      -

      Where to Download Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

      There are many websites that offer Fsdreamteam Gsx Fsx 1.9.0.9 crack for download, but not all of them are trustworthy and secure. Some of them may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Therefore, you need to be careful and choose only reputable and verified websites that provide Fsdreamteam Gsx Fsx 1.9.0.9 crack for download.

      -

      Here are some of the websites that we recommend:

      -
        -
    • FS Nusantara: This website provides the Fsdreamteam Gsx Fsx 1.9.0.9 crack file for download.
    
      • -
      • YouTube: This website provides Fsdreamteam Gsx Fsx 1.9.0.9 crack for download in video format, with instructions and proof.
      • -
      • Woodys Wags Grooming /boarding: This website provides Fsdreamteam Gsx Fsx 1.9.0.9 crack for download in zip file format, with a link to a tutorial.
      • -
      -

      How to Download Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

      To download Fsdreamteam Gsx Fsx 1.9.0.9 crack from the websites mentioned above, you need to follow these steps:

      -
        -
      1. Visit the website of your choice and search for Fsdreamteam Gsx Fsx 1.9.0.9 crack.
      2. -
      3. Select the file that you want to download and click on the download link or button.
      4. -
      5. You may be redirected to another page or website that contains the download link or button.
      6. -
      7. You may need to complete a captcha or a verification process to prove that you are not a robot.
      8. -
      9. You may need to wait for a few seconds or minutes before the download starts.
      10. -
      11. Choose the location where you want to save the file on your device and click on save.
      12. -
      13. Wait for the download to finish and extract the file if it is in zip format.
      14. -
      -

      How to Install Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

    

      -

      To install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator, you need to follow these steps:

      -
        -
      1. Make sure that you have FSX or P3D installed on your device.
      2. -
      3. Run the Fsdreamteam Gsx Fsx 1.9.0.9 crack file that you have downloaded and extracted.
      4. -
      5. Follow the instructions on the screen and choose the destination folder where you want to install GSX.
      6. -
      7. Wait for the installation to finish and launch your flight simulator.
      8. -
      9. Enjoy using GSX with realistic ground services on your flights.
      10. -
      -

      What are the Features and Benefits of Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

    Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that provides many features and benefits, such as:
    

      -
        -
      • It works with every FSX and P3D airport, both default and third-party, even those not released yet.
      • -
      • It supports all default FSX and P3D airplanes and many popular third-party airplanes, such as PMDG, Aerosoft, Captain Sim, Quality Wings, and more.
      • -
    • It offers vehicles of many different types and sizes, depending on the airplane and airport in use.
    
      • -
      • It has many sound effects and supports 3D surround sound with OpenAL.
      • -
      • It has realistic human animations using FSX bones and skin meshes.
      • -
    • It has an easy-to-use user interface, fully integrated into FSX and P3D using standard ATC-like menus.
    
      • -
    • It allows easy user customization of vehicles, using the provided paint kit.
    
      • -
      • It has a live update feature that keeps GSX always updated automatically, with new supported airplanes and airports.
      • -
      • It has a direct airplane interface that allows interaction with complex third-party airplanes featuring custom door controls, ground equipment, and more.
      • -
    • It supports full airport customization, already enabled with all FSDT sceneries and some third-party sceneries, allowing better integration with any airport.
    
      • -
      -

    
      -

      What are the Requirements and Precautions for Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

      Before you download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator, you need to make sure that you meet the following requirements and precautions:

      -
        -
      • You need to have FSX or P3D installed on your device, with the latest updates and service packs.
      • -
      • You need to have enough disk space and memory to run GSX smoothly and without errors.
      • -
      • You need to have a good internet connection and a compatible device to download GSX from the source websites.
      • -
      • You need to have a backup of your original files and settings, in case something goes wrong or you want to uninstall GSX.
      • -
      • You need to be aware of the legal and ethical issues of downloading and using cracked software, and the possible consequences that may arise.
      • -
      • You need to be careful and cautious of the source websites that you choose to download GSX from, and scan your device for any viruses, malware, or pop-up ads that may harm your device or compromise your privacy.
      • -
      -

      What are the Reviews and Ratings of Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

    Fsdreamteam Gsx Fsx 1.9.0.9 crack has received many positive reviews and ratings from users and critics alike. Here are some of the reviews and ratings that we have found:
    

      -
        -
      • "GSX is a must-have for any flight simulator enthusiast. It adds so much realism and immersion to your flights, with realistic ground services and operations. It works with every airport and airplane, and it is easy to use and customize. I highly recommend it." - User review on YouTube
      • -
      • "Fsdreamteam Gsx Fsx 1.9.0.9 crack is a great software that enhances your flight simulation experience with ground services. It is compatible with FSX and P3D, and it supports many third-party airplanes and sceneries. It has many features and benefits, such as vehicles, sound effects, human animations, user interface, live update, direct airplane interface, and airport customization. It is easy to download and install, and it works flawlessly." - User review on Woodys Wags Grooming /boarding
      • -
      • "Fsdreamteam Gsx Fsx 1.9.0.9 crack is one of the best add-ons for flight simulators. It simulates various operations on the ground, such as marshalling, catering, boarding, refueling, pushback, and more. It has many vehicles in different types and sizes, depending on the airplane and airport in use. It has an amazing sound quality and realistic human characters. It has an intuitive user interface and a live update feature that keeps it updated automatically. It is a must-have for any flight simulator fan." - User review on FS Nusantara
      • -
      -

      What are the FAQs and Answers for Fsdreamteam Gsx Fsx 1.9.0.9 Crack?

      -

    Fsdreamteam Gsx Fsx 1.9.0.9 crack may raise some questions and doubts for current and potential users. Here are some of the most frequently asked questions and answers about Fsdreamteam Gsx Fsx 1.9.0.9 crack:
    

      -
        -
      • Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack legal and ethical?
    -A: Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that violates the copyright and license agreement of Fsdreamteam, the original developer of GSX. Therefore, it is illegal and unethical to download and use it, and doing so may result in legal consequences.
    
      • -
      • Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack safe and secure?
    -A: Fsdreamteam Gsx Fsx 1.9.0.9 crack is software that may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Therefore, it is neither safe nor secure to download and use, and it may cause technical difficulties or errors on your device.
    
      • -
      • Q: Is Fsdreamteam Gsx Fsx 1.9.0.9 crack compatible with my device and simulator?
    -A: Fsdreamteam Gsx Fsx 1.9.0.9 crack works with both FSX and P3D, and it supports all default and third-party airplanes and airports. However, it may not work properly, or at all, on some devices or simulators, depending on their specifications and settings.
    
      • -
      • Q: How can I uninstall Fsdreamteam Gsx Fsx 1.9.0.9 crack?
        -A: To uninstall Fsdreamteam Gsx Fsx 1.9.0.9 crack from your device and simulator, you need to follow these steps:
        -- Delete the GSX folder from your simulator's main folder.
        -- Delete the Addon Manager folder from your simulator's main folder.
        -- Delete the Couatl folder from your simulator's main folder.
        -- Delete the Couatl_Updater.exe file from your simulator's main folder.
        -- Delete the GSX entry from your simulator's scenery library.
        -- Restore your original files and settings from your backup.
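    For readers who prefer to script this cleanup, below is a minimal Python sketch of the same manual steps. The simulator path and folder names are assumptions taken from the list above rather than a verified layout; adjust them to your own installation and keep a backup before deleting anything.

    ```python
    import shutil
    from pathlib import Path

    # Assumption: default FSX install location -- change this to your own simulator's main folder.
    sim_root = Path(r"C:\Program Files (x86)\Microsoft Games\Microsoft Flight Simulator X")

    # Folders named in the manual steps above (GSX, Addon Manager, Couatl).
    for folder in ("GSX", "Addon Manager", "Couatl"):
        target = sim_root / folder
        if target.is_dir():
            shutil.rmtree(target)   # remove the folder and everything inside it

    # The updater executable named in the steps above.
    updater = sim_root / "Couatl_Updater.exe"
    if updater.is_file():
        updater.unlink()            # delete the single file

    print("Folders removed. Now delete the GSX entry from the scenery library inside the simulator, "
          "then restore your backed-up files and settings.")
    ```
    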
      • -
      -

      Conclusion

      -

      In this article, we have shown you how to download and install Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flight simulator. We have also provided you with some tips to download it safely and quickly. We have also discussed the features and benefits of Fsdreamteam Gsx Fsx 1.9.0.9 crack and how it can enhance your flight simulation experience and immersion. We have also addressed some of the challenges and alternatives of downloading and using Fsdreamteam Gsx Fsx 1.9.0.9 crack. We have also answered some of the frequently asked questions and doubts about Fsdreamteam Gsx Fsx 1.9.0.9 crack.

      -

      We hope that this article has been helpful for you and that you have enjoyed using Fsdreamteam Gsx Fsx 1.9.0.9 crack on your flights. However, we also advise you to be aware of the legal and ethical issues of downloading and using cracked software, and the possible consequences that may arise. We also recommend you to support the original developer of GSX, Fsdreamteam, by purchasing their product legally and ethically.

      -

      If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md b/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md deleted file mode 100644 index 1b908e000d1b39f51aee1634ed6101a04a76c2b9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Hiljadu-Cudesnih-Sunaca-Haled-Hosseinipdf.md +++ /dev/null @@ -1,38 +0,0 @@ -Hiljadu Cudesnih Sunaca Haled Hosseini.pdf - - - -Download File > [https://maudaracte.blogspot.com/?file=2tvJde](https://maudaracte.blogspot.com/?file=2tvJde) - - - - - - - - - -```markdown -Hiljadu Cudesnih Sunaca: A Review of Haled Hosseini's Novel -Hiljadu Cudesnih Sunaca (A Thousand Splendid Suns) is a novel by Afghan-American author Haled Hosseini, published in 2007. It tells the story of two women, Mariam and Laila, who suffer from the oppression and violence of the Taliban regime in Afghanistan. The novel explores themes such as love, friendship, family, courage, sacrifice, and resilience in the face of hardship. -In this article, we will review the novel and its main characters, plot, style, and message. We will also provide some information about the author and his other works. - -Main Characters -The novel has two main protagonists: Mariam and Laila. Mariam is a harami (illegitimate child) who lives with her bitter mother Nana in a hut outside Herat. She is rejected by her wealthy father Jalil and his family, and forced to marry Rasheed, a cruel and abusive shoemaker in Kabul. Laila is a beautiful and intelligent girl who grows up in a loving family in Kabul. She falls in love with Tariq, a boy from her neighborhood who loses his leg in a landmine explosion. When her parents are killed by a rocket attack, she is rescued by Rasheed and becomes his second wife. -Mariam and Laila initially resent each other, but they gradually develop a bond of friendship and sisterhood. They support each other through the horrors of war, domestic violence, poverty, and oppression. They also share a love for Aziza, Laila's daughter by Tariq, whom Rasheed rejects as his own. Together, they endure the brutality of the Taliban regime, which imposes harsh restrictions on women's rights and freedoms. They also face the threat of Rasheed's violence, which escalates as he becomes more frustrated and paranoid. -The novel also has several secondary characters who play important roles in the story. Some of them are: - -Tariq: Laila's childhood friend and lover, who loses his leg in a landmine explosion. He flees to Pakistan with his family after the Soviet invasion of Afghanistan. He later returns to Kabul to find Laila and rescue her from Rasheed. -Aziza: Laila's daughter by Tariq, whom she gives birth to in secret. She is a smart and brave girl who loves Mariam as her mother. She is sent to an orphanage by Rasheed when he can no longer afford to feed her. -Zalmai: Laila's son by Rasheed, whom she conceives after being raped by him. He is spoiled and favored by Rasheed, who sees him as his heir. He is loyal to his father and distrustful of Tariq. -Mullah Faizullah: Mariam's teacher and friend, who teaches her how to read and write. He is a kind and gentle man who encourages Mariam to pursue her dreams. He dies of old age before Mariam leaves Herat. -Nana: Mariam's mother, who was impregnated by Jalil when she was his housekeeper. She suffers from epilepsy and depression, and blames Mariam for her misfortune. She commits suicide after Mariam leaves her to visit Jalil. 
-Jalil: Mariam's father, who is a wealthy businessman with three wives and nine legitimate children. He visits Mariam once a week and tells her stories about Herat and the world. He abandons Mariam when she asks to live with him, and arranges her marriage to Rasheed. - - -Plot Summary -The novel spans over three decades of Afghan history, from the 1970s to the 2000s. It covers major events such as the Soviet invasion, the civil war, the rise of the Taliban, and the US intervention. -The novel begins with Mariam's childhood in Herat, where she lives with her mother Nana in a hut outside the city. She longs to visit her father Jalil and his family in their mansion in Herat. On her fifteenth birthday, she decides to go to Herat to see Jalil after he fails to show up for their weekly visit. She is shocked to discover that Jalil has lied to her about his dfd1c89656 - - - diff --git a/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md b/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md deleted file mode 100644 index 1619a16d0e8d75ff23073c64107bafb776b59f4b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/SMS Caster 37 Full !!BETTER!! With Keygen.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    The Government of India has launched the Smart Grid Mission (SGM) to achieve its vision of having a low-carbon, sustainable and secure electrical power system. The mission aims to achieve its goal in two phases: Phase 1 (‘grid readiness’) and Phase 2 (‘smart grid’). The mission envisages the development of a smart grid environment that can provide services to various stakeholders including consumers, service providers, and utilities. The mission proposes to achieve a smart grid environment by: 1) developing an electric power delivery grid (EPDG) that provides a secure and reliable power supply for a nation that meets the needs of an increasingly mobile, data-driven society, 2) enabling the integration of smart metering, advanced metering infrastructure (AMI) and cyber-physical systems (CPS) with the EPDG to provide timely, accurate, and reliable information, and 3) building an ecosystem of service providers that offer smart services to consumers and utilities.
    

      -

    Conclusion: With greater-than-ever pharmaceutical and technological developments, the question of whether patients will benefit from emerging drugs is a crucial one. We believe that this study highlights the importance of drug safety surveillance in modern drug development and the utility of VigiBase. For this reason, we have made the summary and methods available via the site so that they can be used more broadly. We look forward to further innovations and contributions from the VigiBase community to improve drug safety for patients.
    

      -

      SMS Caster 37 Full With Keygen


      Download Zip > https://gohhs.com/2uFT5u



      -

    We have created a quiet drop that can hold up to 330 standard size (6 x 9) books or 850 jeweled media cases. Our carts can be up to 100% recyclable (depending on materials) and are super portable. We offer a very competitive price for a non-marring, quality cart.
    

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/create_triples.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/create_triples.py deleted file mode 100644 index 6ed2686a87d5687eb715e392c6f9979ef67a4470..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/create_triples.py +++ /dev/null @@ -1,65 +0,0 @@ -import random -from colbert.infra.provenance import Provenance - -from utility.utils.save_metadata import save_metadata -from utility.supervision.triples import sample_for_query - -from colbert.utils.utils import print_message - -from colbert.data.ranking import Ranking -from colbert.data.examples import Examples - -MAX_NUM_TRIPLES = 40_000_000 - - -class Triples: - def __init__(self, ranking, seed=12345): - random.seed(seed) # TODO: Use internal RNG instead.. - self.seed = seed - - ranking = Ranking.cast(ranking) - self.ranking_provenance = ranking.provenance() - self.qid2rankings = ranking.todict() - - def create(self, positives, depth): - assert all(len(x) == 2 for x in positives) - assert all(maxBest <= maxDepth for maxBest, maxDepth in positives), positives - - self.positives = positives - self.depth = depth - - Triples = [] - NonEmptyQIDs = 0 - - for processing_idx, qid in enumerate(self.qid2rankings): - l = sample_for_query(qid, self.qid2rankings[qid], positives, depth, False, None) - NonEmptyQIDs += (len(l) > 0) - Triples.extend(l) - - if processing_idx % (10_000) == 0: - print_message(f"#> Done with {processing_idx+1} questions!\t\t " - f"{str(len(Triples) / 1000)}k triples for {NonEmptyQIDs} unqiue QIDs.") - - print_message(f"#> Sub-sample the triples (if > {MAX_NUM_TRIPLES})..") - print_message(f"#> len(Triples) = {len(Triples)}") - - if len(Triples) > MAX_NUM_TRIPLES: - Triples = random.sample(Triples, MAX_NUM_TRIPLES) - - ### Prepare the triples ### - print_message("#> Shuffling the triples...") - random.shuffle(Triples) - - self.Triples = Examples(data=Triples) - - return Triples - - def save(self, new_path): - provenance = Provenance() - provenance.source = 'Triples::create' - provenance.seed = self.seed - provenance.positives = self.positives - provenance.depth = self.depth - provenance.ranking = self.ranking_provenance - - Examples(data=self.Triples, provenance=provenance).save(new_path) diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese_bert.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - 
phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/losses.py b/spaces/digitalxingtong/Kino-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/README_zh.md b/spaces/digitalxingtong/Nanami-Bert-VITS2/README_zh.md deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/README_zh.md +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/train_ms.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
- - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - 
eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), 
spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/divyahansg/text-generation-webui-space/extensions/silero_tts/script.py b/spaces/divyahansg/text-generation-webui-space/extensions/silero_tts/script.py deleted file mode 100644 index f611dc27b7480cd357b77c0c407fcc2bd6df2679..0000000000000000000000000000000000000000 --- a/spaces/divyahansg/text-generation-webui-space/extensions/silero_tts/script.py +++ /dev/null @@ -1,169 +0,0 @@ -import time -from pathlib import Path - -import gradio as gr -import torch - -import modules.chat as chat -import modules.shared as shared - -torch._C._jit_set_profiling_mode(False) - -params = { - 'activate': True, - 'speaker': 'en_56', - 'language': 'en', - 'model_id': 'v3_en', - 'sample_rate': 48000, - 'device': 'cpu', - 'show_text': False, - 'autoplay': True, - 'voice_pitch': 'medium', - 'voice_speed': 'medium', -} - -current_params = params.copy() -voices_by_gender = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115'] -voice_pitches = ['x-low', 'low', 'medium', 'high', 'x-high'] -voice_speeds = ['x-slow', 'slow', 'medium', 'fast', 'x-fast'] - -# Used for making text xml compatible, needed for voice pitch and speed control -table = str.maketrans({ - "<": "<", - ">": ">", - "&": "&", - "'": "'", - '"': """, -}) - -def xmlesc(txt): - return txt.translate(table) - -def load_model(): - model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id']) - model.to(params['device']) - return model -model = load_model() - -def remove_surrounded_chars(string): - new_string = "" - in_star = False - for char in string: - if char == '*': - in_star = not in_star - elif not in_star: - new_string += char - return new_string - -def remove_tts_from_history(name1, name2): - for i, entry in enumerate(shared.history['internal']): - shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]] - return chat.generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def toggle_text_in_history(name1, name2): - for i, entry in enumerate(shared.history['visible']): - visible_reply = entry[1] - if visible_reply.startswith('')[0]}\n\n{reply}"] - else: - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('')[0]}"] - return chat.generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def input_modifier(string): - 
""" - This function is applied to your text inputs before - they are fed into the model. - """ - - # Remove autoplay from the last reply - if (shared.args.chat or shared.args.cai_chat) and len(shared.history['internal']) > 0: - shared.history['visible'][-1] = [shared.history['visible'][-1][0], shared.history['visible'][-1][1].replace('controls autoplay>','controls>')] - - shared.processing_message = "*Is recording a voice message...*" - return string - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - global model, current_params - - for i in params: - if params[i] != current_params[i]: - model = load_model() - current_params = params.copy() - break - - if params['activate'] == False: - return string - - original_string = string - string = remove_surrounded_chars(string) - string = string.replace('"', '') - string = string.replace('“', '') - string = string.replace('\n', ' ') - string = string.strip() - - if string == '': - string = '*Empty reply, try regenerating*' - else: - output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav') - prosody = ''.format(params['voice_speed'], params['voice_pitch']) - silero_input = f'{prosody}{xmlesc(string)}' - model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file)) - - autoplay = 'autoplay' if params['autoplay'] else '' - string = f'' - if params['show_text']: - string += f'\n\n{original_string}' - - shared.processing_message = "*Is typing...*" - return string - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - -def ui(): - # Gradio elements - with gr.Accordion("Silero TTS"): - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically') - show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player') - voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice') - with gr.Row(): - v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch') - v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed') - with gr.Row(): - convert = gr.Button('Permanently replace audios with the message texts') - convert_cancel = gr.Button('Cancel', visible=False) - convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False) - - # Convert history with confirmation - convert_arr = [convert_confirm, convert, convert_cancel] - convert.click(lambda :[gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr) - convert_confirm.click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - convert_confirm.click(remove_tts_from_history, [shared.gradio['name1'], shared.gradio['name2']], shared.gradio['display']) - convert_confirm.click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - convert_cancel.click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - - # Toggle message text in history - show_text.change(lambda x: params.update({"show_text": x}), show_text, None) - show_text.change(toggle_text_in_history, [shared.gradio['name1'], 
shared.gradio['name2']], shared.gradio['display']) - show_text.change(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({"activate": x}), activate, None) - autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None) - voice.change(lambda x: params.update({"speaker": x}), voice, None) - v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None) - v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None) diff --git a/spaces/docs-demos/t5-base/app.py b/spaces/docs-demos/t5-base/app.py deleted file mode 100644 index 4dd945ba4ff0c4c114f59997d0658fbd71eb5bc1..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/t5-base/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import gradio as gr - -title = "T5" - -description = "Gradio Demo for T5. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." - -article = "

      Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

      " - -examples = [ - ['My name is Sarah and I live in London',"t5-base"] -] - -io1 = gr.Interface.load("huggingface/t5-base") - -io2 = gr.Interface.load("huggingface/t5-small") - -io3 = gr.Interface.load("huggingface/t5-large") - -io4 = gr.Interface.load("huggingface/t5-3b") - - - -def inference(text, model): - if model == "t5-base": - outtext = io1(text) - elif model == "t5-small": - outtext = io2(text) - elif model == "t5-large": - outtext = io3(text) - else: - outtext = io4(text) - return outtext - - - -gr.Interface( - inference, - [gr.inputs.Textbox(label="Input"),gr.inputs.Dropdown(choices=["t5-base","t5-small","t5-large","t5-3b"], type="value", default="t5-base", label="model") -], - gr.outputs.Textbox(label="Output"), - examples=examples, - article=article, - title=title, - description=description).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/doevent/ArcaneGAN/app.py b/spaces/doevent/ArcaneGAN/app.py deleted file mode 100644 index a5703da8b4be370b76a62e44c37b7172e59a960d..0000000000000000000000000000000000000000 --- a/spaces/doevent/ArcaneGAN/app.py +++ /dev/null @@ -1,139 +0,0 @@ -import os -# os.system("pip freeze") -from huggingface_hub import hf_hub_download -os.system("pip -qq install facenet_pytorch") -from facenet_pytorch import MTCNN -from torchvision import transforms -import torch, PIL -from tqdm.notebook import tqdm -import gradio as gr -import torch - -modelarcanev4 = hf_hub_download(repo_id="akhaliq/ArcaneGANv0.4", filename="ArcaneGANv0.4.jit") - -mtcnn = MTCNN(image_size=256, margin=80) - -# simplest ye olde trustworthy MTCNN for face detection with landmarks -def detect(img): - - # Detect faces - batch_boxes, batch_probs, batch_points = mtcnn.detect(img, landmarks=True) - # Select faces - if not mtcnn.keep_all: - batch_boxes, batch_probs, batch_points = mtcnn.select_boxes( - batch_boxes, batch_probs, batch_points, img, method=mtcnn.selection_method - ) - - return batch_boxes, batch_points - -# my version of isOdd, should make a separate repo for it :D -def makeEven(_x): - return _x if (_x % 2 == 0) else _x+1 - -# the actual scaler function -def scale(boxes, _img, max_res=1_500_000, target_face=256, fixed_ratio=0, max_upscale=2, VERBOSE=False): - - x, y = _img.size - - ratio = 2 #initial ratio - - #scale to desired face size - if (boxes is not None): - if len(boxes)>0: - ratio = target_face/max(boxes[0][2:]-boxes[0][:2]); - ratio = min(ratio, max_upscale) - if VERBOSE: print('up by', ratio) - - if fixed_ratio>0: - if VERBOSE: print('fixed ratio') - ratio = fixed_ratio - - x*=ratio - y*=ratio - - #downscale to fit into max res - res = x*y - if res > max_res: - ratio = pow(res/max_res,1/2); - if VERBOSE: print(ratio) - x=int(x/ratio) - y=int(y/ratio) - - #make dimensions even, because usually NNs fail on uneven dimensions due skip connection size mismatch - x = makeEven(int(x)) - y = makeEven(int(y)) - - size = (x, y) - - return _img.resize(size) - -""" - A useful scaler algorithm, based on face detection. - Takes PIL.Image, returns a uniformly scaled PIL.Image - boxes: a list of detected bboxes - _img: PIL.Image - max_res: maximum pixel area to fit into. Use to stay below the VRAM limits of your GPU. - target_face: desired face size. Upscale or downscale the whole image to fit the detected face into that dimension. - fixed_ratio: fixed scale. Ignores the face size, but doesn't ignore the max_res limit. - max_upscale: maximum upscale ratio. Prevents from scaling images with tiny faces to a blurry mess. 
-""" - -def scale_by_face_size(_img, max_res=1_500_000, target_face=256, fix_ratio=0, max_upscale=2, VERBOSE=False): - boxes = None - boxes, _ = detect(_img) - if VERBOSE: print('boxes',boxes) - img_resized = scale(boxes, _img, max_res, target_face, fix_ratio, max_upscale, VERBOSE) - return img_resized - - -size = 256 - -means = [0.485, 0.456, 0.406] -stds = [0.229, 0.224, 0.225] - -t_stds = torch.tensor(stds).cpu().half().float()[:,None,None] -t_means = torch.tensor(means).cpu().half().float()[:,None,None] - -def makeEven(_x): - return int(_x) if (_x % 2 == 0) else int(_x+1) - -img_transforms = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(means,stds)]) - -def tensor2im(var): - return var.mul(t_stds).add(t_means).mul(255.).clamp(0,255).permute(1,2,0) - -def proc_pil_img(input_image, model): - transformed_image = img_transforms(input_image)[None,...].cpu().half().float() - - with torch.no_grad(): - result_image = model(transformed_image)[0] - output_image = tensor2im(result_image) - output_image = output_image.detach().cpu().numpy().astype('uint8') - output_image = PIL.Image.fromarray(output_image) - return output_image - - - -modelv4 = torch.jit.load(modelarcanev4,map_location='cpu').eval().cpu().half().float() - -def process(im): - im = scale_by_face_size(im, target_face=256, max_res=1_500_000, max_upscale=1) - res = proc_pil_img(im, modelv4) - return res - -title = "ArcaneGAN" -description = "" -article = "" - -gr.Interface(process, - gr.inputs.Image(type="pil", label="Input"), - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[['groot.jpeg']], - allow_flagging='never', - theme="default", - ).launch(enable_queue=True) diff --git a/spaces/dorkai/text-generation-webui-main/docs/Chat-mode.md b/spaces/dorkai/text-generation-webui-main/docs/Chat-mode.md deleted file mode 100644 index 08dd290dadbd8a590ace65d557b8916a2707fc26..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/docs/Chat-mode.md +++ /dev/null @@ -1,45 +0,0 @@ -## Chat characters - -Custom chat mode characters are defined by `.yaml` files inside the `characters` folder. An example is included: [Example.yaml](https://github.com/oobabooga/text-generation-webui/blob/main/characters/Example.yaml) - -The following fields may be defined: - -| Field | Description | -|-------|-------------| -| `name` or `bot` | The character's name. | -| `your_name` or `user` (optional) | Your name. This overwrites what you had previously written in the `Your name` field in the interface. | -| `context` | A string that appears at the top of the prompt. It usually contains a description of the character's personality. | -| `greeting` (optional) | The character's opening message when a new conversation is started. | -| `example_dialogue` (optional) | A few example messages to guide the model. | -| `turn_template` (optional) | Used to define where the spaces and new line characters should be in Instruct mode. See the characters in `characters/instruction-following` for examples. | - -#### Special tokens - -* `{{char}}` or ``: are replaced with the character's name -* `{{user}}` or ``: are replaced with your name - -These replacements happen when the character is loaded, and they apply to the `context`, `greeting`, and `example_dialogue` fields. - -#### How do I add a profile picture for my character? - -Put an image with the same name as your character's yaml file into the `characters` folder. 
For example, if your bot is `Character.yaml`, add `Character.jpg` or `Character.png` to the folder. - -#### Is the chat history truncated in the prompt? - -Once your prompt reaches the 2048 token limit, old messages will be removed one at a time. The context string will always stay at the top of the prompt and will never get truncated. - -#### Pygmalion format characters - -These are also supported out of the box. Simply put the JSON file in the `characters` folder, or upload it directly from the web UI by clicking on the "Upload character" tab at the bottom. - -## Chat styles - -Custom chat styles can be defined in the `text-generation-webui/css` folder. Simply create a new file with name starting in `chat_style-` and ending in `.css` and it will automatically appear in the "Chat style" dropdown menu in the interface. Examples: - -``` -chat_style-cai-chat.css -chat_style-TheEncrypted777.css -chat_style-wpp.css -``` - -You should use the same class names as in `chat_style-cai-chat.css` in your custom style. \ No newline at end of file diff --git a/spaces/ds520/bingo/src/pages/api/sydney.ts b/spaces/ds520/bingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/README.md b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/README.md deleted file mode 100644 index 1b24e6efdb04cb1460e4fe3257d2303677c5a0e1..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multilingual Anime TTS -emoji: 🎙🐴 -colorFrom: green -colorTo: gray -sdk: 
gradio -sdk_version: 3.7 -app_file: app.py -pinned: false -duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/edemgold/QA-App/app.py b/spaces/edemgold/QA-App/app.py deleted file mode 100644 index 5afba4305e592e91fc652b12422acf717ec3a9ad..0000000000000000000000000000000000000000 --- a/spaces/edemgold/QA-App/app.py +++ /dev/null @@ -1,30 +0,0 @@ -# -*- coding: utf-8 -*- - -# Importing Dependancies - -import gradio as gr -from transformers import pipeline - -"""# Loading Model Name""" - -model_name = "deepset/roberta-base-squad2" - -"""# Get Predictions - -""" - -nlu = pipeline('question-answering', model=model_name, tokenizer=model_name) - -def func(context, question): - input = { - 'question':question, - 'context':context - } - res = nlu(input) - return res["answer"] - -descr = "This is a question and Answer Web app, you give it a context and ask it questions based on the context provided" - -app = gr.Interface(fn=func, inputs=[gr.inputs.Textbox(lines=3, placeholder="put in your context here..."),"text"], outputs="text", title="Question Answer App", description=descr) - -app.launch() \ No newline at end of file diff --git a/spaces/edugp/perplexity-lenses/cli.py b/spaces/edugp/perplexity-lenses/cli.py deleted file mode 100644 index a889d0e03cffacf85f8a401cd4c56d966fa018bb..0000000000000000000000000000000000000000 --- a/spaces/edugp/perplexity-lenses/cli.py +++ /dev/null @@ -1,139 +0,0 @@ -import logging -from functools import partial -from typing import Optional - -import pandas as pd -import typer -from bokeh.plotting import output_file as bokeh_output_file -from bokeh.plotting import save -from embedding_lenses.dimensionality_reduction import ( - get_tsne_embeddings, - get_umap_embeddings, -) -from embedding_lenses.embedding import load_model - -from perplexity_lenses import REGISTRY_DATASET -from perplexity_lenses.data import ( - documents_df_to_sentences_df, - hub_dataset_to_dataframe, -) -from perplexity_lenses.engine import ( - DIMENSIONALITY_REDUCTION_ALGORITHMS, - DOCUMENT_TYPES, - EMBEDDING_MODELS, - LANGUAGES, - PERPLEXITY_MODELS, - SEED, - generate_plot, -) -from perplexity_lenses.perplexity import KenlmModel -from perplexity_lenses.visualization import draw_histogram - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -app = typer.Typer() - - -@app.command() -def main( - dataset: str = typer.Option( - "mc4", help="The name of the hub dataset or local csv/tsv file." - ), - dataset_config: Optional[str] = typer.Option( - "es", - help="The configuration of the hub dataset, if any. Does not apply to local csv/tsv files.", - ), - dataset_split: Optional[str] = typer.Option( - "train", help="The dataset split. Does not apply to local csv/tsv files." - ), - text_column: str = typer.Option("text", help="The text field name."), - language: str = typer.Option( - "es", help=f"The language of the text. Options: {LANGUAGES}" - ), - doc_type: str = typer.Option( - "sentence", - help=f"Whether to embed at the sentence or document level. Options: {DOCUMENT_TYPES}.", - ), - sample: int = typer.Option(1000, help="Maximum number of examples to use."), - perplexity_model: str = typer.Option( - "wikipedia", - help=f"Dataset on which the perplexity model was trained on. 
Options: {PERPLEXITY_MODELS}", - ), - dimensionality_reduction: str = typer.Option( - DIMENSIONALITY_REDUCTION_ALGORITHMS[0], - help=f"Whether to use UMAP or t-SNE for dimensionality reduction. Options: {DIMENSIONALITY_REDUCTION_ALGORITHMS}.", - ), - model_name: str = typer.Option( - EMBEDDING_MODELS[0], - help=f"The sentence embedding model to use. Options: {EMBEDDING_MODELS}", - ), - output_file: str = typer.Option( - "perplexity", help="The name of the output visualization files." - ), -): - """ - Perplexity Lenses: Visualize text embeddings in 2D using colors to represent perplexity values. - """ - logger.info("Loading embedding model...") - model = load_model(model_name) - dimensionality_reduction_function = ( - partial(get_umap_embeddings, random_state=SEED) - if dimensionality_reduction.lower() == "umap" - else partial(get_tsne_embeddings, random_state=SEED) - ) - logger.info("Loading KenLM model...") - kenlm_model = KenlmModel.from_pretrained( - perplexity_model.lower(), - language, - lower_case=True, - remove_accents=True, - normalize_numbers=True, - punctuation=1, - ) - logger.info("Loading dataset...") - if dataset.endswith(".csv") or dataset.endswith(".tsv"): - df = pd.read_csv(dataset, sep="\t" if dataset.endswith(".tsv") else ",") - if doc_type.lower() == "sentence": - df = documents_df_to_sentences_df(df, text_column, sample, seed=SEED) - df["perplexity"] = df[text_column].map(kenlm_model.get_perplexity) - else: - df = hub_dataset_to_dataframe( - dataset, - dataset_config, - dataset_split, - sample, - text_column, - kenlm_model, - seed=SEED, - doc_type=doc_type, - ) - # Round perplexity - df["perplexity"] = df["perplexity"].round().astype(int) - logger.info( - f"Perplexity range: {df['perplexity'].min()} - {df['perplexity'].max()}" - ) - plot, plot_registry = generate_plot( - df, - text_column, - "perplexity", - None, - dimensionality_reduction_function, - model, - seed=SEED, - hub_dataset=dataset, - ) - logger.info("Saving plots") - bokeh_output_file(f"{output_file}.html") - save(plot) - if dataset == REGISTRY_DATASET: - bokeh_output_file(f"{output_file}_registry.html") - save(plot_registry) - fig = draw_histogram(df["perplexity"].values) - fig.savefig(f"{output_file}_histogram.png") - logger.info("Done") - - -if __name__ == "__main__": - app() diff --git a/spaces/emre/emre-llama-2-13b-mini/app.py b/spaces/emre/emre-llama-2-13b-mini/app.py deleted file mode 100644 index 7bc7fe1227074151aaddbca5a775f3dd03cda3ff..0000000000000000000000000000000000000000 --- a/spaces/emre/emre-llama-2-13b-mini/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/emre/llama-2-13b-mini").launch() \ No newline at end of file diff --git a/spaces/enzostvs/hub-api-playground/app/[type]/[index]/page.tsx b/spaces/enzostvs/hub-api-playground/app/[type]/[index]/page.tsx deleted file mode 100644 index 7b44f7b74964518915d3c13fb7c8deac7925c982..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/app/[type]/[index]/page.tsx +++ /dev/null @@ -1,16 +0,0 @@ -import { EditorMain } from "@/components/editor/main"; -import { API_COLLECTIONS } from "@/utils/datas/api_collections"; - -export default async function RouteAPI({ - params: { index, type }, -}: { - params: { index: string; type: string }; -}) { - const endpoint = API_COLLECTIONS.find((col) => col.key === type)?.endpoints[ - parseInt(index) - ]; - - return ( - <>{endpoint ? :
      Not found
      } - ); -} diff --git a/spaces/epexVfeibi/Imagedeblurr/Adjustment Program Reset Impressora Epson TX130TX133TX135 Luzes Piscandorar.md b/spaces/epexVfeibi/Imagedeblurr/Adjustment Program Reset Impressora Epson TX130TX133TX135 Luzes Piscandorar.md deleted file mode 100644 index 90453a3cbccd00b26256c45f78cb4e0c28125248..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Adjustment Program Reset Impressora Epson TX130TX133TX135 Luzes Piscandorar.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

The Epson L1800 A3 photo ink tank printer is one of Epson's latest innovative products and an evolution of its earlier inkjet printers. It is evidently equipped with refined hardware and Epson's full software suite to serve the needs of business owners and consumers alike. It is also an all-in-one machine with photo printing, scanning, copying, and faxing functionality, and it offers a wide array of storage options, which makes it well suited to professionals who use it for data storage.

      -

      Adjustment Program Reset Impressora Epson TX130TX133TX135 Luzes Piscandorar


      Download File » https://jinyurl.com/2uEnyl



      -

The inkjet L1800 A3 photo ink tank unit is designed to work with the Epson L1800 A3 photo ink tank printer, and the two devices can connect to each other via USB. The connection is made possible through a USB ID extracted from the printer; the extracted data is then transmitted to the L1800 automatically, which makes the connection very convenient. You can also remove the ink cartridge from the printer manually, although the cartridge is attached through a special interface.

      -

The Epson L1800 A3 photo ink tank printer has a good memory capacity, so it can store valuable data that can be accessed whenever needed, which is convenient for business owners and consumers alike. However, the ink cartridge is quite small, so you can only use half of its capacity at a time.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/app.py b/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/app.py deleted file mode 100644 index f44aacf7cfe91d7fa401ad8180f7fd1930219d66..0000000000000000000000000000000000000000 --- a/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import gradio -import cv2 -import numpy as np - -def get_dominant_colors(img): - - reshaped = img.reshape((-1, 3)) - # convert to np.float32 - reshaped = np.float32(reshaped) - # define criteria, number of clusters(K) and apply kmeans() - criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0) - K = 3 - _, _, center = cv2.kmeans(reshaped, K, None, criteria, 10, - cv2.KMEANS_PP_CENTERS) - # Now convert back into uint8 - dominants = np.uint8(center) - - return dominants - - -def bgr_to_hex(dominants): - - color_1, color_2, color_3 = dominants[0], dominants[1], dominants[2] - c1 = "#{:02x}{:02x}{:02x}".format(color_1[0], color_1[1], color_1[2]) - c2 = "#{:02x}{:02x}{:02x}".format(color_2[0], color_2[1], color_2[2]) - c3 = "#{:02x}{:02x}{:02x}".format(color_3[0], color_3[1], color_3[2]) - hex = f"{c1}\n{c2}\n{c3}\n" - colors = [color_1, color_2, color_3] - - return hex, colors - - -def paint_colors(dominants, area=50): - - color_1, color_2, color_3 = dominants[0], dominants[1], dominants[2] - img_w = 3 * area - img_h = 1 * area - - # Create image - out_img = np.zeros((img_h, img_w, 3), np.uint8) - out_img[:, 0:area] = color_1 #(y,x) - out_img[:, area:area*2] = color_2 - out_img[:, area*2:area*3] = color_3 - - return out_img - - -def process_img(img): - - dominants = get_dominant_colors(img) - hex, colors = bgr_to_hex(dominants) - hex_rgb = f"RGB 1: {colors[0]}\nRGB 2: {colors[1]}\nRGB 3: {colors[2]}\n{hex}" - colors_img = paint_colors(dominants, area=50) - - return colors_img, hex_rgb - - -desc = "The performance of social media ads is not only about targeting but also about ad design. The design of the ad is a main performance factor color, but the colors also have an effect. Some types of audiences can be attracted to certain colors, and this is what is covered. This solution extracts the top 3 dominant colors per image and visualizes them. The number 3 was chosen to cover the main 2 design colors and the text color when possible. This is applied to one image in this space, but the solution is meant to create a full palette for designers so they can make the right decision while picking future design colors. © The example ads were designed by Esraa Abdelmaksoud for simplesite.com. Select an image to detect the dominant colors." 
- -iface = gradio.Interface( - fn=process_img, - inputs='image', - outputs=["image", "text"], - title='Dominant Ad Colors Detection', - description=desc, - examples=["ad_sample.jpg", "ad_sample_2.jpg","ad_sample_3.jpg", - "ad_sample_4.jpg","ad_sample_5.jpg","ad_sample_6.jpg", - "ad_sample_7.jpg","ad_sample_8.jpg"]) - -iface.launch() \ No newline at end of file diff --git a/spaces/eugenkalosha/Semmap/README.md b/spaces/eugenkalosha/Semmap/README.md deleted file mode 100644 index 5a0e3d6d5690cb2a9161f1c774a1c90ce4e1b4bc..0000000000000000000000000000000000000000 --- a/spaces/eugenkalosha/Semmap/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Semantic Map -emoji: 📈 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false -duplicated_from: Panel-Org/panel-template -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/evaluate-metric/sari/app.py b/spaces/evaluate-metric/sari/app.py deleted file mode 100644 index 0666a386de190bab2906c58fef504a081cf52727..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/sari/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("sari") -launch_gradio_widget(module) diff --git a/spaces/falterWliame/Face_Mask_Detection/City Car Driving 1.2.2 Serial Ke.md b/spaces/falterWliame/Face_Mask_Detection/City Car Driving 1.2.2 Serial Ke.md deleted file mode 100644 index 070181ec1ba79713bd8ef9247a9c3c0e397c9697..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/City Car Driving 1.2.2 Serial Ke.md +++ /dev/null @@ -1,103 +0,0 @@ -
      -

      City Car Driving 1.2.2 Serial Key: A Complete Guide

      - -

      If you are looking for a realistic driving simulator game, you might want to try City Car Driving 1.2.2. This game allows you to practice your driving skills in various traffic conditions, weather, and road situations. You can choose from different cars, modes, and scenarios to test your abilities and learn from your mistakes.

      -

City Car Driving 1.2.2 Serial Key


Download File »»» https://urlca.com/2uDcSA



      - -

      However, to enjoy the full features of City Car Driving 1.2.2, you need a valid serial key to activate the game. A serial key is a unique code that verifies your purchase and unlocks the game for you. Without a serial key, you can only play the demo version of the game, which has limited options and functions.

      - -

      How to Get City Car Driving 1.2.2 Serial Key

      - -

      There are two ways to get a serial key for City Car Driving 1.2.2: buying it from the official website or downloading it from a reliable source.

      - -

      Buying City Car Driving 1.2.2 Serial Key

      - -

      The easiest and safest way to get a serial key for City Car Driving 1.2.2 is to buy it from the official website of the game: https://citycardriving.com/buy/citycardriving. Here, you can choose from different payment methods and currencies to complete your purchase. You will receive an email with your serial key and instructions on how to activate the game.

      -

      - -

      The advantages of buying a serial key from the official website are:

      -
        -
      • You will get a genuine and legal serial key that works for your game.
      • -
      • You will get access to all the updates and patches of the game.
      • -
      • You will get technical support and customer service from the developers.
      • -
      • You will support the creators of the game and help them improve their products.
      • -
      - -

      Downloading City Car Driving 1.2.2 Serial Key

      - -

      Another way to get a serial key for City Car Driving 1.2.2 is to download it from a third-party source, such as a website or a torrent. This method is not recommended, as it may expose you to various risks and problems.

      - -

      The disadvantages of downloading a serial key from an unofficial source are:

      -
        -
      • You may get a fake or invalid serial key that does not work for your game.
      • -
      • You may get a virus or malware that infects your computer or steals your personal information.
      • -
      • You may get into legal trouble for violating the copyright laws and terms of service of the game.
      • -
      • You may miss out on the updates and patches of the game.
      • -
      • You may not get any technical support or customer service from the developers.
      • -
      • You may harm the creators of the game and discourage them from making more games.
      • -
      - -

      How to Activate City Car Driving 1.2.2 with Serial Key

      - -

      Once you have obtained a valid serial key for City Car Driving 1.2.2, you need to activate the game with it. To do this, follow these steps:

      -
        -
      1. Download and install City Car Driving 1.2.2 on your computer.
      2. -
      3. Launch the game and copy the code from the startup window.
      4. -
      5. Open the website https://activate.citycardriving.com/ on your browser.
      6. -
      7. Enter your serial number, the program code you have copied, and your email address.
      8. -
      9. The activation key will be sent to your email address.
      10. -
      11. Enter your activation key into the box in the program window and click “Registration” button.
      12. -
      13. Enjoy playing City Car Driving 1.2.2 with full features!
      14. -
      - -

      Conclusion

      - -

      City Car Driving 1.2.2 is a great driving simulator game that can help you improve your driving skills and have fun at the same time. To play this game with full features, you need a serial key to activate it. You can either buy a serial key from the official website or download it from a reliable source, but be careful of the risks and disadvantages of the latter option. Once you have a serial key, you can easily activate the game and start driving!

      -

      City Car Driving 1.2.2 Mods and Custom Cars

      - -

      One of the most exciting features of City Car Driving 1.2.2 is the ability to add mods and custom cars to the game. Mods are modifications that enhance or change the game in various ways, such as adding new cars, maps, traffic, sounds, etc. Custom cars are user-created cars that you can download and drive in the game.

      - -

      To add mods and custom cars to City Car Driving 1.2.2, you need to use the Steam Workshop. The Steam Workshop is a platform that allows you to easily discover, download, and install fan-created content for your game or software. You can browse through thousands of mods and custom cars created by other users and subscribe to the ones you like. The subscribed content will be automatically available when you start the game.

      - -

      Some of the benefits of using mods and custom cars in City Car Driving 1.2.2 are:

      -
        -
      • You can expand your car collection with different models, brands, styles, and performance.
      • -
      • You can drive in new maps and environments that offer different challenges and scenery.
      • -
      • You can experience new traffic situations and scenarios that test your driving skills and reactions.
      • -
      • You can customize your game with different sounds, graphics, effects, etc.
      • -
      • You can support the creative community and share your own mods and custom cars with others.
      • -
      - -

      City Car Driving 1.2.2 System Requirements and Download

      - -

      Before you can play City Car Driving 1.2.2 with serial key, you need to make sure that your computer meets the minimum system requirements for the game. The system requirements are:

      - - - - - - - - - -
      OSWindows 7 SP1 / 8 / 8.1 / 10 (64 Bit)
      ProcessorIntel Pentium Dual Core 3.2 GHz / AMD Athlon II X4 3.1 GHz
      Memory4 GB RAM
      GraphicsAMD Radeon R7 240 / nVidia GeForce GT 740
      DirectXVersion 11
      Storage10 GB available space
      Sound CardAny sound card compatible with DirectX 9.0
      Additional NotesThe application's stability is not guaranteed on Intel HD Graphics and AMD HD Radeon on-board graphics cards.
      - -

      If your computer meets or exceeds these requirements, you can download City Car Driving 1.2.2 from the official website of the game: https://citycardriving.com/download/citycardriving. Here, you can choose from different download options and payment methods to get your copy of the game.

      - -

      Alternatively, you can buy City Car Driving 1.2.2 from Steam: https://store.steampowered.com/app/493490/City_Car_Driving/. Steam is a digital distribution platform that allows you to buy, download, and play games online. By buying City Car Driving 1.2.2 from Steam, you can also access the Steam Workshop and other Steam features.

      - -

      After you have downloaded City Car Driving 1.2.2, you need to activate it with your serial key as explained in the previous section.


      City Car Driving 1.2.2 Serial Key: The Final Word

      - -

      City Car Driving 1.2.2 is a game that can offer you a lot of fun and learning. It is a realistic driving simulator that can help you master the basic skills of car driving in different road conditions and situations. It has many features and benefits that make it stand out from other driving games, such as smart traffic, realistic physics, various cars, modes, scenarios, and maps. It also allows you to add mods and custom cars to the game through the Steam Workshop, which can enhance your gaming experience and creativity.

      - -

      To play City Car Driving 1.2.2 with full features, you need a serial key to activate the game. You can get a serial key by buying it from the official website or downloading it from a reliable source. However, you should be careful of the risks and disadvantages of the latter option, such as fake or invalid serial keys, viruses or malware, legal trouble, missing updates and patches, etc. Once you have a serial key, you can easily activate the game and start driving.

      - -

      If you are looking for a game that can challenge your driving skills and entertain you at the same time, City Car Driving 1.2.2 is a great choice for you. It is a game that can teach you how to drive safely and confidently in real life. It is also a game that can let you explore different cars, environments, and situations in a virtual world. It is a game that can give you hours of fun and satisfaction.

      - -

      So what are you waiting for? Get your City Car Driving 1.2.2 serial key today and enjoy the ride!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md b/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md deleted file mode 100644 index e9ea5fa8561470292b4152fa773f3807e076a9f0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Cnc Software Mastercam X5 Crack Rarl.md +++ /dev/null @@ -1,8 +0,0 @@ - -

The Software Compliance Group is a division of the Business Software Alliance (BSA), a not-for-profit that works with governments and businesses to protect the integrity of the software market. It works with vendors, distributors, retailers, and government agencies to keep software users safe and to prevent software piracy.

      -

One of the many features in the new version is the ability to import stereolithography files, though in this release it only works with Mastercam 2219 on the Windows operating system. Mastercam has also added the ability to create and edit grid entities, and the new version lets you merge those grid entities back into the part file. The software also lets you resize the document and supports displaying and editing 2D and 3D graphics as well as surface models.

      -

      Cnc Software Mastercam X5 Crack Rarl


      DOWNLOAD ::: https://urlca.com/2uDdrU



      -

The Mastercam software has been updated, and a completely new version is now available for download. Its main goal is to make life easier for the user and to make the whole experience more user-friendly. This version has been upgraded with many new features, a new intuitive user interface, and much more, and the user guide contains a great deal of useful information that will make your work easier.

      -

Not having to open a document to create a new part is very handy for a freelancer who has many different projects going on and has to switch between them frequently. A new, more intuitive user interface has been introduced, making it easier to view and edit your parts and models, and the interface is now easier to navigate. This version of Mastercam also adds many new features: it supports creating new solid objects, it can export your model in the OBJ format commonly used in 3D graphics software such as 3D Studio Max, you can create new project files with different settings (such as the shape of the table or the viewport), and it can create 2D drawings directly from project files.

899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md b/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md deleted file mode 100644 index b7b92b2dc5a5d8016cfdb7d858376f926278a4fc..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Onyx Production House X10.2 13.md +++ /dev/null @@ -1,17 +0,0 @@ -

Onyx Production House X10.2 13


Download File: https://urlca.com/2uDcb3



- -Learn how to add a new media profile to the Onyx X10: ... Media Profile for HP Designjet L25500 Printer ... HP Designjet L25500 - description, specifications, tests, reviews, prices, photos. Digitizing photographic film and slides. -Onyx X10. -Digitizing on a memory card. -Recording to a disk or a computer. -Price action. -Description. -Onyx X10 ... -Onyx X10 - find out prices and detailed specifications. -Watch a video review, read reviews, and discuss on the forum. -Pros, cons, and analogues. -Buy the Onyx X10 with a warranty at a low price. -Delivery in Ukraine: Kharkiv, Kiev, Dnipropetrovsk, Odessa, Zaporozhye, Lviv, and other cities. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md b/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md deleted file mode 100644 index b955a208b802c37e479095c41b6a957f9190b49a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Clash of Clans Real Server Mod APK The Best Way to Experience the Game.md +++ /dev/null @@ -1,129 +0,0 @@ -
      -

      Clash of Clans Real Server Mod APK: How to Download and Play

      -

      Clash of Clans is one of the most popular and addictive strategy games in the world. Millions of players build their own villages, train their troops, and battle with other clans online. But what if you want to play the game with unlimited resources, custom mods, and access to all the features without spending any money or waiting for hours? That's where a real server mod apk comes in handy. In this article, we will explain what a real server mod apk is, why you might want to use it, and how to download and install it on your device.

      -

      clash of clans real server mod apk


      Download ->>->>->> https://urllie.com/2uNEQv



      -

      What is Clash of Clans?

      -

      Clash of Clans is a freemium mobile game developed by Supercell, a Finnish company that also created other hit games like Clash Royale, Brawl Stars, and Hay Day. The game was released in 2012 for iOS and in 2013 for Android devices. Since then, it has become one of the most downloaded and highest-grossing apps in the world, with over 500 million downloads and billions of dollars in revenue.

      -

      The game is set in a fantasy world where you can create your own village, join or create a clan, and fight with other players in clan wars or multiplayer battles. You can also upgrade your buildings, defenses, troops, spells, heroes, and pets using various resources like gold, elixir, dark elixir, gems, and magic items. The game is constantly updated with new content, events, challenges, and features to keep you entertained and engaged.

      -

      What is a mod apk?

      -

      A mod apk is a modified version of an original app that has been altered by someone other than the developer. A mod apk can have different features, functions, graphics, or gameplay than the original app. For example, a mod apk can have unlimited resources, unlocked items, custom skins, cheats, hacks, or other enhancements that are not available in the original app.

      -

      clash of clans private server mod apk download
      -clash of clans mod apk unlimited everything real server
      -clash of clans hack mod apk real server 2023
      -clash of clans mod apk plenixclash real server
      -clash of clans mod apk latest version real server
      -clash of clans mod apk offline real server
      -clash of clans mod apk unlimited gems real server
      -clash of clans mod apk town hall 15 real server
      -clash of clans mod apk android 1 real server
      -clash of clans mod apk ios real server
      -clash of clans mod apk unlimited troops real server
      -clash of clans mod apk magic s1 real server
      -clash of clans mod apk fhx real server
      -clash of clans mod apk null's clash real server
      -clash of clans mod apk lights s1 real server
      -clash of clans mod apk darksoul real server
      -clash of clans mod apk miroclash real server
      -clash of clans mod apk hybrid base real server
      -clash of clans mod apk builder base real server
      -clash of clans mod apk supercell id real server
      -clash of clans mod apk no root real server
      -clash of clans mod apk online play real server
      -clash of clans mod apk unlimited gold and elixir real server
      -clash of clans mod apk new update real server
      -clash of clans mod apk with th14 real server
      -clash of clans mod apk free download for android real server
      -clash of clans mod apk unlimited money and gems real server
      -clash of clans mod apk hack version download real server
      -clash of clans mod apk anti ban real server
      -clash of clans mod apk working 100% real server
      -clash of clans mod apk original graphics real server
      -clash of clans mod apk no human verification real server
      -clash of clans mod apk all heroes unlocked real server
      -clash of clans mod apk custom buildings and heroes real server
      -clash of clans mod apk unlimited resources and clan wars real server
      -clash of clans mod apk with battle machine and night mode real server
      -clash of clans mod apk with royal champion and giga inferno real server
      -clash of clans mod apk with siege machines and wall wreckers real server
      -clash of clans mod apk with electro dragon and ice golem real server
      -clash of clans mod apk with pets and super troops real server
      -clash of clans mod apk with new skins and events real server
      -clash of clans mod apk with clan games and friendly challenges real server
      -clash of clans mod apk with global chat and clan chat real server
      -clash of clans mod apk with achievements and leaderboards real server
      -clash of clans mod apk with unlimited spells and traps real server
      -clash of clans mod apk with custom commands and mods menu real server

      -

      A mod apk can be downloaded from third-party websites or sources that are not affiliated with the official app store or developer. However, not all mod apks are safe or legal to use. Some mod apks may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Some mod apks may also violate the terms of service or policies of the original app or developer, which can result in bans or legal actions.
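If you do decide to download one anyway, a small precaution that costs nothing is to verify the file you actually received before installing it. The sketch below assumes a Linux or macOS machine and uses a placeholder file name; the idea is simply to compare the printed digest against whatever checksum the download page publishes, if it publishes one.

    # Print the SHA-256 digest of the downloaded file and compare it with the
    # checksum listed on the download page (if any) before installing.
    sha256sum downloaded-mod.apk        # on macOS: shasum -a 256 downloaded-mod.apk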

      -

      What is a real server mod apk?

      -

      A real server mod apk is a special type of mod apk that connects to the official servers of the original app instead of private servers or offline modes. A real server mod apk allows you to play the original game with all the features and functions that are available on the official servers, but with some modifications or additions that are not possible on the original app.

      -

For example, a real server mod apk for Clash of Clans can let you play the game with unlimited resources like gold, elixir, dark elixir, gems, and magic items. You can also use custom mods like unlimited troops, spells, heroes, pets, buildings, defenses, or other features that are not available in the original game. You can also access all the events, challenges, seasons, and rewards that are offered on the official servers.

      -

      Why would you want to use a real server mod apk for Clash of Clans?

      -

      There are many reasons why you might want to use a real server mod apk for Clash of Clans. Some of them are:

      -
        -
      • You want to have fun and experiment with different aspects of the game without worrying about the limitations or restrictions of the original game.
      • -
      • You want to save time and money by getting unlimited resources and items without spending any real money or waiting for hours.
      • -
      • You want to test your skills and strategies against other players on the official servers with your modified game.
      • -
      • You want to enjoy the latest updates and features of the game without having to update your app or download a new mod apk every time.
      • -
      • You want to have more control and customization over your game and play it according to your preferences and style.
      • -
      -

      However, there are also some drawbacks and risks of using a real server mod apk for Clash of Clans. Some of them are:

      -
        -
      • You may face technical issues or errors while playing the mod apk, such as crashes, glitches, bugs, or compatibility problems.
      • -
      • You may lose your original game data or progress if you do not backup your files before installing the mod apk.
      • -
      • You may get banned or suspended from the official servers if the developer detects your mod apk or if you abuse the mod features.
      • -
      • You may compromise the security and privacy of your device or account if you download a mod apk from an untrusted or malicious source.
      • -
      • You may miss out on the original experience and challenge of the game as it was intended by the developer.
      • -
      -

      How to download and install a real server mod apk for Clash of Clans?

      -

      If you have decided to try a real server mod apk for Clash of Clans, you need to follow some steps to download and install it on your device. Here is a step-by-step guide on how to do it:

      -

      Where to find a reliable real server mod apk for Clash of Clans?

      -

      The first step is to find a reliable source where you can download a real server mod apk for Clash of Clans. There are many websites and forums that offer mod apks for various games, but not all of them are safe or trustworthy. You need to be careful and do some research before downloading any mod apk from an unknown source. Some of the things you can do are:

      -
        -
      • Check the reviews and ratings of the website or forum where you found the mod apk. See what other users have said about their experience with the mod apk and whether they faced any issues or problems.
      • -
      • Check the date and version of the mod apk. Make sure it is compatible with your device and the latest version of the original game.
      • -
      • Check the size and content of the mod apk. Make sure it does not contain any unwanted or harmful software that can harm your device or account.
      • -
• Check the permissions and requirements of the mod apk. Make sure it does not ask for any unnecessary or suspicious permissions that can compromise your security or privacy. One way to inspect these from a computer before installing is shown in the sketch after this list.
      • -
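For the version and permission checks above, if you have the Android SDK build-tools installed on a computer, the aapt utility can show what an APK declares before the file ever reaches your phone. This is only a quick illustration, and the file name is a placeholder:

    # Show the package name, version code, and version name declared by the APK.
    aapt dump badging suspicious-mod.apk
    # List every permission the APK requests.
    aapt dump permissions suspicious-mod.apk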

      How to backup your original game data before installing the mod apk?

      -

      The second step is to backup your original game data before installing the mod apk. This is important because you may lose your progress or account if something goes wrong during the installation or if you want to switch back to the original game later. There are different ways to backup your game data, depending on your device and the type of data you want to save. Some of the common methods are:

      -
        -
      • Using Google Play Games or Facebook to sync your game data with your online account. This will allow you to restore your game data on any device that supports these platforms.
      • -
      • Using a file manager app or a computer to copy and paste your game data files from your device's internal storage or SD card to another location. This will allow you to manually restore your game data on the same device or a different device.
      • -
      • Using a cloud service or an external storage device to backup your game data files online or offline. This will allow you to access your game data from anywhere and anytime.
      • -
      -

      Make sure you know where your game data files are located and how to restore them before installing the mod apk. You can also use a backup app or tool that can automate the process for you.
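If you have a computer available and USB debugging enabled on your phone, Android's adb tool is one more option for copying the game data. Treat the lines below as a sketch only: the package name is the one Clash of Clans normally uses, adb backup is deprecated on recent Android versions, and some apps do not allow it at all.

    # Save the app's data to a local file over USB (the device will ask you to confirm).
    adb backup -f clash_backup.ab com.supercell.clashofclans
    # Restore that data later, for example after reinstalling the original game.
    adb restore clash_backup.ab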

      -

      How to enable unknown sources on your device and install the mod apk?

      -

      The third step is to enable unknown sources on your device and install the mod apk. This is necessary because most devices do not allow installing apps from sources other than the official app store or developer. To enable unknown sources, you need to follow these steps:

      -
        -
      1. Go to your device's settings and look for security or privacy options.
      2. -
      3. Find and tap on the option that says unknown sources, install unknown apps, or something similar.
      4. -
      5. Toggle on the option and confirm your choice if prompted.
      6. -
      -

      Once you have enabled unknown sources, you can proceed to install the mod apk. To install the mod apk, you need to follow these steps:

      -
        -
      1. Locate and tap on the mod apk file that you downloaded from the source.
      2. -
      3. Follow the instructions on the screen and agree to the terms and conditions if asked.
      4. -
      5. Wait for the installation to complete and tap on open or done when finished.
      6. -
      -

      How to launch and play the mod apk on your device?

      -

      The final step is to launch and play the mod apk on your device. This is easy and similar to playing any other app on your device. To launch and play the mod apk, you need to follow these steps:

      -
        -
      1. Find and tap on the icon of the mod apk on your device's home screen or app drawer.
      2. -
      3. Wait for the game to load and sign in with your account if required.
      4. -
      5. Enjoy playing the game with unlimited resources, custom mods, and access to all the features.
      6. -
      -

      Conclusion

      -

      In conclusion, a real server mod apk for Clash of Clans is a modified version of the original game that connects to the official servers and allows you to play with unlimited resources, custom mods, and access to all the features. It can be fun and exciting to use, but it also comes with some drawbacks and risks that you should be aware of. If you want to try a real server mod apk for Clash of Clans, you need to find a reliable source, backup your original game data, enable unknown sources, install the mod apk, and launch and play it on your device. We hope this article has helped you understand what a real server mod apk is, why you might want to use it, and how to download and install it on your device.

      -

      Frequently Asked Questions

      -

      Here are some of the frequently asked questions about real server mod apks for Clash of Clans:

      -

      Is using a real server mod apk for Clash of Clans legal?

      -

      Using a real server mod apk for Clash of Clans may not be legal in some countries or regions, depending on their laws and regulations regarding intellectual property rights, digital piracy, online gaming, or other related matters. You should check with your local authorities before using a real server mod apk for Clash of Clans.

      -

      Is using a real server mod apk for Clash of Clans safe?

      -

      Using a real server mod apk for Clash of Clans may not be safe for your device or account, depending on the source, quality, and content of the mod apk. You should only download a real server mod apk for Clash of Clans from a trusted and reputable source that has positive reviews and ratings from other users. You should also scan the mod apk with an antivirus or anti-malware software before installing it on your device. You should also backup your original game data and enable unknown sources on your device before installing the mod apk. You should also be careful not to abuse the mod features or violate the terms of service or policies of the original game or developer, as this may result in bans or legal actions.

      -

      Is using a real server mod apk for Clash of Clans fair?

      -

      Using a real server mod apk for Clash of Clans may not be fair for other players who play the game without any modifications or enhancements. You may have an unfair advantage over them in terms of resources, items, features, or gameplay. You may also ruin their experience or enjoyment of the game by using cheats, hacks, or mods that affect their gameplay. You should respect other players and play the game in a fair and ethical manner.

      -

      Is using a real server mod apk for Clash of Clans permanent?

      -

      Using a real server mod apk for Clash of Clans is not permanent, as you can always switch back to the original game if you want to. You can uninstall the mod apk from your device and restore your original game data from your backup. You can also update your original game app from the official app store or developer if there are any new updates or features available.

      -

      Is using a real server mod apk for Clash of Clans worth it?

      -

      Using a real server mod apk for Clash of Clans may be worth it for some players who want to have fun and experiment with different aspects of the game without worrying about the limitations or restrictions of the original game. It may also be worth it for some players who want to save time and money by getting unlimited resources and items without spending any real money or waiting for hours. However, it may not be worth it for some players who prefer the original experience and challenge of the game as it was intended by the developer. It may also not be worth it for some players who value their security, privacy, and fairness over their entertainment and enjoyment. Ultimately, it depends on your personal preference and perspective whether using a real server mod apk for Clash of Clans is worth it or not.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md b/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md deleted file mode 100644 index 3470adf1a0f484d28b9e56ad0fb8cf8c475fa02e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Stickman The Flash APK Mod Menu with God Mode and Unlimited Power.md +++ /dev/null @@ -1,118 +0,0 @@ -
      -

      Stickman The Flash APK Mod Menu: A Guide for Gamers

      -

      Do you love stickman games? Do you enjoy fast-paced action and epic battles? If yes, then you should try Stickman The Flash, a new game that will test your reflexes and skills. In this game, you will play as a stickman hero who has superpowers and can move faster than light. You will face various enemies and challenges in different modes and levels. You will also be able to customize your character and weapons with various upgrades and items.

      -

      stickman the flash apk mod menu


      Downloadhttps://urllie.com/2uNyPX



      -

      But what if you want to make the game more fun and easy? What if you want to have unlimited power, god mode, unlocked weapons, and more? Well, there is a way to do that. You can use Stickman The Flash APK Mod Menu, a modified version of the game that gives you access to many features and options that are not available in the original game. In this article, we will tell you everything you need to know about Stickman The Flash APK Mod Menu, including what it is, how to download and install it, and how to play the game with it. Let's get started!

      -

      What is Stickman The Flash?

      -

      Stickman The Flash is a 2D action game developed by StormHit Games. It was released in 2021 for Android devices. The game is inspired by the popular DC Comics superhero, The Flash, who can run at superhuman speeds and manipulate time. In the game, you will control a stickman version of The Flash, who has similar abilities and powers. You will use your speed, strength, and skills to fight against various enemies, such as robots, ninjas, zombies, aliens, and more. You will also encounter bosses and mini-bosses that will challenge your abilities.

      -

      Features of Stickman The Flash

      -

      Stickman The Flash has many features that make it an exciting and addictive game. Some of these features are:

      -

      stickman the flash mod apk unlimited money
      -stickman the flash mod apk download for android
      -stickman the flash mod apk latest version 2023
      -stickman the flash mod apk god mode unlocked
      -stickman the flash mod apk free shopping
      -stickman the flash mod apk no ads
      -stickman the flash mod apk all weapons
      -stickman the flash mod apk unlimited power
      -stickman the flash mod apk offline
      -stickman the flash mod apk hack
      -stickman the flash mod menu apk download
      -stickman the flash mod menu apk free
      -stickman the flash mod menu apk 2023
      -stickman the flash mod menu apk god mode
      -stickman the flash mod menu apk unlimited money and power
      -stickman the flash mod menu apk all characters
      -stickman the flash mod menu apk no root
      -stickman the flash mod menu apk latest update
      -stickman the flash mod menu apk android 1
      -stickman the flash mod menu apk revdl
      -download stickman the flash apk mod menu for free
      -download stickman the flash apk mod menu latest version
      -download stickman the flash apk mod menu android 2023
      -download stickman the flash apk mod menu god mode and unlimited power
      -download stickman the flash apk mod menu unlocked everything
      -download stickman the flash apk mod menu no verification
      -download stickman the flash apk mod menu from mediafire
      -download stickman the flash apk mod menu without ads
      -how to install stickman the flash apk mod menu on android
      -how to use stickman the flash apk mod menu features
      -how to update stickman the flash apk mod menu 2023
      -how to get stickman the flash apk mod menu for free
      -how to hack stickman the flash with apk mod menu
      -how to play stickman the flash with apk mod menu offline
      -how to unlock all weapons in stickman the flash with apk mod menu
      -best settings for stickman the flash apk mod menu 2023
      -best tips and tricks for stickman the flash apk mod menu gameplay
      -best guide and tutorial for stickman the flash apk mod menu installation and usage
      -best review and rating for stickman the flash apk mod menu 2023

      -
        -
      • Simple and intuitive controls: You can control your character with just one finger. Tap to move, swipe to dash, and hold to charge your power.
      • -
      • Stunning graphics and animations: The game has colorful and detailed graphics that create a vivid and dynamic environment. The animations are smooth and realistic, showing the effects of your movements and attacks.
      • -
      • Various modes and levels: The game has different modes that offer different challenges and objectives. You can play in story mode, where you will follow the plot and complete missions. You can also play in survival mode, where you will face endless waves of enemies until you die. You can also play in arena mode, where you will fight against other players online.
      • -
      • Customizable character and weapons: You can customize your character's appearance, such as his hair, eyes, clothes, and accessories. You can also upgrade your weapons and skills with coins that you earn from playing the game. You can choose from different types of weapons, such as swords, guns, hammers, axes, etc.
      • -
      • Achievements and leaderboards: You can unlock various achievements by completing tasks and challenges in the game. You can also compete with other players on the leaderboards by scoring high points in each mode.
      • -
      -

      How to play Stickman The Flash

      -

      The gameplay of Stickman The Flash is simple but fun. Here are some basic steps on how to play the game:

      -
        -
      1. Select a mode that you want to play.
      2. -
      3. Select a level or stage that you want to play.
      4. -
      5. Select a character and a weapon that you want to use.
      6. -
      7. Tap the screen to move your character.
      8. -
9. Swipe the screen to dash or dodge. Hold the screen to charge your power and release it to unleash a special attack.
      10. -
      11. Defeat all the enemies and complete the objectives of each level or stage.
      12. -
      13. Earn coins and rewards for your performance.
      14. -
      -

      That's how you play Stickman The Flash. It's easy to learn but hard to master. You will need to use your reflexes, skills, and strategy to overcome the challenges and enemies in the game.

      -

      What is Stickman The Flash APK Mod Menu?

      -

      Stickman The Flash APK Mod Menu is a modified version of the original game that gives you access to many features and options that are not available in the original game. It is a file that you can download and install on your Android device. It will allow you to modify the game according to your preferences and needs.

      -

      Benefits of using Stickman The Flash APK Mod Menu

      -

      There are many benefits of using Stickman The Flash APK Mod Menu. Some of these benefits are:

      -
        -
      • You can have unlimited power, which means you can use your special attack as much as you want without waiting for it to recharge.
      • -
      • You can have god mode, which means you will not take any damage from enemies or obstacles.
      • -
      • You can have unlocked weapons, which means you can use any weapon in the game without buying or upgrading it.
      • -
      • You can have unlimited coins, which means you can buy and upgrade anything in the game without worrying about the cost.
      • -
      • You can have no ads, which means you will not see any annoying ads while playing the game.
      • -
      -

      How to download and install Stickman The Flash APK Mod Menu

      -

      Downloading and installing Stickman The Flash APK Mod Menu is easy and simple. Here are some steps on how to do it:

      -
        -
      1. Go to a trusted website that provides the link to download Stickman The Flash APK Mod Menu. You can search for it on Google or use this link: .
      2. -
      3. Click on the download button and wait for the file to be downloaded on your device.
      4. -
      5. Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install the file that you downloaded.
      6. -
      7. Go to your device's file manager and locate the file that you downloaded. Tap on it and follow the instructions to install it.
      8. -
      9. Once the installation is done, you can launch the game and enjoy the mod menu features.
      10. -
      -

      Note: You may need to uninstall the original game before installing the mod menu version. Also, make sure that your device has enough space and meets the minimum requirements for the game.
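If you prefer to do the uninstall and install from a computer instead of on the phone itself, adb can usually achieve the same result. This is only an illustrative sketch; the package and file names are placeholders for whatever the original game and your downloaded file actually use.

    # Remove the original game first if the mod menu version conflicts with it.
    adb uninstall com.example.stickmantheflash
    # Sideload the downloaded mod APK onto the connected device.
    adb install stickman-the-flash-mod.apk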

      -

      Tips and tricks for playing Stickman The Flash with APK Mod Menu

      -

      Playing Stickman The Flash with APK Mod Menu can be very fun and easy. However, if you want to make the most out of it, here are some tips and tricks that you can follow:

      -

      Use your powers wisely

      -

      Even though you have unlimited power, you should still use it wisely. Don't spam your special attack all the time, as it may make the game boring and less challenging. Use it when you need it, such as when you face a boss or a large group of enemies. Also, don't forget to use your dash and dodge abilities, as they can help you avoid damage and move faster.

      -

      Upgrade your weapons and skills

      -

      Even though you have unlocked weapons, you should still upgrade them and your skills. Upgrading them will make them more powerful and effective, as well as give you more options and variety. You can also try different combinations of weapons and skills, such as using a sword with a gun, or using a hammer with a speed boost. Experiment with different styles and find what suits you best.

      -

      Choose your character and mode

      -

      Even though you have god mode, you should still choose your character and mode carefully. Choosing a different character will give you a different appearance and personality, as well as different stats and abilities. Choosing a different mode will give you a different challenge and objective, as well as different rewards and rankings. You can also switch between them anytime you want, so don't be afraid to try new things and have fun.

      -

      Conclusion

      -

      Stickman The Flash is a great game for anyone who loves stickman games, action games, or superhero games. It has simple but fun gameplay, stunning graphics and animations, various modes and levels, customizable character and weapons, achievements and leaderboards, and more. It is also free to play and download on Android devices.

      -

      However, if you want to make the game more fun and easy, you can use Stickman The Flash APK Mod Menu, a modified version of the game that gives you access to many features and options that are not available in the original game. You can have unlimited power, god mode, unlocked weapons, unlimited coins, no ads, and more. You can also modify the game according to your preferences and needs.

      -

      To use Stickman The Flash APK Mod Menu, you need to download and install it on your device. You can find the link to download it on a trusted website or use this link: . You also need to enable the option to install apps from unknown sources on your device's settings. Once you install it, you can launch the game and enjoy the mod menu features.

      -

      Playing Stickman The Flash with APK Mod Menu can be very fun and easy, but you should still use some tips and tricks to make the most out of it. You should use your powers wisely, upgrade your weapons and skills, choose your character and mode, and have fun. You can also switch between the original game and the mod menu version anytime you want.

      -

      We hope this article has helped you learn more about Stickman The Flash APK Mod Menu and how to use it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

      -

      FAQs

      -

      Here are some frequently asked questions about Stickman The Flash APK Mod Menu:

      -
        -
      1. Is Stickman The Flash APK Mod Menu safe to use?
      2. -

        Yes, Stickman The Flash APK Mod Menu is safe to use as long as you download it from a trusted website or source. However, you should always be careful when installing apps from unknown sources, as they may contain viruses or malware that can harm your device or data.

        -
      3. Is Stickman The Flash APK Mod Menu legal to use?
      4. -

        No, Stickman The Flash APK Mod Menu is not legal to use, as it violates the terms and conditions of the original game. Using it may result in banning your account or losing your progress in the game. Therefore, we do not recommend using it for any purposes other than entertainment or education.

        -
      5. Does Stickman The Flash APK Mod Menu work on iOS devices?
      6. -

        No, Stickman The Flash APK Mod Menu only works on Android devices. It is not compatible with iOS devices or any other platforms.

        -
      7. Can I play online with Stickman The Flash APK Mod Menu?
      8. -

        Yes, you can play online with Stickman The Flash APK Mod Menu, but you may encounter some problems or issues. For example, you may not be able to connect with other players who are using the original game or a different version of the mod menu. You may also face lagging or crashing issues due to the mod menu features.

        -
      9. Can I update Stickman The Flash APK Mod Menu?
      10. -

        Yes, you can update Stickman The Flash APK Mod Menu whenever there is a new version available. However, you may need to uninstall the previous version and install the new one manually. You may also lose some of your data or settings in the process.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index 73ae45f240f346fec6bb1ec87a2616055e481827..0000000000000000000000000000000000000000 --- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,52 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime, re - -@CatchException -def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?用中文列举两条,然后分别给出描述事件的两个英文单词。' + '当你给出关键词时,使用以下json格式:{"KeyWords":[EnglishKeyWord1,EnglishKeyWord2]}。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt='输出格式示例:1908年,美国消防救援事业发展的“美国消防协会”成立。关键词:{"KeyWords":["Fire","American"]}。' - ) - gpt_say = get_images(gpt_say) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - -def get_images(gpt_say): - def get_image_by_keyword(keyword): - import requests - from bs4 import BeautifulSoup - response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2) - for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"): - if "data-src" in image_element: break - return image_element["data-src"] - - for keywords in re.findall('{"KeyWords":\[(.*?)\]}', gpt_say): - keywords = [n.strip('"') for n in keywords.split(',')] - try: - description = keywords[0] - url = get_image_by_keyword(keywords[0]) - img_tag = f"\n\n![{description}]({url})" - gpt_say += img_tag - except: - continue - return gpt_say \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh b/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh deleted file mode 100644 index ae88b230fa223c3d2c519e4f09cb1c703319af48..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=bart_qg # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks-per-node=8 # number of tasks to run per node -#SBATCH --cpus-per-task=10 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per 
node -#SBATCH -o %x-%j.log # output and error log file names (%x for job id) -set -x -e - -MODEL_NAME=IDEA-CCNL/Randeng-BART-139M -RUN_NAME=bart_v0_test -ROOT_DIR=../../workspace/log/$RUN_NAME - -config_json="$ROOT_DIR/$MODEL_NAME.ds_config.json" -export MASTER_PORT=$[RANDOM%10000+40000] - -MICRO_BATCH_SIZE=32 - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE, - "gradient_clipping": 1, - "zero_optimization": { - "stage": 1 - }, - "fp16": { - "enabled": true, - } -} -EOT -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=../../workspace/torch_extensions - -DATA_ARGS=" \ - --train_file train.json \ - --val_file dev.json \ - --test_file test.json \ - --tokenizer_type bart \ - --num_workers 8 \ - --dataloader_workers 2 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --max_seq_lengt 512 \ - --max_src_length 32 \ - --max_kno_length 416 \ - --max_tgt_length 64 \ - --mask_ans_style anstoken_multispan \ - " - -MODEL_ARGS="\ - --model_path $MODEL_NAME/ \ - --learning_rate 1e-4 \ - --min_learning_rate 1e-8 \ - --lr_decay_steps 100000 \ - --weight_decay 1e-2 \ - --warmup_steps 1000 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_loss \ - --save_top_k 3 \ - --mode min \ - --save_last \ - --every_n_train_steps 5000 \ - --save_ckpt_path $ROOT_DIR/ckpt/ \ - --load_ckpt_path $ROOT_DIR/ckpt/ \ - --filename model-{step:02d}-{train_loss:.4f} \ - " - -TRAINER_ARGS="\ - --gradient_clip_val 1.0 \ - --max_epochs 1 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy ddp \ - --log_every_n_steps 100 \ - --val_check_interval 0.5 \ - --accumulate_grad_batches 1 \ - --default_root_dir $ROOT_DIR \ - --tensorboard_dir $ROOT_DIR \ - --label_smooth 0.1 \ - " - - - -export options=" \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " -# test -export SCRIPT_PATH=./finetune_bart.py - -python3 ${SCRIPT_PATH} $options > $ROOT_DIR/test.log - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md deleted file mode 100644 index e0947f0fb988d8a3e3f3dce791d06ca6d3379d9b..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingotingo A Simple and Fast LinkedIn Video Downloader.md +++ /dev/null @@ -1,127 +0,0 @@ -
      -

      How to Download LinkedIn Video with BingoTingo

      -

      LinkedIn is one of the most popular social media platforms for professionals and businesses. It allows you to share your expertise, network with others, and discover new opportunities. But did you know that you can also share and watch videos on LinkedIn?

      -

      In this article, we will show you how to download LinkedIn video with BingoTingo, a free online video downloader that lets you save any video from any website in seconds. Whether you want to watch a video offline, share it with your friends, or use it for your own projects, BingoTingo can help you do it easily and quickly.

      -

      bingotingo how to download linkedin video


      Download Ziphttps://gohhs.com/2uPuaQ



      -

      What is LinkedIn Video?

      -

      LinkedIn video is a feature that allows you to upload and share videos on your LinkedIn profile, page, or group. You can also watch videos posted by other users on your feed or search for videos by topic or hashtag.

      -

      LinkedIn video can be used for various purposes, such as:

      -
        -
      • Showing your work or portfolio
      • -
      • Demonstrating your skills or knowledge
      • -
      • Sharing your insights or opinions
      • -
      • Promoting your products or services
      • -
      • Engaging with your audience or customers
      • -
      • Learning from experts or influencers
      • -
      -

      Why Download LinkedIn Video?

      -

      Downloading LinkedIn video can be useful for many reasons, such as:

      -
        -
      • You can watch it offline without internet connection
      • -
      • You can save it on your device for future reference
      • -
      • You can edit it or add subtitles or captions
      • -
      • You can share it on other platforms or channels
      • -
      • You can use it for your own presentations or projects
      • -
      -

      What is BingoTingo?

      -

      BingoTingo is a free online video downloader that allows you to download any video from any website in seconds. You don't need to install any software or register any account. You just need to copy and paste the URL of the video you want to download and BingoTingo will do the rest for you.

      -

      How BingoTingo Works

      -

      BingoTingo works by extracting the video source from the URL you provide and converting it into a downloadable file. You can choose from various formats and quality options, such as MP4, WEBM, 3GP, 720p, 480p, 360p, etc. You can also preview the video before downloading it.
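To make that idea concrete, here is a rough command-line sketch of what "extract the video source, then download it" can look like. This is not BingoTingo's actual code; it assumes the post page exposes a direct video URL in an og:video meta tag, which is not guaranteed, and the page URL is a hypothetical example.

    # Fetch the post page and pull the first og:video URL out of the HTML.
    PAGE_URL="https://www.linkedin.com/posts/example-video-post"    # hypothetical URL
    VIDEO_URL=$(curl -sL "$PAGE_URL" \
      | grep -o 'property="og:video[^>]*content="[^"]*"' \
      | sed 's/.*content="//;s/"$//' \
      | head -n 1)
    # Download the media file itself to disk.
    curl -L "$VIDEO_URL" -o video.mp4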

      -

      Benefits of BingoTingo

      -

      BingoTingo has many benefits over other video downloaders, such as:

      -

      bingotingo linkedin video downloader online
      -bingotingo how to save linkedin videos to computer
      -bingotingo best way to download videos from linkedin
      -bingotingo how to copy video link from linkedin
      -bingotingo how to download linkedin learning videos
      -bingotingo how to download linkedin live videos
      -bingotingo how to download videos from linkedin app
      -bingotingo how to download videos from linkedin messages
      -bingotingo how to download videos from linkedin profile
      -bingotingo how to download videos from linkedin feed
      -bingotingo how to download videos from linkedin pulse
      -bingotingo how to download videos from linkedin groups
      -bingotingo how to download videos from linkedin stories
      -bingotingo how to download videos from linkedin ads
      -bingotingo how to download videos from linkedin events
      -bingotingo free tool for downloading linkedin videos
      -bingotingo easy steps for downloading linkedin videos
      -bingotingo guide for downloading linkedin videos in 2023
      -bingotingo tips and tricks for downloading linkedin videos
      -bingotingo benefits of downloading linkedin videos
      -bingotingo why you should download linkedin videos
      -bingotingo what you can do with downloaded linkedin videos
      -bingotingo how to edit downloaded linkedin videos
      -bingotingo how to share downloaded linkedin videos
      -bingotingo how to upload downloaded linkedin videos
      -bingotingo how to convert downloaded linkedin videos to different formats
      -bingotingo how to compress downloaded linkedin videos
      -bingotingo how to optimize downloaded linkedin videos for SEO
      -bingotingo how to use downloaded linkedin videos for marketing
      -bingotingo how to use downloaded linkedin videos for education
      -bingotingo how to use downloaded linkedin videos for entertainment
      -bingotingo how to use downloaded linkedin videos for inspiration
      -bingotingo how to use downloaded linkedin videos for networking
      -bingotingo how to use downloaded linkedin videos for personal branding
      -bingotingo how to use downloaded linkedin videos for social media
      -bingotingo alternatives to downloading linkedin videos
      -bingotingo pros and cons of downloading linkedin videos
      -bingotingo reviews of downloading linkedin videos with bingotingo
      -bingotingo testimonials of downloading linkedin videos with bingotingo
      -bingotingo case studies of downloading linkedin videos with bingotingo
      -bingotingo FAQs of downloading linkedin videos with bingotingo
      -bingotingo features of downloading linkedin videos with bingotingo
      -bingotingo pricing of downloading linkedin videos with bingotingo
      -bingotingo support of downloading linkedin videos with bingotingo
      -bingotingo comparison of downloading linkedin videos with other tools
      -bingotingo challenges of downloading linkedin videos with other tools
      -bingotingo solutions of downloading linkedin videos with other tools
      -bingotingo recommendations of downloading linkedin videos with other tools
      -bingotingo best practices of downloading linkedin videos with other tools

      -
        -
      • It is free and unlimited
      • -
      • It is fast and easy
      • -
      • It supports any website and any device
      • -
      • It does not require any installation or registration
      • -
      • It does not contain any ads or malware
      • -
      • It respects your privacy and security
      • -
      -

      How to Download LinkedIn Video with BingoTingo

      -

      Downloading LinkedIn video with BingoTingo is very simple and straightforward. You just need to follow these five steps:

      -

      Step 1: Find the LinkedIn Video You Want to Download

      -

      The first step is to find the LinkedIn video you want to download. You can do this by browsing your feed, searching by topic or hashtag, or visiting a specific profile, page, or group. Once you find the video, click on it to open it in a new tab.

      -

      Step 2: Copy the URL of the LinkedIn Video

      -

      The second step is to copy the URL of the LinkedIn video. You can do this by selecting the address bar of your browser and pressing Ctrl+C (Windows) or Command+C (Mac). Alternatively, you can right-click on the video and choose Copy Video URL from the menu.

      -

      Step 3: Paste the URL into BingoTingo's Search Box

      -

      The third step is to paste the URL into BingoTingo's search box. You can do this by visiting bingotingo.com, clicking on the search box, and pressing Ctrl+V (Windows) or Command+V (Mac). Alternatively, you can right-click on the search box and choose Paste from the menu.

      -

      Step 4: Choose Your Preferred Format and Quality

      -

      The fourth step is to choose your preferred format and quality for your downloaded video. You can do this by clicking on the drop-down menu next to the search box and selecting one of the available options. You can also preview the video by clicking on the Play button.

      -

      Step 5: Click on Download and Enjoy Your Video

      -

      The fifth and final step is to click on the Download button and enjoy your video. You can do this by clicking on the green Download button below the preview window. Your video will start downloading automatically to your device. You can then watch it offline, share it with others, or use it for your own purposes.

      -

      Tips and Tricks for Downloading LinkedIn Video with BingoTingo

      -

      To make your experience of downloading LinkedIn video with BingoTingo even better, here are some tips and tricks you can follow:

      -

      Use a Reliable Internet Connection

      -

      To ensure a smooth and fast download process, make sure you have a reliable internet connection. Avoid using public Wi-Fi networks or mobile data that may be slow or unstable. If possible, use a wired connection or a strong Wi-Fi signal.

      -

      Check the Video Permissions Before Downloading

      -

      To respect the rights of the video creators and avoid any legal issues, check the video permissions before downloading. Some videos may be private, restricted, or copyrighted. In that case, you may need to ask for permission from the video owner or follow the terms and conditions of LinkedIn. You can check the video permissions by clicking on the three dots icon on the top right corner of the video and choosing View Video Details from the menu.

      -

      Use a Good Video Player to Watch Your Downloaded Videos

      -

      To enjoy your downloaded videos in the best quality and performance, use a good video player to watch them. Some video players may not support certain formats or quality options, or may have issues with playback or sound. We recommend using VLC Media Player, which is a free and versatile video player that supports almost any format and quality.

      -

      Conclusion

      -

      Downloading LinkedIn video with BingoTingo is a great way to save and watch any video from LinkedIn offline, share it with others, or use it for your own projects. BingoTingo is a free, fast, and easy online video downloader that supports any website and any device. You just need to copy and paste the URL of the video you want to download and choose your preferred format and quality. BingoTingo will do the rest for you in seconds.

      -

      We hope this article has helped you learn how to download LinkedIn video with BingoTingo. If you have any questions or feedback, please feel free to contact us or leave a comment below. We would love to hear from you.

      -

      FAQs

      -

      Here are some frequently asked questions about downloading LinkedIn video with BingoTingo:

      -
        -
      1. Is BingoTingo safe to use?
      2. -

        Yes, BingoTingo is safe to use. It does not contain any ads or malware, and it does not collect or store any of your personal data or information. It also respects your privacy and security by using encryption and HTTPS protocols.

        -
      3. Can I download LinkedIn live videos with BingoTingo?
      4. -

        Yes, you can download LinkedIn live videos with BingoTingo. However, you need to wait until the live stream is over and the video is available on the website. Then, you can follow the same steps as described above to download it.

        -
      5. Can I download multiple LinkedIn videos at once with BingoTingo?
      6. -

        No, you cannot download multiple LinkedIn videos at once with BingoTingo. You need to download each video individually by copying and pasting its URL into BingoTingo's search box. However, you can open multiple tabs or windows of BingoTingo and download different videos simultaneously.

        -
      7. Can I download LinkedIn videos on my mobile device with BingoTingo?
      8. -

        Yes, you can download LinkedIn videos on your mobile device with BingoTingo. You can use any browser on your smartphone or tablet to access bingotingo.com and follow the same steps as described above to download any video from LinkedIn.

        -
      9. Can I download LinkedIn videos in HD quality with BingoTingo?
      10. -

        Yes, you can download LinkedIn videos in HD quality with BingoTingo. You can choose from various quality options, such as 720p, 1080p, or 4K, depending on the availability of the video source. However, keep in mind that higher quality videos will take longer to download and occupy more space on your device.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py b/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py deleted file mode 100644 index eb91faa60b79f0f34aba1bb4810c2be7be8438f3..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/musdb18.py +++ /dev/null @@ -1,198 +0,0 @@ -import argparse -import os -import pickle - -import matplotlib.pyplot as plt -import numpy as np - - -def load_sdrs(workspace, task_name, filename, config, gpus, source_type): - - stat_path = os.path.join( - workspace, - "statistics", - task_name, - filename, - "config={},gpus={}".format(config, gpus), - "statistics.pkl", - ) - - stat_dict = pickle.load(open(stat_path, 'rb')) - - median_sdrs = [e['median_sdr_dict'][source_type] for e in stat_dict['test']] - - return median_sdrs - - -def plot_statistics(args): - - # arguments & parameters - workspace = args.workspace - select = args.select - task_name = "musdb18" - filename = "train" - - # paths - fig_path = os.path.join('results', task_name, "sdr_{}.pdf".format(select)) - os.makedirs(os.path.dirname(fig_path), exist_ok=True) - - linewidth = 1 - lines = [] - fig, ax = plt.subplots(1, 1, figsize=(8, 6)) - - if select == '1a': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - ylim = 15 - - elif select == '1b': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,unet', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - ylim = 20 - - if select == '1c': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,resunet', - gpus=2, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet_subbandtime', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='unet_subband,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,resunet_subbandtime', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='resunet_subband,l1_wav', linewidth=linewidth) - lines.append(line) - - ylim = 15 - - elif select == '1d': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,unet', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,resunet', - gpus=2, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth) - lines.append(line) - - # sdrs = load_sdrs( - # workspace, - # task_name, - # filename, - # config='accompaniment-vocals,unet_subbandtime', - # gpus=1, - # source_type="accompaniment", - # ) - # (line,) = ax.plot(sdrs, label='UNet_subbtandtime,l1_wav', 
linewidth=linewidth) - # lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,resunet_subbandtime', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot( - sdrs, label='ResUNet_subbtandtime,l1_wav', linewidth=linewidth - ) - lines.append(line) - - ylim = 20 - - else: - raise Exception('Error!') - - eval_every_iterations = 10000 - total_ticks = 50 - ticks_freq = 10 - - ax.set_ylim(0, ylim) - ax.set_xlim(0, total_ticks) - ax.xaxis.set_ticks(np.arange(0, total_ticks + 1, ticks_freq)) - ax.xaxis.set_ticklabels( - np.arange( - 0, - total_ticks * eval_every_iterations + 1, - ticks_freq * eval_every_iterations, - ) - ) - ax.yaxis.set_ticks(np.arange(ylim + 1)) - ax.yaxis.set_ticklabels(np.arange(ylim + 1)) - ax.grid(color='b', linestyle='solid', linewidth=0.3) - plt.legend(handles=lines, loc=4) - - plt.savefig(fig_path) - print('Save figure to {}'.format(fig_path)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--workspace', type=str, required=True) - parser.add_argument('--select', type=str, required=True) - - args = parser.parse_args() - - plot_statistics(args) diff --git a/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py b/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py deleted file mode 100644 index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/lp_train.py +++ /dev/null @@ -1,301 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import LPLoss, LPMetrics, lp_gather_features -from open_clip.utils import do_mixup, get_mix_lambda -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, "module"): - return model.module - else: - return model - - -def train_one_epoch( - model, - data, - epoch, - optimizer, - scaler, - scheduler, - args, - tb_writer=None, - extra_suffix="", -): - device = torch.device(args.device) - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - model.train() - loss = LPLoss(args.lp_loss) - - dataloader, sampler = data["train"].dataloader, data["train"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - num_batches_per_epoch = dataloader.num_batches - sample_digits = 
math.ceil(math.log(dataloader.num_samples + 1, 10)) - - # for toy dataset - if args.dataset_type == "toy": - dataloader.dataset.generate_queue() - - loss_m = AverageMeter() - batch_time_m = AverageMeter() - data_time_m = AverageMeter() - end = time.time() - - for i, batch in enumerate(dataloader): - step = num_batches_per_epoch * epoch + i - - if isinstance(scheduler, dict): - for s in scheduler.values(): - s(step) - else: - scheduler(step) - - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - if args.mixup: - # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146 - mix_lambda = torch.from_numpy( - get_mix_lambda(0.5, len(audio["waveform"])) - ).to(device) - class_label = do_mixup(class_label, mix_lambda) - else: - mix_lambda = None - - data_time_m.update(time.time() - end) - if isinstance(optimizer, dict): - for o_ in optimizer.values(): - o_.zero_grad() - else: - optimizer.zero_grad() - - with autocast(): - pred = model(audio, mix_lambda=mix_lambda, device=device) - total_loss = loss(pred, class_label) - - if isinstance(optimizer, dict): - if scaler is not None: - scaler.scale(total_loss).backward() - for o_ in optimizer.values(): - if args.horovod: - o_.synchronize() - scaler.unscale_(o_) - with o_.skip_synchronize(): - scaler.step(o_) - else: - scaler.step(o_) - scaler.update() - else: - total_loss.backward() - for o_ in optimizer.values(): - o_.step() - else: - if scaler is not None: - scaler.scale(total_loss).backward() - if args.horovod: - optimizer.synchronize() - scaler.unscale_(optimizer) - with optimizer.skip_synchronize(): - scaler.step(optimizer) - else: - scaler.step(optimizer) - scaler.update() - else: - total_loss.backward() - optimizer.step() - - # Note: we clamp to 4.6052 = ln(100), as in the original paper. - with torch.no_grad(): - unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100)) - unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100)) - - batch_time_m.update(time.time() - end) - end = time.time() - batch_count = i + 1 - - if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch): - if isinstance(audio, dict): - batch_size = len(audio["waveform"]) - else: - batch_size = len(audio) - num_samples = batch_count * batch_size * args.world_size - samples_per_epoch = dataloader.num_samples - percent_complete = 100.0 * batch_count / num_batches_per_epoch - - # NOTE loss is coarsely sampled, just master node and per log update - loss_m.update(total_loss.item(), batch_size) - if isinstance(optimizer, dict): - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - ) - - # Save train loss / etc. 
Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": optimizer.param_groups[0]["lr"], - } - for name, val in log_data.items(): - name = f"train{extra_suffix}/{name}" - if tb_writer is not None: - tb_writer.add_scalar(name, val, step) - if args.wandb: - assert wandb is not None, "Please install wandb." - wandb.log({name: val, "step": step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""): - metrics = {} - if not args.parallel_eval: - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - # CHANGE - # zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - # metrics.update(zero_shot_metrics) - if is_master(args): - print("Evaluating...") - metric_names = args.lp_metrics.split(",") - eval_tool = LPMetrics(metric_names=metric_names) - - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - if "val" in data and ( - args.val_frequency - and ((epoch % args.val_frequency) == 0 or epoch == args.epochs) - ): - if args.parallel_eval: - dataloader, sampler = data["val"].dataloader, data["val"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - samples_per_val = dataloader.num_samples - else: - dataloader = data["val"].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - eval_info = {"pred": [], "target": []} - with torch.no_grad(): - for i, batch in enumerate(dataloader): - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - with autocast(): - pred = model(audio, device=device) - if args.parallel_eval: - pred, class_label = lp_gather_features( - pred, class_label, args.world_size, args.horovod - ) - eval_info["pred"].append(pred) - eval_info["target"].append(class_label) - - num_samples += class_label.shape[0] - - if (i % 100) == 0: # and i != 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]" - ) - - if is_master(args): - eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu() - eval_info["target"] = torch.cat(eval_info["target"], 0).cpu() - metric_dict = eval_tool.evaluate_mertics( - eval_info["pred"], eval_info["target"] - ) - metrics.update(metric_dict) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - - if is_master(args): - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\n".join( - ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics] - ) - ) - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, "Please install wandb." 
- for name, val in metrics.items(): - wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch}) - - return metrics - else: - return metrics diff --git a/spaces/fffiloni/instant-TTS-Bark-cloning/README.md b/spaces/fffiloni/instant-TTS-Bark-cloning/README.md deleted file mode 100644 index 900b006e17eeab3c485722571d87875989c12aa3..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/instant-TTS-Bark-cloning/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Coqui Bark Voice Cloning -emoji: 🐸🐶 -colorFrom: yellow -colorTo: gray -python_version: 3.10.12 -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/fffiloni/sd-img-variations/app.py b/spaces/fffiloni/sd-img-variations/app.py deleted file mode 100644 index a1f219ab043065d045d5e5f3451e55305c787aba..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/sd-img-variations/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import gradio as gr -import torch -from PIL import Image - -from lambda_diffusers import StableDiffusionImageEmbedPipeline - -def ask(input_im, scale, steps, seed, images): - images = images - generator = torch.Generator(device=device).manual_seed(int(seed)) - - images_list = pipe( - 2*[input_im], - guidance_scale=scale, - num_inference_steps=steps, - generator=generator, - ) - - for i, image in enumerate(images_list["sample"]): - if(images_list["nsfw_content_detected"][i]): - safe_image = Image.open(r"unsafe.png") - images.append(safe_image) - else: - images.append(image) - return images - -def main(input_im, n_pairs, scale, steps, seed): - print('Start the magic !') - images = [] - for i in range(n_pairs): - print('Asking for a new pair of image [' + str(i + 1) + '/' + str(n_pairs) + ']') - seed = seed+i - images = ask(input_im, scale, steps, seed, images) - print('Thanks to Sylvain, it worked like a charm!') - return images - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = StableDiffusionImageEmbedPipeline.from_pretrained( - "lambdalabs/sd-image-variations-diffusers", - revision="273115e88df42350019ef4d628265b8c29ef4af5", - ) -pipe = pipe.to(device) - -inputs = [ - gr.Image(), - gr.Slider(1, 3, value=2, step=1, label="Pairs of images to ask"), - gr.Slider(0, 25, value=3, step=1, label="Guidance scale"), - gr.Slider(5, 50, value=25, step=5, label="Steps"), - gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True) -] -output = gr.Gallery(label="Generated variations") -output.style(grid=2, height="") - -description = \ -""" -

      This demo is running on CPU. Working version fixed by Sylvain @fffiloni. You'll get n pairs of image variations.
      -Asking for pairs of images instead of more than 2 images in a row helps us avoid heavy CPU load and connection errors ;)
      -Waiting time (for 2 pairs): ~5/10 minutes • NSFW filters enabled • visitor badge
      -Generate variations on an input image using a fine-tuned version of Stable Diffusion.
      -Trained by Justin Pinkney (@Buntworthy) at Lambda
      -This version has been ported to 🤗 Diffusers library, see more details on how to use this version in the Lambda Diffusers repo.
      -For the original training code see this repo. - -

      -""" - -article = \ -""" -— -## How does this work? -The normal Stable Diffusion model is trained to be conditioned on text input. This version has had the original text encoder (from CLIP) removed, and replaced with -the CLIP _image_ encoder instead. So instead of generating images based a text input, images are generated to match CLIP's embedding of the image. -This creates images which have the same rough style and content, but different details, in particular the composition is generally quite different. -This is a totally different approach to the img2img script of the original Stable Diffusion and gives very different results. -The model was fine tuned on the [LAION aethetics v2 6+ dataset](https://laion.ai/blog/laion-aesthetics/) to accept the new conditioning. -Training was done on 4xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud). -More details on the method and training will come in a future blog post. -""" - -demo = gr.Interface( - fn=main, - title="Stable Diffusion Image Variations", - inputs=inputs, - outputs=output, - description=description, - article=article - ) -demo.launch() diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" deleted file mode 100644 index 49f41b18b986d229d4dd91aa6a0be74dee6d1296..0000000000000000000000000000000000000000 --- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" +++ /dev/null @@ -1,310 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import input_clipping - -def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import os, copy - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - msg = '正常' - inputs_array = [] - inputs_show_user_array = [] - history_array = [] - sys_prompt_array = [] - report_part_1 = [] - - assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。" - ############################## <第一步,逐个文件分析,多线程> ################################## - for index, fp in enumerate(file_manifest): - # 读取文件 - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - prefix = "接下来请你逐文件分析下面的工程" if index==0 else "" - i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - # 装载请求内容 - inputs_array.append(i_say) - inputs_show_user_array.append(i_say_show_user) - history_array.append([]) - sys_prompt_array.append("你是一个程序架构分析师,正在分析一个源代码项目。你的回答必须简单明了。") - - # 文件读取完成,对每一个源代码文件,生成一个请求线程,发送到chatgpt进行分析 - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array = inputs_array, - inputs_show_user_array = inputs_show_user_array, - history_array = history_array, - sys_prompt_array = sys_prompt_array, - llm_kwargs = llm_kwargs, - chatbot = chatbot, - show_user_at_complete = True - ) - - # 全部文件解析完成,结果写入文件,准备对工程源代码进行汇总分析 - report_part_1 = 
copy.deepcopy(gpt_response_collection) - history_to_return = report_part_1 - res = write_results_to_file(report_part_1) - chatbot.append(("完成?", "逐个文件分析已完成。" + res + "\n\n正在开始汇总。")) - yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面 - - ############################## <第二步,综合,单线程,分组+迭代处理> ################################## - batchsize = 16 # 10个文件为一组 - report_part_2 = [] - previous_iteration_files = [] - last_iteration_result = "" - while True: - if len(file_manifest) == 0: break - this_iteration_file_manifest = file_manifest[:batchsize] - this_iteration_gpt_response_collection = gpt_response_collection[:batchsize*2] - file_rel_path = [os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)] - # 把“请对下面的程序文件做一个概述” 替换成 精简的 "文件名:{all_file[index]}" - for index, content in enumerate(this_iteration_gpt_response_collection): - if index%2==0: this_iteration_gpt_response_collection[index] = f"{file_rel_path[index//2]}" # 只保留文件名节省token - previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]) - previous_iteration_files_string = ', '.join(previous_iteration_files) - current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]) - i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。' - inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。' - this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection) - this_iteration_history.append(last_iteration_result) - # 裁剪input - inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560) - result = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot, - history=this_iteration_history_feed, # 迭代之前的分析 - sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。") - report_part_2.extend([i_say, result]) - last_iteration_result = result - - file_manifest = file_manifest[batchsize:] - gpt_response_collection = gpt_response_collection[batchsize*2:] - - ############################## ################################## - history_to_return.extend(report_part_2) - res = write_results_to_file(history_to_return) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面 - - -@CatchException -def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]+ \ - [f for f in glob.glob('./request_llm/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - project_folder = './' - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - -@CatchException -def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = 
'空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] #+ \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - -@CatchException -def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.jar', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 
解析一个Rect项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.js', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何Rect文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.mod', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.sum', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.work', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.lua', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in 
glob.glob(f'{project_folder}/**/*.cs', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.csproj', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - txt_pattern = plugin_kwargs.get("advanced_arg") - txt_pattern = txt_pattern.replace(",", ",") - # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml) - pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")] - if not pattern_include: pattern_include = ["*"] # 不输入即全部匹配 - # 将要忽略匹配的文件后缀(例如: ^*.c, ^*.cpp, ^*.py) - pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")] - pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件 - # 将要忽略匹配的文件名(例如: ^README.md) - pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")] - # 生成正则表达式 - pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$' - pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else '' - - history.clear() - import glob, os, re - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - # 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件 - maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)] - if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'): - extract_folder_path = maybe_dir[0] - else: - extract_folder_path = project_folder - # 按输入的匹配模式寻找上传的非压缩文件和已解压的文件 - file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \ - os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) \ No newline at end of file diff --git a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py b/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py deleted file mode 100644 index 676546fa95c86f36584846cda85955e2d40c12a1..0000000000000000000000000000000000000000 --- a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/vae.py +++ /dev/null @@ -1,30 +0,0 @@ -import jax.numpy as jnp -import flax.linen as nn - -from t5_vae_flax_alt.src.encoders import VAE_ENCODER_MODELS -from t5_vae_flax_alt.src.decoders import VAE_DECODER_MODELS -from t5_vae_flax_alt.src.config import T5VaeConfig - - -class VAE(nn.Module): - # see https://github.com/google/flax#what-does-flax-look-like - """ - An MMD-VAE used with encoder-decoder models. - Encodes all token encodings into a single latent & spits them back out. 
- """ - config: T5VaeConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.encoder = VAE_ENCODER_MODELS[self.config.vae_encoder_model](self.config.latent_token_size, self.config.n_latent_tokens) - self.decoder = VAE_DECODER_MODELS[self.config.vae_decoder_model](self.config.t5.d_model, self.config.n_latent_tokens) - - def __call__(self, encoding=None, latent_codes=None): - latent_codes = self.encode(encoding) - return self.decode(latent_codes), latent_codes - - def encode(self, encoding): - return self.encoder(encoding) - - def decode(self, latent): - return self.decoder(latent) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py deleted file mode 100644 index b0745707fc12872c52a96f82eaf1ab1f204f9a40..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/manual_control.py +++ /dev/null @@ -1,115 +0,0 @@ -#!/usr/bin/env python3 -raise DeprecationWarning("Use the one in ./scipts") - -import time -import argparse -import numpy as np -import gym -import gym_minigrid -from gym_minigrid.wrappers import * -from gym_minigrid.window import Window - -def redraw(img): - if not args.agent_view: - img = env.render('rgb_array', tile_size=args.tile_size) - - window.show_img(img) - -def reset(): - if args.seed != -1: - env.seed(args.seed) - - obs = env.reset() - - if hasattr(env, 'mission'): - print('Mission: %s' % env.mission) - window.set_caption(env.mission) - - redraw(obs) - -def step(action): - obs, reward, done, info = env.step(action) - print('step=%s, reward=%.2f' % (env.step_count, reward)) - - if done: - print('done!') - reset() - else: - redraw(obs) - -def key_handler(event): - print('pressed', event.key) - - if event.key == 'escape': - window.close() - return - - if event.key == 'backspace': - reset() - return - - if event.key == 'left': - step(env.actions.left) - return - if event.key == 'right': - step(env.actions.right) - return - if event.key == 'up': - step(env.actions.forward) - return - - # Spacebar - if event.key == ' ': - step(env.actions.toggle) - return - if event.key == 'pageup': - step(env.actions.pickup) - return - if event.key == 'pagedown': - step(env.actions.drop) - return - - if event.key == 'enter': - step(env.actions.done) - return - -parser = argparse.ArgumentParser() -parser.add_argument( - "--env", - help="gym environment to load", - default='MiniGrid-MultiRoom-N6-v0' -) -parser.add_argument( - "--seed", - type=int, - help="random seed to generate the environment with", - default=-1 -) -parser.add_argument( - "--tile_size", - type=int, - help="size at which to render tiles", - default=32 -) -parser.add_argument( - '--agent_view', - default=False, - help="draw the agent sees (partially observable view)", - action='store_true' -) - -args = parser.parse_args() - -env = gym.make(args.env) - -if args.agent_view: - env = RGBImgPartialObsWrapper(env) - env = ImgObsWrapper(env) - -window = Window('gym_minigrid - ' + args.env) -window.reg_key_handler(key_handler) - -reset() - -# Blocking event loop -window.show(block=True) diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py deleted file mode 100644 index d78375ce7e62b634c82e163c693a5557b8e2f860..0000000000000000000000000000000000000000 --- a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Aichat.py +++ /dev/null @@ -1,35 +0,0 @@ -import requests -import os -import json -from 
...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/chat-stream', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning -from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from annotator.uniformer.mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. 
') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. 
- If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. - """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' 
- self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. - """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. 
') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' - - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. 
- - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. - """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py deleted file mode 100644 index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/stare.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/godelbach/onlyjitz/app.py b/spaces/godelbach/onlyjitz/app.py deleted file mode 100644 index 0808d24bdd57d04255c469cdba802ddf388f28a6..0000000000000000000000000000000000000000 --- a/spaces/godelbach/onlyjitz/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import gradio as gr -from fastai.vision.all import * - -categories = ("Armbar", "Triangle") - - -learn = load_learner("model.pkl") - -image = gr.Image(shape=(224, 224)) - -examples = ["images/armbar1.jpeg", "images/armbar2.jpeg", "images/armbar3.webp", "images/armbar4.png", "images/flying_armbar.jpeg", - "images/triangle.jpeg", "images/triangle2.webp", "images/triangle3.jpeg", "images/triangle4.jpeg", "images/triangle_armbar1.jpeg"] - - -def image_classifier(img): - _, _, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - - -app = gr.Interface(fn=image_classifier, inputs=image, - outputs="label", examples=examples) -app.launch() diff --git a/spaces/gradio/HuBERT/examples/criss/README.md b/spaces/gradio/HuBERT/examples/criss/README.md deleted file mode 100644 index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/criss/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Cross-lingual Retrieval for Iterative Self-Supervised Training - -https://arxiv.org/pdf/2006.09526.pdf - -## Introduction - -CRISS is a multilingual sequence-to-sequnce pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. 
- -## Requirements: - -* faiss: https://github.com/facebookresearch/faiss -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* flores: https://github.com/facebookresearch/flores -* LASER: https://github.com/facebookresearch/LASER - -## Unsupervised Machine Translation -##### 1. Download and decompress CRISS checkpoints -``` -cd examples/criss -wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz -tar -xf criss_checkpoints.tar.gz -``` -##### 2. Download and preprocess Flores test dataset -Make sure to run all scripts from examples/criss directory -``` -bash download_and_preprocess_flores_test.sh -``` - -##### 3. Run Evaluation on Sinhala-English -``` -bash unsupervised_mt/eval.sh -``` - -## Sentence Retrieval -##### 1. Download and preprocess Tatoeba dataset -``` -bash download_and_preprocess_tatoeba.sh -``` - -##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English -``` -bash sentence_retrieval/sentence_retrieval_tatoeba.sh -``` - -## Mining -##### 1. Install faiss -Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md -##### 2. Mine pseudo-parallel data between Kazakh and English -``` -bash mining/mine_example.sh -``` - -## Citation -```bibtex -@article{tran2020cross, - title={Cross-lingual retrieval for iterative self-supervised training}, - author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao}, - journal={arXiv preprint arXiv:2006.09526}, - year={2020} -} -``` diff --git a/spaces/gradio/HuBERT/scripts/sacrebleu.sh b/spaces/gradio/HuBERT/scripts/sacrebleu.sh deleted file mode 100644 index c10bf2b76ea032deabab6f5c9d8a3e1e884f1642..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/scripts/sacrebleu.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -if [ $# -ne 4 ]; then - echo "usage: $0 TESTSET SRCLANG TGTLANG GEN" - exit 1 -fi - -TESTSET=$1 -SRCLANG=$2 -TGTLANG=$3 - -GEN=$4 - -if ! 
command -v sacremoses &> /dev/null -then - echo "sacremoses could not be found, please install with: pip install sacremoses" - exit -fi - -grep ^H $GEN \ -| sed 's/^H\-//' \ -| sort -n -k 1 \ -| cut -f 3 \ -| sacremoses detokenize \ -> $GEN.sorted.detok - -sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok diff --git a/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py b/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py deleted file mode 100644 index 1408ddd7c909c7257fbcea79f8576231a40f9211..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/GFP_GAN/tests/test_gfpgan_model.py +++ /dev/null @@ -1,132 +0,0 @@ -import tempfile -import torch -import yaml -from basicsr.archs.stylegan2_arch import StyleGAN2Discriminator -from basicsr.data.paired_image_dataset import PairedImageDataset -from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from gfpgan.archs.arcface_arch import ResNetArcFace -from gfpgan.archs.gfpganv1_arch import FacialComponentDiscriminator, GFPGANv1 -from gfpgan.models.gfpgan_model import GFPGANModel - - -def test_gfpgan_model(): - with open('tests/data/test_gfpgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = GFPGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'GFPGANModel' - assert isinstance(model.net_g, GFPGANv1) # generator - assert isinstance(model.net_d, StyleGAN2Discriminator) # discriminator - # facial component discriminators - assert isinstance(model.net_d_left_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_right_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_mouth, FacialComponentDiscriminator) - # identity network - assert isinstance(model.network_identity, ResNetArcFace) - # losses - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.cri_l1, L1Loss) - # optimizer - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 512, 512), dtype=torch.float32) - lq = torch.rand((1, 3, 512, 512), dtype=torch.float32) - loc_left_eye = torch.rand((1, 4), dtype=torch.float32) - loc_right_eye = torch.rand((1, 4), dtype=torch.float32) - loc_mouth = torch.rand((1, 4), dtype=torch.float32) - data = dict(gt=gt, lq=lq, loc_left_eye=loc_left_eye, loc_right_eye=loc_right_eye, loc_mouth=loc_mouth) - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 512, 512) - assert model.gt.shape == (1, 3, 512, 512) - assert model.loc_left_eyes.shape == (1, 4) - assert model.loc_right_eyes.shape == (1, 4) - assert model.loc_mouths.shape == (1, 4) - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- remove pyramid_loss_weight-------------------- # - model.feed_data(data) - model.optimize_parameters(100000) # large than remove_pyramid_loss = 
50000 - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- test save -------------------- # - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['models'] = tmpdir - model.opt['path']['training_states'] = tmpdir - model.save(0, 1) - - # ----------------- test the test function -------------------- # - model.test() - assert model.output.shape == (1, 3, 512, 512) - # delete net_g_ema - model.__delattr__('net_g_ema') - model.test() - assert model.output.shape == (1, 3, 512, 512) - assert model.net_g.training is True # should back to training mode after testing - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/gt', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - assert model.is_train is True - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # validation - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['is_train'] = False - model.opt['val']['suffix'] = 'test' - model.opt['path']['visualization'] = tmpdir - model.opt['val']['pbar'] = True - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # if opt['val']['suffix'] is None - model.opt['val']['suffix'] = None - model.opt['name'] = 'demo' - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py b/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py deleted file mode 100644 index b4d7c4fb13d1541f9d11af92a76cc859d71f5547..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/training/dataset.py +++ /dev/null @@ -1,244 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. 
- -"""Streaming images and labels from datasets created with dataset_tool.py.""" - -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib - -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. - self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - self._raw_labels_std = self._raw_labels.std(0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - def get_label_std(self): - return self._raw_labels_std - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - 
def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - **super_kwargs, # Additional arguments for the Dataset base class. - ): - self._path = path - self._zipfile = None - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - 
-#---------------------------------------------------------------------------- diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py deleted file mode 100644 index c5e907be6703ccc43f263b4c40f7d1b84bc47755..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/helpers.py +++ /dev/null @@ -1,145 +0,0 @@ -from collections import namedtuple -import torch -import torch.nn.functional as F -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. """ - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError( - "Invalid number of layers: {}. 
Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, - kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, - kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), - 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, - bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -def _upsample_add(x, y): - """Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. 
- """ - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py deleted file mode 100644 index 51e80c5e296b24cae130ab0459baf268e0db7673..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/utils.py +++ /dev/null @@ -1,60 +0,0 @@ -from itertools import repeat -import collections.abc - -from torch import nn as nn -from torchvision.ops.misc import FrozenBatchNorm2d - - -def freeze_batch_norm_2d(module, module_match={}, name=''): - """ - Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is - itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and - returned. Otherwise, the module is walked recursively and submodules are converted in place. - - Args: - module (torch.nn.Module): Any PyTorch module. 
- module_match (dict): Dictionary of full module names to freeze (all if empty) - name (str): Full module name (prefix) - - Returns: - torch.nn.Module: Resulting module - - Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 - """ - res = module - is_match = True - if module_match: - is_match = name in module_match - if is_match and isinstance(module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)): - res = FrozenBatchNorm2d(module.num_features) - res.num_features = module.num_features - res.affine = module.affine - if module.affine: - res.weight.data = module.weight.data.clone().detach() - res.bias.data = module.bias.data.clone().detach() - res.running_mean.data = module.running_mean.data - res.running_var.data = module.running_var.data - res.eps = module.eps - else: - for child_name, child in module.named_children(): - full_child_name = '.'.join([name, child_name]) if name else child_name - new_child = freeze_batch_norm_2d(child, module_match, full_child_name) - if new_child is not child: - res.add_module(child_name, new_child) - return res - - -# From PyTorch internals -def _ntuple(n): - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = lambda n, x: _ntuple(n)(x) diff --git a/spaces/hanstyle/tts/wav2lip_train.py b/spaces/hanstyle/tts/wav2lip_train.py deleted file mode 100644 index 6e0811808af55464a803be1e268be33f1b8a31a9..0000000000000000000000000000000000000000 --- a/spaces/hanstyle/tts/wav2lip_train.py +++ /dev/null @@ -1,374 +0,0 @@ -from os.path import dirname, join, basename, isfile -from tqdm import tqdm - -from models import SyncNet_color as SyncNet -from models import Wav2Lip as Wav2Lip -import audio - -import torch -from torch import nn -from torch import optim -import torch.backends.cudnn as cudnn -from torch.utils import data as data_utils -import numpy as np - -from glob import glob - -import os, random, cv2, argparse -from hparams import hparams, get_image_list - -parser = argparse.ArgumentParser(description='Code to train the Wav2Lip model without the visual quality discriminator') - -parser.add_argument("--data_root", help="Root folder of the preprocessed LRS2 dataset", required=True, type=str) - -parser.add_argument('--checkpoint_dir', help='Save checkpoints to this directory', required=True, type=str) -parser.add_argument('--syncnet_checkpoint_path', help='Load the pre-trained Expert discriminator', required=True, type=str) - -parser.add_argument('--checkpoint_path', help='Resume from this checkpoint', default=None, type=str) - -args = parser.parse_args() - - -global_step = 0 -global_epoch = 0 -use_cuda = torch.cuda.is_available() -print('use_cuda: {}'.format(use_cuda)) - -syncnet_T = 5 -syncnet_mel_step_size = 16 - -class Dataset(object): - def __init__(self, split): - self.all_videos = get_image_list(args.data_root, split) - - def get_frame_id(self, frame): - return int(basename(frame).split('.')[0]) - - def get_window(self, start_frame): - start_id = self.get_frame_id(start_frame) - vidname = dirname(start_frame) - - window_fnames = [] - for frame_id in range(start_id, start_id + syncnet_T): - frame = join(vidname, '{}.jpg'.format(frame_id)) - if not isfile(frame): - return None - window_fnames.append(frame) - return window_fnames - - def read_window(self, window_fnames): - if window_fnames is 
None: return None - window = [] - for fname in window_fnames: - img = cv2.imread(fname) - if img is None: - return None - try: - img = cv2.resize(img, (hparams.img_size, hparams.img_size)) - except Exception as e: - return None - - window.append(img) - - return window - - def crop_audio_window(self, spec, start_frame): - if type(start_frame) == int: - start_frame_num = start_frame - else: - start_frame_num = self.get_frame_id(start_frame) # 0-indexing ---> 1-indexing - start_idx = int(80. * (start_frame_num / float(hparams.fps))) - - end_idx = start_idx + syncnet_mel_step_size - - return spec[start_idx : end_idx, :] - - def get_segmented_mels(self, spec, start_frame): - mels = [] - assert syncnet_T == 5 - start_frame_num = self.get_frame_id(start_frame) + 1 # 0-indexing ---> 1-indexing - if start_frame_num - 2 < 0: return None - for i in range(start_frame_num, start_frame_num + syncnet_T): - m = self.crop_audio_window(spec, i - 2) - if m.shape[0] != syncnet_mel_step_size: - return None - mels.append(m.T) - - mels = np.asarray(mels) - - return mels - - def prepare_window(self, window): - # 3 x T x H x W - x = np.asarray(window) / 255. - x = np.transpose(x, (3, 0, 1, 2)) - - return x - - def __len__(self): - return len(self.all_videos) - - def __getitem__(self, idx): - while 1: - idx = random.randint(0, len(self.all_videos) - 1) - vidname = self.all_videos[idx] - img_names = list(glob(join(vidname, '*.jpg'))) - if len(img_names) <= 3 * syncnet_T: - continue - - img_name = random.choice(img_names) - wrong_img_name = random.choice(img_names) - while wrong_img_name == img_name: - wrong_img_name = random.choice(img_names) - - window_fnames = self.get_window(img_name) - wrong_window_fnames = self.get_window(wrong_img_name) - if window_fnames is None or wrong_window_fnames is None: - continue - - window = self.read_window(window_fnames) - if window is None: - continue - - wrong_window = self.read_window(wrong_window_fnames) - if wrong_window is None: - continue - - try: - wavpath = join(vidname, "audio.wav") - wav = audio.load_wav(wavpath, hparams.sample_rate) - - orig_mel = audio.melspectrogram(wav).T - except Exception as e: - continue - - mel = self.crop_audio_window(orig_mel.copy(), img_name) - - if (mel.shape[0] != syncnet_mel_step_size): - continue - - indiv_mels = self.get_segmented_mels(orig_mel.copy(), img_name) - if indiv_mels is None: continue - - window = self.prepare_window(window) - y = window.copy() - window[:, :, window.shape[2]//2:] = 0. 
- - wrong_window = self.prepare_window(wrong_window) - x = np.concatenate([window, wrong_window], axis=0) - - x = torch.FloatTensor(x) - mel = torch.FloatTensor(mel.T).unsqueeze(0) - indiv_mels = torch.FloatTensor(indiv_mels).unsqueeze(1) - y = torch.FloatTensor(y) - return x, indiv_mels, mel, y - -def save_sample_images(x, g, gt, global_step, checkpoint_dir): - x = (x.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - g = (g.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - gt = (gt.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - - refs, inps = x[..., 3:], x[..., :3] - folder = join(checkpoint_dir, "samples_step{:09d}".format(global_step)) - if not os.path.exists(folder): os.mkdir(folder) - collage = np.concatenate((refs, inps, g, gt), axis=-2) - for batch_idx, c in enumerate(collage): - for t in range(len(c)): - cv2.imwrite('{}/{}_{}.jpg'.format(folder, batch_idx, t), c[t]) - -logloss = nn.BCELoss() -def cosine_loss(a, v, y): - d = nn.functional.cosine_similarity(a, v) - loss = logloss(d.unsqueeze(1), y) - - return loss - -device = torch.device("cuda" if use_cuda else "cpu") -syncnet = SyncNet().to(device) -for p in syncnet.parameters(): - p.requires_grad = False - -recon_loss = nn.L1Loss() -def get_sync_loss(mel, g): - g = g[:, :, :, g.size(3)//2:] - g = torch.cat([g[:, :, i] for i in range(syncnet_T)], dim=1) - # B, 3 * T, H//2, W - a, v = syncnet(mel, g) - y = torch.ones(g.size(0), 1).float().to(device) - return cosine_loss(a, v, y) - -def train(device, model, train_data_loader, test_data_loader, optimizer, - checkpoint_dir=None, checkpoint_interval=None, nepochs=None): - - global global_step, global_epoch - resumed_step = global_step - - while global_epoch < nepochs: - print('Starting Epoch: {}'.format(global_epoch)) - running_sync_loss, running_l1_loss = 0., 0. - prog_bar = tqdm(enumerate(train_data_loader)) - for step, (x, indiv_mels, mel, gt) in prog_bar: - model.train() - optimizer.zero_grad() - - # Move data to CUDA device - x = x.to(device) - mel = mel.to(device) - indiv_mels = indiv_mels.to(device) - gt = gt.to(device) - - g = model(indiv_mels, x) - - if hparams.syncnet_wt > 0.: - sync_loss = get_sync_loss(mel, g) - else: - sync_loss = 0. - - l1loss = recon_loss(g, gt) - - loss = hparams.syncnet_wt * sync_loss + (1 - hparams.syncnet_wt) * l1loss - loss.backward() - optimizer.step() - - if global_step % checkpoint_interval == 0: - save_sample_images(x, g, gt, global_step, checkpoint_dir) - - global_step += 1 - cur_session_steps = global_step - resumed_step - - running_l1_loss += l1loss.item() - if hparams.syncnet_wt > 0.: - running_sync_loss += sync_loss.item() - else: - running_sync_loss += 0. 
- - if global_step == 1 or global_step % checkpoint_interval == 0: - save_checkpoint( - model, optimizer, global_step, checkpoint_dir, global_epoch) - - if global_step == 1 or global_step % hparams.eval_interval == 0: - with torch.no_grad(): - average_sync_loss = eval_model(test_data_loader, global_step, device, model, checkpoint_dir) - - if average_sync_loss < .75: - hparams.set_hparam('syncnet_wt', 0.01) # without image GAN a lesser weight is sufficient - - prog_bar.set_description('L1: {}, Sync Loss: {}'.format(running_l1_loss / (step + 1), - running_sync_loss / (step + 1))) - - global_epoch += 1 - - -def eval_model(test_data_loader, global_step, device, model, checkpoint_dir): - eval_steps = 700 - print('Evaluating for {} steps'.format(eval_steps)) - sync_losses, recon_losses = [], [] - step = 0 - while 1: - for x, indiv_mels, mel, gt in test_data_loader: - step += 1 - model.eval() - - # Move data to CUDA device - x = x.to(device) - gt = gt.to(device) - indiv_mels = indiv_mels.to(device) - mel = mel.to(device) - - g = model(indiv_mels, x) - - sync_loss = get_sync_loss(mel, g) - l1loss = recon_loss(g, gt) - - sync_losses.append(sync_loss.item()) - recon_losses.append(l1loss.item()) - - if step > eval_steps: - averaged_sync_loss = sum(sync_losses) / len(sync_losses) - averaged_recon_loss = sum(recon_losses) / len(recon_losses) - - print('L1: {}, Sync loss: {}'.format(averaged_recon_loss, averaged_sync_loss)) - - return averaged_sync_loss - -def save_checkpoint(model, optimizer, step, checkpoint_dir, epoch): - - checkpoint_path = join( - checkpoint_dir, "checkpoint_step{:09d}.pth".format(global_step)) - optimizer_state = optimizer.state_dict() if hparams.save_optimizer_state else None - torch.save({ - "state_dict": model.state_dict(), - "optimizer": optimizer_state, - "global_step": step, - "global_epoch": epoch, - }, checkpoint_path) - print("Saved checkpoint:", checkpoint_path) - - -def _load(checkpoint_path): - if use_cuda: - checkpoint = torch.load(checkpoint_path) - else: - checkpoint = torch.load(checkpoint_path, - map_location=lambda storage, loc: storage) - return checkpoint - -def load_checkpoint(path, model, optimizer, reset_optimizer=False, overwrite_global_states=True): - global global_step - global global_epoch - - print("Load checkpoint from: {}".format(path)) - checkpoint = _load(path) - s = checkpoint["state_dict"] - new_s = {} - for k, v in s.items(): - new_s[k.replace('module.', '')] = v - model.load_state_dict(new_s) - if not reset_optimizer: - optimizer_state = checkpoint["optimizer"] - if optimizer_state is not None: - print("Load optimizer state from {}".format(path)) - optimizer.load_state_dict(checkpoint["optimizer"]) - if overwrite_global_states: - global_step = checkpoint["global_step"] - global_epoch = checkpoint["global_epoch"] - - return model - -if __name__ == "__main__": - checkpoint_dir = args.checkpoint_dir - - # Dataset and Dataloader setup - train_dataset = Dataset('train') - test_dataset = Dataset('val') - - train_data_loader = data_utils.DataLoader( - train_dataset, batch_size=hparams.batch_size, shuffle=True, - num_workers=hparams.num_workers) - - test_data_loader = data_utils.DataLoader( - test_dataset, batch_size=hparams.batch_size, - num_workers=4) - - device = torch.device("cuda" if use_cuda else "cpu") - - # Model - model = Wav2Lip().to(device) - print('total trainable params {}'.format(sum(p.numel() for p in model.parameters() if p.requires_grad))) - - optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad], - 
lr=hparams.initial_learning_rate) - - if args.checkpoint_path is not None: - load_checkpoint(args.checkpoint_path, model, optimizer, reset_optimizer=False) - - load_checkpoint(args.syncnet_checkpoint_path, syncnet, None, reset_optimizer=True, overwrite_global_states=False) - - if not os.path.exists(checkpoint_dir): - os.mkdir(checkpoint_dir) - - # Train! - train(device, model, train_data_loader, test_data_loader, optimizer, - checkpoint_dir=checkpoint_dir, - checkpoint_interval=hparams.checkpoint_interval, - nepochs=hparams.nepochs) diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py deleted file mode 100644 index 4c11ab3decec6f30f46fcd6121a3cfd5bc7957c2..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/autoanchor.py +++ /dev/null @@ -1,169 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -AutoAnchor utils -""" - -import random - -import numpy as np -import torch -import yaml -from tqdm import tqdm - -from utils import TryExcept -from utils.general import LOGGER, TQDM_BAR_FORMAT, colorstr - -PREFIX = colorstr('AutoAnchor: ') - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da and (da.sign() != ds.sign()): # same order - LOGGER.info(f'{PREFIX}Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - - -@TryExcept(f'{PREFIX}ERROR') -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1 / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1 / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1 / thr).float().mean() # best possible recall - return bpr, aat - - stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides - anchors = m.anchors.clone() * stride # current anchors - bpr, aat = metric(anchors.cpu().view(-1, 2)) - s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). 
' - if bpr > 0.98: # threshold to recompute - LOGGER.info(f'{s}Current anchors are a good fit to dataset ✅') - else: - LOGGER.info(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...') - na = m.anchors.numel() // 2 # number of anchors - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - new_bpr = metric(anchors)[0] - if new_bpr > bpr: # replace anchors - anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchors[:] = anchors.clone().view_as(m.anchors) - check_anchor_order(m) # must be in pixel-space (not grid-space) - m.anchors /= stride - s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)' - else: - s = f'{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)' - LOGGER.info(s) - - -def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - dataset: path to data.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - from scipy.cluster.vq import kmeans - - npr = np.random - thr = 1 / thr - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1 / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k, verbose=True): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \ - f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \ - f'past_thr={x[x > thr].mean():.3f}-mean: ' - for x in k: - s += '%i,%i, ' % (round(x[0]), round(x[1])) - if verbose: - LOGGER.info(s[:-2]) - return k - - if isinstance(dataset, str): # *.yaml file - with open(dataset, errors='ignore') as f: - data_dict = yaml.safe_load(f) # model dict - from utils.dataloaders import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - LOGGER.info(f'{PREFIX}WARNING ⚠️ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size') - wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels - # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans init - try: - LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...') - assert n <= len(wh) # apply overdetermined constraint - s = wh.std(0) # sigmas for whitening - k = kmeans(wh / s, n, iter=30)[0] * s # points - assert n == len(k) # kmeans may return fewer points than requested if wh is 
insufficient or too similar - except Exception: - LOGGER.warning(f'{PREFIX}WARNING ⚠️ switching strategies from kmeans to random init') - k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init - wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0)) - k = print_results(k, verbose=False) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), bar_format=TQDM_BAR_FORMAT) # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k, verbose) - - return print_results(k).astype(np.float32) diff --git a/spaces/henryezell/freewilly/app.py b/spaces/henryezell/freewilly/app.py deleted file mode 100644 index 8be47e7462d04255ee691ae31eeae8b73920f87b..0000000000000000000000000000000000000000 --- a/spaces/henryezell/freewilly/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/FreeWilly2").launch() \ No newline at end of file diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py deleted file mode 100644 index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py +++ /dev/null @@ -1,31 +0,0 @@ -import ONNXVITS_models -import utils -from text import text_to_sequence -import torch -import commons - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") -symbols = hps.symbols -net_g = ONNXVITS_models.SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("ありがとうございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.tensor([0]) - o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1) \ No newline at end of file diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py deleted file mode 100644 index 093d8826809aa2681f6088174427337a59e0c882..0000000000000000000000000000000000000000 --- 
a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/mandarin.py +++ /dev/null @@ -1,329 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - -logging.getLogger('jieba').setLevel(logging.WARNING) -jieba.initialize() - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 
'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py deleted file mode 100644 index 6689ca39d92ece151aa27e93692b17e665a80075..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/front_change.py +++ /dev/null @@ -1,228 +0,0 @@ -import cv2 - -import numpy as np -import os -import plotly.express as px -import plotly.figure_factory as ff -import datetime -import plotly.io 
as pio -import plotly.graph_objs as go - -pio.kaleido.scope.mathjax = None -import math -# import pylab -from matplotlib.colors import LinearSegmentedColormap -from PIL import ImageColor - - -def distribute_glacier(list_of_samples): - list_of_glaciers = {} - for glacier in ['JAC']: - #for glacier in [ 'COL', 'Mapple', 'Crane', 'Jorum','DBE','SI', 'JAC']: - list_of_glaciers[glacier] = [sample for sample in list_of_samples if glacier in sample] - return list_of_glaciers - - -def create_dict(list_of_samples): - list_dict = [] - for sample in list_of_samples: - sample_split = sample.split('_') - finish_date = datetime.datetime.fromisoformat(sample_split[1]) + datetime.timedelta(days=50) - sample_dict = { - 'Glacier': sample_split[0], - 'Start': sample_split[1], - 'Finish': str(finish_date), - 'Satellite:': sample_split[2] - } - list_dict.append(sample_dict) - return list_dict - - -if __name__ == '__main__': - train_dir = '/home/ho11laqe/PycharmProjects/data_raw/fronts/train/' - test_dir = '/home/ho11laqe/PycharmProjects/data_raw/fronts/test/' - - list_of_train_samples = os.listdir(train_dir) - list_of_test_samples = os.listdir(test_dir) - list_of_samples = list_of_train_samples + list_of_test_samples - list_of_glaciers = distribute_glacier(list_of_samples) - list_dict = create_dict(list_of_samples) - - # define color map - colormap = px.colors.sequential.Reds[-1::-1] - for glacier in list_of_glaciers: - print(glacier) - list_of_glaciers[glacier].sort() - - - if glacier in ['COL', 'Mapple']: - data_directory = test_dir - image_directory = '/home/ho11laqe/PycharmProjects/data_raw/sar_images/test/' - else: - data_directory = train_dir - image_directory = '/home/ho11laqe/PycharmProjects/data_raw/sar_images/train/' - - - # define SAR blackground image - if glacier == 'COL': - canvas = cv2.imread(image_directory + 'COL_2011-11-13_TDX_7_1_092.png') - shape = canvas.shape - - elif glacier == 'JAC': - canvas = cv2.imread(image_directory + 'JAC_2009-06-21_TSX_6_1_005.png') - shape = canvas.shape - - elif glacier == 'Jorum': - canvas = cv2.imread(image_directory + 'Jorum_2011-09-04_TSX_7_4_034.png') - shape = canvas.shape - - elif glacier == 'Mapple': - canvas = cv2.imread(image_directory + 'Mapple_2008-10-13_TSX_7_2_034.png') - shape = canvas.shape - - elif glacier == 'SI': - canvas = cv2.imread(image_directory + 'SI_2013-08-14_TSX_7_1_125.png') - - elif glacier == 'Crane': - canvas = cv2.imread(image_directory + 'Crane_2008-10-13_TSX_7_3_034.png') - - elif glacier == 'DBE': - canvas = cv2.imread(image_directory + 'DBE_2008-03-30_TSX_7_3_049.png') - - else: - print('No image for background') - exit() - - number_images = len(list_of_glaciers[glacier]) - kernel = np.ones((3, 3), np.uint8) - - # iterate over all fronts of one glacier - for i, image_name in enumerate(list_of_glaciers[glacier]): - front = cv2.imread(data_directory + image_name) - - # if front label has to be resized to fit background image - # the front is not dilated. 
- if front.shape != canvas.shape: - front = cv2.resize(front, (shape[1], shape[0])) - - else: - front = cv2.dilate(front, kernel) - - # color interpolation based on position in dataset - # TODO based on actual date - index = (1 - i / number_images) * (len(colormap) - 1) - up = math.ceil(index) - down = up - 1 - color_up = np.array(ImageColor.getcolor(colormap[up], 'RGB')) - color_down = np.array(ImageColor.getcolor(colormap[down], 'RGB')) - dif = up - down - color = color_up * (1 - dif) + color_down * dif - - # draw front on canvas - non_zeros = np.nonzero(front) - canvas[non_zeros[:2]] = np.uint([color for _ in non_zeros[0]]) - - #scale reference for fontsize - ref_x = 15000 / 7 - - if glacier == 'COL': - image = canvas[750:, 200:2800] - new_shape = image.shape - res = 7 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0]- int(80 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.4, 4.4], - ticktext=['2011
<br>      (+0.8°C)', '2020<br>
      (+1.2°C)'], - outlinewidth=0) - - elif glacier == 'Mapple': - image = canvas - new_shape = image.shape - res = 7 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2006', '2020 '], - outlinewidth=0) - - elif glacier == 'Crane': - image = canvas[:2500,:] - new_shape = image.shape - res = 7 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2002', '2014'], - outlinewidth=0) - - elif glacier == 'Jorum': - image = canvas#[200:1600, 1500:] - new_shape = image.shape - res = 7 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(240 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['2003', '2020'], - outlinewidth=0) - - elif glacier == 'DBE': - image = canvas[700:, 750:] - new_shape = image.shape - res = 7 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(150 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.7, 4.7], ticktext=['1995', '2014'], - outlinewidth=0) - - elif glacier == 'SI': - image = canvas - new_shape = image.shape - res = 7 - scale = new_shape[0] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(240 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.8, 4.8], ticktext=['1995', '2014'], - outlinewidth=0) - - elif glacier == 'JAC': - image = canvas[:, :] - new_shape = image.shape - res = 6 - scale = new_shape[1] / ref_x - fig = px.imshow(image, height=new_shape[0] - int(340 * scale), width=new_shape[1]) - legend = dict(thickness=int(50 * scale), tickvals=[-4.6, 4.7], - ticktext=['2009
<br>      (+0.7°C)', '2015<br>
      (+0.9°C)'], - outlinewidth=0) - else: - fig = px.imshow(canvas) - res = 7 - scale = 1 - - colorbar_trace = go.Scatter(x=[None], - y=[None], - mode='markers', - marker=dict( - colorscale=colormap[::-1], - showscale=True, - cmin=-5, - cmax=5, - colorbar=legend - ), - hoverinfo='none' - ) - fig.update_layout(yaxis=dict(tickmode='array', - tickvals=[0, 5000 / res, 10000 / res, 15000 / res, 20000 / res, 25000 / res], - ticktext=[0, 5, 10, 15, 20, 25], - title='km')) - fig.update_layout(xaxis=dict(tickmode='array', - tickvals=[0, 5000 / res, 10000 / res, 15000 / res, 20000 / res, 25000 / res], - ticktext=[0, 5, 10, 15, 20, 25], - title='km')) - - fig.update_xaxes(tickfont=dict(size=int(40 * scale))) - fig.update_yaxes(tickfont=dict(size=int(40 * scale))) - fig.update_layout(font=dict(size=int(60 * scale), family="Computer Modern")) - fig.update_coloraxes(colorbar_x=0) - fig['layout']['xaxis']['title']['font']['size'] = int(60 * scale) - fig['layout']['yaxis']['title']['font']['size'] = int(60 * scale) - - fig['layout']['showlegend'] = False - fig.add_trace(colorbar_trace) - fig.write_image('output/' + glacier + "_front_change.pdf", format='pdf') - # fig.show() \ No newline at end of file diff --git a/spaces/hra/GPT4-makes-BabyAGI/README.md b/spaces/hra/GPT4-makes-BabyAGI/README.md deleted file mode 100644 index e93cb2ef6fae4bcff7e254e0d5adbefdcd3b059c..0000000000000000000000000000000000000000 --- a/spaces/hra/GPT4-makes-BabyAGI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT4 Makes BabyAGI -emoji: 📊 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py b/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/hrdtbs/rvc-mochinoa/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - 
return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts 
b/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts deleted file mode 100644 index 3d8f924eba449978957a62c39c7406f819edf49a..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/lib/utils/chunk.ts +++ /dev/null @@ -1,33 +0,0 @@ -/** - * Chunk array into arrays of length at most `chunkSize` - * - * @param chunkSize must be greater than or equal to 1 - */ -export function chunk(arr: T, chunkSize: number): T[] { - if (isNaN(chunkSize) || chunkSize < 1) { - throw new RangeError("Invalid chunk size: " + chunkSize); - } - - if (!arr.length) { - return []; - } - - /// Small optimization to not chunk buffers unless needed - if (arr.length <= chunkSize) { - return [arr]; - } - - return range(Math.ceil(arr.length / chunkSize)).map((i) => { - return arr.slice(i * chunkSize, (i + 1) * chunkSize); - }) as T[]; -} - -function range(n: number, b?: number): number[] { - return b - ? Array(b - n) - .fill(0) - .map((_, i) => n + i) - : Array(n) - .fill(0) - .map((_, i) => i); -} diff --git a/spaces/hyuan5040/ChatWithSpeech/app.py b/spaces/hyuan5040/ChatWithSpeech/app.py deleted file mode 100644 index 122358ddec17831bbfea06dd04fd346cb77b5da4..0000000000000000000000000000000000000000 --- a/spaces/hyuan5040/ChatWithSpeech/app.py +++ /dev/null @@ -1,177 +0,0 @@ -import tempfile -import gradio as gr -import openai -from neon_tts_plugin_coqui import CoquiTTS - -def Question(Ask_Question): - # pass the generated text to audio - openai.api_key = "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" - # Set up the model and prompt - model_engine = "text-davinci-003" - #prompt = "who is alon musk?" - # Generate a response - completion = openai.Completion.create( - engine=model_engine, - prompt=Ask_Question, - max_tokens=1024, - n=1, - stop=None, - temperature=0.5,) - response = completion.choices[0].text - #out_result=resp['message'] - return response - -LANGUAGES = list(CoquiTTS.langs.keys()) -default_lang = "en" -import telnetlib -#import whisper -#whisper_model = whisper.load_model("small") -whisper = gr.Interface.load(name="spaces/sanchit-gandhi/whisper-large-v2") -#chatgpt = gr.Blocks.load(name="spaces/fffiloni/whisper-to-chatGPT") -import os -import json -session_token = os.environ.get('SessionToken') -#api_endpoint = os.environ.get('API_EndPoint') -# ChatGPT -#from revChatGPT.ChatGPT import Chatbot -#chatbot = Chatbot({"session_token": session_token}) # You can start a custom conversation -import asyncio -from pygpt import PyGPT - -title = "Speech to ChatGPT to Speech" -#info = "more info at [Neon Coqui TTS Plugin](https://github.com/NeonGeckoCom/neon-tts-plugin-coqui), [Coqui TTS](https://github.com/coqui-ai/TTS)" -#badge = "https://visitor-badge-reloaded.herokuapp.com/badge?page_id=neongeckocom.neon-tts-plugin-coqui" -coquiTTS = CoquiTTS() -chat_id = {'conversation_id': None, 'parent_id': None} -headers = {'Authorization': 'yusin'} - -async def chat_gpt_ask(prompt): - chat_gpt = PyGPT(session_token) - await chat_gpt.connect() - await chat_gpt.wait_for_ready() - answer = await chat_gpt.ask(prompt) - print(answer) - await chat_gpt.disconnect() - -# ChatGPT -def chat_hf(audio, custom_token, language): - #output = chatgpt(audio, "transcribe", fn_index=0) - #whisper_text, gpt_response = output[0], output[1] - try: - whisper_text = translate(audio) - if whisper_text == "ERROR: You have to either use the microphone or upload an audio file": - gpt_response = "MISSING AUDIO: Record your voice by clicking the microphone button, do not forget to stop recording before sending your message ;)" - 
else: - #gpt_response = chatbot.ask(whisper_text, conversation_id=conversation_id, parent_id=None) - gpt_response = asyncio.run(chat_gpt_ask(whisper_text, id='yusin')) - #if chat_id['conversation_id'] != None: - # data = {"content": whisper_text, "conversation_id": chat_id['conversation_id'], "parent_id": chat_id['parent_id']} - #else: - # data = {"content": whisper_text} - #print(data) - #res = requests.get('http://myip.ipip.net', timeout=5).text - #print(res) - #response = requests.post('api_endpoint', headers=headers, json=data, verify=False, timeout=5) - #print('this is my answear', response.text) - #chat_id['parent_id'] = response.json()["response_id"] - #chat_id['conversation_id'] = response.json()["conversation_id"] - #gpt_response = response.json()["content"] - #response = requests.get('https://api.pawan.krd/chat/gpt?text=' + whisper_text + '&cache=false', verify=False, timeout=5) - #print(response.text) - - #whisper_text = translate(audio) - #api = ChatGPT(session_token) - #resp = api.send_message(whisper_text) - - #api.refresh_auth() # refresh the authorization token - #api.reset_conversation() # reset the conversation - #gpt_response = resp['message'] - - except: - whisper_text = translate(audio) - gpt_response = """Sorry, I'm quite busy right now, but please try again later :)""" - #whisper_text = translate(audio) - #api = ChatGPT(custom_token) - #resp = api.send_message(whisper_text) - - #api.refresh_auth() # refresh the authorization token - #api.reset_conversation() # reset the conversation - #gpt_response = resp['message'] - - ## call openai - gpt_response = Question(whisper_text) - - # to voice - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - coquiTTS.get_tts(gpt_response, fp, speaker = {"language" : language}) - - return whisper_text, gpt_response, fp.name - -# whisper -#def translate(audio): -# print(""" -# — -# Sending audio to Whisper ... -# — -# """) -# -# audio = whisper.load_audio(audio) -# audio = whisper.pad_or_trim(audio) -# -# mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device) -# -# _, probs = whisper_model.detect_language(mel) -# -# transcript_options = whisper.DecodingOptions(task="transcribe", fp16 = False) -# -# transcription = whisper.decode(whisper_model, mel, transcript_options) -# -# print("language spoken: " + transcription.language) -# print("transcript: " + transcription.text) -# print("———————————————————————————————————————————") -# -# return transcription.text - -def translate(audio): - print(""" - — - Sending audio to Whisper ... - — - """) - - text_result = whisper(audio, None, "transcribe", fn_index=0) - #print(text_result) - return text_result - - -with gr.Blocks() as blocks: - gr.Markdown("

      " - + title - + "

      ") - #gr.Markdown(description) - radio = gr.Radio(label="Language",choices=LANGUAGES,value=default_lang) - with gr.Row(equal_height=True):# equal_height=False - with gr.Column():# variant="panel" - audio_file = gr.Audio(source="microphone",type="filepath") - custom_token = gr.Textbox(label='If it fails, use your own session token', placeholder="your own session token") - with gr.Row():# mobile_collapse=False - submit = gr.Button("Submit", variant="primary") - with gr.Column(): - text1 = gr.Textbox(label="Speech to Text") - text2 = gr.Textbox(label="ChatGPT Response") - audio = gr.Audio(label="Output", interactive=False) - #gr.Markdown(info) - #gr.Markdown("
      " - # +f'visitors badge' - # +"
      ") - - # actions - submit.click( - chat_hf, - [audio_file, custom_token, radio], - [text1, text2, audio], - ) - radio.change(lambda lang: CoquiTTS.langs[lang]["sentence"], radio, text2) - - -blocks.launch(debug=True) diff --git a/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py b/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py deleted file mode 100644 index 9a9761c518a1b07c5996165869742af0a52c82bc..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/data/__init__.py +++ /dev/null @@ -1,116 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- : (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import numpy as np -import importlib -import torch.utils.data -from face3d.data.base_dataset import BaseDataset - - -def find_dataset_using_name(dataset_name): - """Import the module "data/[dataset_name]_dataset.py". - - In the file, the class called DatasetNameDataset() will - be instantiated. It has to be a subclass of BaseDataset, - and it is case-insensitive. - """ - dataset_filename = "data." + dataset_name + "_dataset" - datasetlib = importlib.import_module(dataset_filename) - - dataset = None - target_dataset_name = dataset_name.replace('_', '') + 'dataset' - for name, cls in datasetlib.__dict__.items(): - if name.lower() == target_dataset_name.lower() \ - and issubclass(cls, BaseDataset): - dataset = cls - - if dataset is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name)) - - return dataset - - -def get_option_setter(dataset_name): - """Return the static method of the dataset class.""" - dataset_class = find_dataset_using_name(dataset_name) - return dataset_class.modify_commandline_options - - -def create_dataset(opt, rank=0): - """Create a dataset given the option. - - This function wraps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from data import create_dataset - >>> dataset = create_dataset(opt) - """ - data_loader = CustomDatasetDataLoader(opt, rank=rank) - dataset = data_loader.load_data() - return dataset - -class CustomDatasetDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, opt, rank=0): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. 
- """ - self.opt = opt - dataset_class = find_dataset_using_name(opt.dataset_mode) - self.dataset = dataset_class(opt) - self.sampler = None - print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__)) - if opt.use_ddp and opt.isTrain: - world_size = opt.world_size - self.sampler = torch.utils.data.distributed.DistributedSampler( - self.dataset, - num_replicas=world_size, - rank=rank, - shuffle=not opt.serial_batches - ) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - sampler=self.sampler, - num_workers=int(opt.num_threads / world_size), - batch_size=int(opt.batch_size / world_size), - drop_last=True) - else: - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batch_size, - shuffle=(not opt.serial_batches) and opt.isTrain, - num_workers=int(opt.num_threads), - drop_last=True - ) - - def set_epoch(self, epoch): - self.dataset.current_epoch = epoch - if self.sampler is not None: - self.sampler.set_epoch(epoch) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), self.opt.max_dataset_size) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.opt.batch_size >= self.opt.max_dataset_size: - break - yield data diff --git a/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md b/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md deleted file mode 100644 index cdad5e983a71b2f089312822ff2cad8d78b7b93a..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bibliotecacon65534librosenespaolEPUB67GBSerialKey Access the Largest Collection of Spanish eBooks.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Bibliotecacon65534librosenespaolEPUB67GBSerialKey


DOWNLOAD https://gohhs.com/2uz3yN



      -
      - aaccfb2cb3
      -
      -
      -

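The face3d/data/__init__.py docstring above spells out the contract for registering a custom dataset: a dummy_dataset.py module exposing a DummyDataset subclass of BaseDataset that implements __init__, __len__, __getitem__ and modify_commandline_options, selected at runtime via --dataset_mode dummy. A minimal sketch of such a module, under those assumptions, might look like the following; the --dummy_param option, the placeholder sample list, and the (parser, is_train) signature are illustrative guesses, not part of the original package:

# dummy_dataset.py -- hypothetical example following the interface described
# in face3d/data/__init__.py; names below are illustrative only.
from face3d.data.base_dataset import BaseDataset


class DummyDataset(BaseDataset):
    """Minimal dataset picked up when '--dataset_mode dummy' is passed."""

    @staticmethod
    def modify_commandline_options(parser, is_train):
        # optionally add dataset-specific options and set defaults
        # (the (parser, is_train) signature is assumed here)
        parser.add_argument("--dummy_param", type=int, default=0)
        return parser

    def __init__(self, opt):
        # first call the base initializer, as the package docstring requires
        BaseDataset.__init__(self, opt)
        self.samples = list(range(100))  # placeholder data for illustration

    def __len__(self):
        # return the size of the dataset
        return len(self.samples)

    def __getitem__(self, index):
        # return one data point consumed by the data loader
        return {"value": self.samples[index]}

With such a file in place, create_dataset(opt) with opt.dataset_mode == 'dummy' would locate the class through find_dataset_using_name and wrap it in CustomDatasetDataLoader, as shown in the package code above.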
      diff --git a/spaces/inamXcontru/PoeticTTS/CRACK DVD X Copy Platinum 3.2.1.md b/spaces/inamXcontru/PoeticTTS/CRACK DVD X Copy Platinum 3.2.1.md deleted file mode 100644 index 122f7b1a21fc24523d185274761f58dc90110b04..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/CRACK DVD X Copy Platinum 3.2.1.md +++ /dev/null @@ -1,16 +0,0 @@ -

      CRACK DVD X Copy Platinum 3.2.1


      Download ··· https://gohhs.com/2uz2QO



      - -7 Apr To X Copy your DVDs on DVD-R/RW or DVD-9, use this GUI or command-line application, also known as dvd xcopy. - -XCopy is the Free version of DVD/CD XCopy, a powerful and easy-to-use DVD burning software with ability to copy DVD and Blu-ray discs and. 19 Sep Use XCopy to copy a DVD disc with subtitles to a new blank disc and vice versa. I need it for my parents so that they can watch their. DVD X Copy is a powerful and easy-to-use DVD copying software that supports burning Blu-ray discs and copying DVDs and.The present invention relates to the field of lithium secondary batteries, and more particularly, to a nonaqueous electrolyte for lithium secondary batteries capable of improving battery safety and manufacturing method thereof. - -With the development of mobile electronic appliances, such as mobile phones, camcorders, and notebook computers, the demand for small, light-weight, and high-capacity secondary batteries used as power sources is rapidly increasing. Among secondary batteries developed so far, lithium secondary batteries are a great advantage since they can realize high energy density and high discharge voltage, when compared to other types of secondary batteries. Accordingly, lithium secondary batteries are widely used as power sources for various applications. - -A lithium secondary battery is prepared by injecting a nonaqueous electrolyte obtained by dissolving a lithium salt in a nonaqueous solvent into an electrode assembly, and then placing the electrode assembly in a battery case together with a lithium foil, a collection of lithiated carbon, or the like, which functions as an anode. - -Among lithium secondary batteries developed so far, a lithium-ion battery includes a cathode, an anode, and an electrolyte in which a nonaqueous solvent is dissolved. At this time, when the nonaqueous solvent in the electrolyte is decomposed during an overcharge or an overdischarge of the battery, the nonaqueous solvent is transformed to generate gas. As a result, the pressure of the electrolyte increases so that a battery safety problem may occur. For this reason, nonaqueous solvents having excellent thermal stability at high temperature are required. In addition, safety performance should be further improved and processability should be improved. - -Meanwhile, when a battery is charged or discharged, lithium ions are electrochemically transferred between the anode and the cathode via the electrolyte. At this time, the stability of the electrolyte is important. 4fefd39f24
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md deleted file mode 100644 index c565a8c093e9ce2d6c365b4cd0e218d2eaee5e80..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fast And Furious 8 (English) Video Songs Hd 1080p Blu-ray Download Movie [BEST].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Fast And Furious 8 (English) video songs hd 1080p blu-ray download movie


      Download ::: https://urlin.us/2uEyN3



      -
      -Download Fast & Furious 8 Hind Dubbed 720p & 480p &. 1080p~ hdmoviesflix.in.. Fast And Furious 8 (English) video songs hd 1080p blu-ray download movie. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Anjaane The Unknown Download !EXCLUSIVE! 1080p Movie.md b/spaces/inreVtussa/clothingai/Examples/Anjaane The Unknown Download !EXCLUSIVE! 1080p Movie.md deleted file mode 100644 index 59900677e92be1b24c3a5439a4a2f6292811bc37..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Anjaane The Unknown Download !EXCLUSIVE! 1080p Movie.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Anjaane The Unknown Download 1080p Movie


Download Zip https://tiurll.com/2uCjuv



      - -Bangla Song full Movie Download kickass torrent 1080p HD , . . Wake Up Sid ... Anjaane - The Unknown tamil dubbed movie mp3 songs download. Shiva Ka . 1fdad05405
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Contos Animados Tufos Baixar 4shared [UPDATED].md b/spaces/inreVtussa/clothingai/Examples/Contos Animados Tufos Baixar 4shared [UPDATED].md deleted file mode 100644 index 84fc066952c3af1a691ad354bb9e036cc6d0a5ff..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Contos Animados Tufos Baixar 4shared [UPDATED].md +++ /dev/null @@ -1,10 +0,0 @@ -

      Contos Animados Tufos Baixar 4shared


Download Zip https://tiurll.com/2uClxV



      - -Aug 24, 2020 — AV Music Morpher Gold 4.0.60 With Serial Crack Morpher Gold 4.0.60 with Serial Key Free Download ... -Ace Stream Media Video Editor for PC free download ... -Oct 13, 2019 — AV Music Morpher Gold 4.0.60 + Crack is a powerful and free music software that allows you to modify your music files ... -Oct 29, 2019 — AV Music Morpher Gold 4.0.60 with Serial key Free Download ... -Oct 11, 2019 — AV Music Morpher Gold 4.0.60 with Serial key Free Download ... 8a78ff9644
      -
      -
      -

      diff --git a/spaces/ipvikas/ALL_NLP_Tasks/OCR_Image_to_Text.py b/spaces/ipvikas/ALL_NLP_Tasks/OCR_Image_to_Text.py deleted file mode 100644 index 314a4e0747d091bca30890eb4654329bf463b662..0000000000000000000000000000000000000000 --- a/spaces/ipvikas/ALL_NLP_Tasks/OCR_Image_to_Text.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -import os -#from googletrans import Translator -import easyocr -#import PIL -from PIL import Image -from PIL import ImageDraw - -def Get_OCR_demo(image): - reader = easyocr.Reader(['en']) #IMP 'hi' - translator = Translator() - - #image_file =Image.open(image,mode = 'r') - im = Image.open(image,mode = 'r') - #im =Image.open(image,mode = 'r') - text_list = reader.readtext(im,add_margin = 0.55,width_ths=0.7, link_threshold=0.8,decoder='beamsearch', blocklist='=-',detail = 0 ) - - #text_list = reader.readtext(image_file,add_margin = 0.55,width_ths=0.7, link_threshold=0.8,decoder='beamsearch', blocklist='=-',detail = 0 ) - - text_comb =' '.join(text_list) #changed into a single line - return text_comb - - -title = "Upload an image and extract Text from it" -description = "OCR tool for text extraction" -examples=[["english.png"],["Upload Parag_Letter_j.jpg"]] - -get_OCR_demo = gr.Interface(fn=Get_OCR_demo, inputs="image",outputs=['text'],title = title,description=description,examples=[["english.png","Parag_Letter.jfif"]],cache_examples=False) -# if __name__ == "__main__": -# demo.launch() \ No newline at end of file diff --git a/spaces/iqovocn/ChuanhuChatGPT/Dockerfile b/spaces/iqovocn/ChuanhuChatGPT/Dockerfile deleted file mode 100644 index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -COPY requirements_advanced.txt . -RUN pip install --user -r requirements.txt -# RUN pip install --user -r requirements_advanced.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/isabel/club-project/reader.py b/spaces/isabel/club-project/reader.py deleted file mode 100644 index 2089f121665bf06f1c4d8a54d78df7b435b01ae9..0000000000000000000000000000000000000000 --- a/spaces/isabel/club-project/reader.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -from yattag import Doc -## --------------------------------- ### -### reading: info.txt ### -### -------------------------------- ### -# placeholders in case info.txt does not exist -def get_article(acc, most_imp_feat): - filename = "info.txt" - placeholder = "please create an info.txt to customize this text" - note = "**Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. An accuracy of 50% means that half of the model's predictions for that dataset were accurate. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world." 
- - title = bkgd = data_collection = priv_cons = bias_cons = img_src = membs = description = placeholder - # check if info.txt is present - if os.path.isfile(filename): - # open info.txt in read mode - info = open(filename, "r") - - # read each line to a string - description = "An AI project created by " + info.readline() - title = info.readline() - bkgd = info.readline() - data_collection = info.readline() - priv_cons = info.readline() - bias_cons = info.readline() - img_src = info.readline() - membs = info.readline() - - # close file - info.close() - - # use yattag library to generate html - doc, tag, text, line = Doc().ttl() - # create html based on info.txt - with tag('div'): - with tag('div', klass='box model-container'): - with tag('div', klass='spacer'): - with tag('div', klass='box model-div'): - line('h2', "Model Accuracy", klass='acc') - line('p', acc) - with tag('div', klass='box model-div'): - line('h2', "Most Important Feature", klass='feat') - line('p', most_imp_feat) - with tag('div', klass='spacer'): - line('p', note) - with tag('div', klass='box'): - line('h2', 'Problem Statement and Research Summary', klass='prj') - line('p', bkgd) - with tag('div', klass='box'): - line('h2', 'Data Collection Plan', klass='data') - line('p', data_collection) - with tag('div', klass='box'): - line('h2', 'Ethical Considerations (Data Privacy and Bias)', klass='ethics') - with tag('ul'): - line('li', priv_cons) - line('li', bias_cons) - with tag('div', klass='box'): - line('h2', 'Our Team', klass='team') - line('p', membs) - doc.stag('img', src=img_src) - - css = ''' - .box { - border: 2px solid black; - text-align: center; - margin: 10px; - padding: 5%; - } - ul { - display: inline-block; - text-align: left; - } - img { - display: block; - margin: auto; - } - .description { - text-align: center; - } - .panel_button { - display: block !important; - width: 100% !important; - background-color: #00EACD !important; - color: #000; - transition: all .2s ease-out 0s !important; - box-shadow: 0 10px #00AEAB !important; - border-radius: 10px !important; - } - .panel_button:hover { - box-shadow: 0 5px #00AEAB; - transform: translateY(5px); - } - .submit { - color: black !important; - } - .selected { - background-color: #656bd6 !important; - } - .radio_item { - border-radius: 10px; - padding-left: 10px !important; - padding-right: 10px !important; - } - .radio_item:hover { - color: #656bd6 !important; - } - .title { - background-image: url(https://media.giphy.com/media/26BROrSHlmyzzHf3i/giphy.gif); - background-size: cover; - color: transparent; - -moz-background-clip: text; - -webkit-background-clip: text; - text-transform: uppercase; - font-size: 60px; - line-height: .75; - margin: 10px 0; - } - .panel_header { - color: black !important; - } - input { - background-color: #efeffa !important; - } - .acc, .feat { - background-color: #FF3399 !important - } - .prj { - background-color: #FFCE3B !important; - } - .data { - background-color: #ED6800 !important; - } - .ethics { - background-color: #3EE6F9 !important; - } - .team { - background-color: #9581EF !important; - } - .model-container { - display: flex; - flex-direction: column; - justify-content: center; - } - .spacer { - display: flex; - justify-content: center; - } - .model-div { - width: 45%; - } - @media screen and (max-width: 700px) { - .model-container { - flex-wrap: wrap; - } - } - ''' - return { - 'article': doc.getvalue(), - 'css': css, - 'title': title, - 'description': description, - } \ No newline at end of file diff --git 
a/spaces/jbilcke-hf/Panoremix/src/lib/fonts.ts b/spaces/jbilcke-hf/Panoremix/src/lib/fonts.ts deleted file mode 100644 index 75afdee901dd6d17526aac7d6801b007d12fe752..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/Panoremix/src/lib/fonts.ts +++ /dev/null @@ -1,29 +0,0 @@ -import { Ubuntu } from "next/font/google" -import localFont from "next/font/local" - -export const actionman = localFont({ - src: "../fonts/Action-Man/Action-Man.woff2", - variable: "--font-action-man" -}) - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts -// If loading a variable font, you don"t need to specify the font weight -export const fonts = { - actionman, - // ubuntu: Ubuntu -} - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts -// If loading a variable font, you don"t need to specify the font weight -export const fontList = Object.keys(fonts) - -export type FontName = keyof typeof fonts - -export const defaultFont = "actionman" as FontName - -export const classNames = Object.values(fonts).map(font => font.className) - -export const className = classNames.join(" ") - -export type FontClass = - | "font-actionman" diff --git a/spaces/jdinh/freeze-detection/README.md b/spaces/jdinh/freeze-detection/README.md deleted file mode 100644 index 9591b4fb021abd680d05d67e9ead8719f12e5163..0000000000000000000000000000000000000000 --- a/spaces/jdinh/freeze-detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pose Detection -emoji: 🔥 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/TiffImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/TiffImagePlugin.py deleted file mode 100644 index d5148828506b36c72bac626b2032ebf129a62678..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/TiffImagePlugin.py +++ /dev/null @@ -1,2163 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TIFF file handling -# -# TIFF is a flexible, if somewhat aged, image file format originally -# defined by Aldus. Although TIFF supports a wide variety of pixel -# layouts and compression methods, the name doesn't really stand for -# "thousands of incompatible file formats," it just feels that way. -# -# To read TIFF data from a stream, the stream must be seekable. For -# progressive decoding, make sure to use TIFF files where the tag -# directory is placed first in the file. -# -# History: -# 1995-09-01 fl Created -# 1996-05-04 fl Handle JPEGTABLES tag -# 1996-05-18 fl Fixed COLORMAP support -# 1997-01-05 fl Fixed PREDICTOR support -# 1997-08-27 fl Added support for rational tags (from Perry Stoll) -# 1998-01-10 fl Fixed seek/tell (from Jan Blom) -# 1998-07-15 fl Use private names for internal variables -# 1999-06-13 fl Rewritten for PIL 1.0 (1.0) -# 2000-10-11 fl Additional fixes for Python 2.0 (1.1) -# 2001-04-17 fl Fixed rewind support (seek to frame 0) (1.2) -# 2001-05-12 fl Added write support for more tags (from Greg Couch) (1.3) -# 2001-12-18 fl Added workaround for broken Matrox library -# 2002-01-18 fl Don't mess up if photometric tag is missing (D. 
Alan Stewart) -# 2003-05-19 fl Check FILLORDER tag -# 2003-09-26 fl Added RGBa support -# 2004-02-24 fl Added DPI support; fixed rational write support -# 2005-02-07 fl Added workaround for broken Corel Draw 10 files -# 2006-01-09 fl Added support for float/double tags (from Russell Nelson) -# -# Copyright (c) 1997-2006 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -import io -import itertools -import logging -import math -import os -import struct -import warnings -from collections.abc import MutableMapping -from fractions import Fraction -from numbers import Number, Rational - -from . import ExifTags, Image, ImageFile, ImageOps, ImagePalette, TiffTags -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from .TiffTags import TYPES - -logger = logging.getLogger(__name__) - -# Set these to true to force use of libtiff for reading or writing. -READ_LIBTIFF = False -WRITE_LIBTIFF = False -IFD_LEGACY_API = True -STRIP_SIZE = 65536 - -II = b"II" # little-endian (Intel style) -MM = b"MM" # big-endian (Motorola style) - -# -# -------------------------------------------------------------------- -# Read TIFF files - -# a few tag names, just to make the code below a bit more readable -IMAGEWIDTH = 256 -IMAGELENGTH = 257 -BITSPERSAMPLE = 258 -COMPRESSION = 259 -PHOTOMETRIC_INTERPRETATION = 262 -FILLORDER = 266 -IMAGEDESCRIPTION = 270 -STRIPOFFSETS = 273 -SAMPLESPERPIXEL = 277 -ROWSPERSTRIP = 278 -STRIPBYTECOUNTS = 279 -X_RESOLUTION = 282 -Y_RESOLUTION = 283 -PLANAR_CONFIGURATION = 284 -RESOLUTION_UNIT = 296 -TRANSFERFUNCTION = 301 -SOFTWARE = 305 -DATE_TIME = 306 -ARTIST = 315 -PREDICTOR = 317 -COLORMAP = 320 -TILEWIDTH = 322 -TILELENGTH = 323 -TILEOFFSETS = 324 -TILEBYTECOUNTS = 325 -SUBIFD = 330 -EXTRASAMPLES = 338 -SAMPLEFORMAT = 339 -JPEGTABLES = 347 -YCBCRSUBSAMPLING = 530 -REFERENCEBLACKWHITE = 532 -COPYRIGHT = 33432 -IPTC_NAA_CHUNK = 33723 # newsphoto properties -PHOTOSHOP_CHUNK = 34377 # photoshop properties -ICCPROFILE = 34675 -EXIFIFD = 34665 -XMP = 700 -JPEGQUALITY = 65537 # pseudo-tag by libtiff - -# https://github.com/imagej/ImageJA/blob/master/src/main/java/ij/io/TiffDecoder.java -IMAGEJ_META_DATA_BYTE_COUNTS = 50838 -IMAGEJ_META_DATA = 50839 - -COMPRESSION_INFO = { - # Compression => pil compression name - 1: "raw", - 2: "tiff_ccitt", - 3: "group3", - 4: "group4", - 5: "tiff_lzw", - 6: "tiff_jpeg", # obsolete - 7: "jpeg", - 8: "tiff_adobe_deflate", - 32771: "tiff_raw_16", # 16-bit padding - 32773: "packbits", - 32809: "tiff_thunderscan", - 32946: "tiff_deflate", - 34676: "tiff_sgilog", - 34677: "tiff_sgilog24", - 34925: "lzma", - 50000: "zstd", - 50001: "webp", -} - -COMPRESSION_INFO_REV = {v: k for k, v in COMPRESSION_INFO.items()} - -OPEN_INFO = { - # (ByteOrder, PhotoInterpretation, SampleFormat, FillOrder, BitsPerSample, - # ExtraSamples) => mode, rawmode - (II, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (MM, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (II, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (MM, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (II, 1, (1,), 1, (1,), ()): ("1", "1"), - (MM, 1, (1,), 1, (1,), ()): ("1", "1"), - (II, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (MM, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (II, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (MM, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (II, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (MM, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (II, 1, (1,), 1, (2,), ()): ("L", "L;2"), - (MM, 1, 
(1,), 1, (2,), ()): ("L", "L;2"), - (II, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (MM, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (II, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (MM, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (II, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (MM, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (II, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (MM, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (II, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (MM, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (II, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (MM, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (II, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (MM, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (II, 1, (1,), 1, (8,), ()): ("L", "L"), - (MM, 1, (1,), 1, (8,), ()): ("L", "L"), - (II, 1, (2,), 1, (8,), ()): ("L", "L"), - (MM, 1, (2,), 1, (8,), ()): ("L", "L"), - (II, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (MM, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (II, 1, (1,), 1, (12,), ()): ("I;16", "I;12"), - (II, 0, (1,), 1, (16,), ()): ("I;16", "I;16"), - (II, 1, (1,), 1, (16,), ()): ("I;16", "I;16"), - (MM, 1, (1,), 1, (16,), ()): ("I;16B", "I;16B"), - (II, 1, (1,), 2, (16,), ()): ("I;16", "I;16R"), - (II, 1, (2,), 1, (16,), ()): ("I", "I;16S"), - (MM, 1, (2,), 1, (16,), ()): ("I", "I;16BS"), - (II, 0, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 0, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (32,), ()): ("I", "I;32N"), - (II, 1, (2,), 1, (32,), ()): ("I", "I;32S"), - (MM, 1, (2,), 1, (32,), ()): ("I", "I;32BS"), - (II, 1, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 1, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (MM, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (II, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (MM, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (II, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (MM, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (II, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (MM, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (II, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (MM, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (II, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16L"), - (MM, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16L"), - (MM, 2, 
(1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16B"), - (II, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (MM, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (II, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (MM, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (II, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (MM, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (II, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (MM, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (II, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (MM, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (II, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (MM, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (II, 3, (1,), 1, (8,), ()): ("P", "P"), - (MM, 3, (1,), 1, (8,), ()): ("P", "P"), - (II, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (MM, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (II, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (MM, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (II, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"), - (MM, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (II, 5, (1,), 1, (16, 16, 16, 16), ()): ("CMYK", "CMYK;16L"), - # JPEG compressed images handled by LibTiff and auto-converted to RGBX - # Minimal Baseline TIFF requires YCbCr images to have 3 SamplesPerPixel - (II, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (MM, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (II, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), - (MM, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), -} - -MAX_SAMPLESPERPIXEL = max(len(key_tp[4]) for key_tp in OPEN_INFO) - -PREFIXES = [ - b"MM\x00\x2A", # Valid TIFF header with big-endian byte order - b"II\x2A\x00", # Valid TIFF header with little-endian byte order - b"MM\x2A\x00", # Invalid TIFF header, assume big-endian - b"II\x00\x2A", # Invalid TIFF header, assume little-endian - b"MM\x00\x2B", # BigTIFF with big-endian byte order - b"II\x2B\x00", # BigTIFF with little-endian byte order -] - - -def _accept(prefix): - return prefix[:4] in PREFIXES - - -def _limit_rational(val, max_val): - inv = abs(val) > 1 - n_d = IFDRational(1 / val if inv else val).limit_rational(max_val) - return n_d[::-1] if inv else n_d - - -def _limit_signed_rational(val, max_val, min_val): - frac = Fraction(val) - n_d = frac.numerator, frac.denominator - - if min(n_d) < min_val: - n_d = _limit_rational(val, abs(min_val)) - - if max(n_d) > max_val: - val = Fraction(*n_d) - n_d = _limit_rational(val, max_val) - - return n_d - - -## -# Wrapper for TIFF IFDs. - -_load_dispatch = {} -_write_dispatch = {} - - -class IFDRational(Rational): - """Implements a rational class where 0/0 is a legal value to match - the in the wild use of exif rationals. - - e.g., DigitalZoomRatio - 0.00/0.00 indicates that no digital zoom was used - """ - - """ If the denominator is 0, store this as a float('nan'), otherwise store - as a fractions.Fraction(). 
Delegate as appropriate - - """ - - __slots__ = ("_numerator", "_denominator", "_val") - - def __init__(self, value, denominator=1): - """ - :param value: either an integer numerator, a - float/rational/other number, or an IFDRational - :param denominator: Optional integer denominator - """ - if isinstance(value, IFDRational): - self._numerator = value.numerator - self._denominator = value.denominator - self._val = value._val - return - - if isinstance(value, Fraction): - self._numerator = value.numerator - self._denominator = value.denominator - else: - self._numerator = value - self._denominator = denominator - - if denominator == 0: - self._val = float("nan") - elif denominator == 1: - self._val = Fraction(value) - else: - self._val = Fraction(value, denominator) - - @property - def numerator(self): - return self._numerator - - @property - def denominator(self): - return self._denominator - - def limit_rational(self, max_denominator): - """ - - :param max_denominator: Integer, the maximum denominator value - :returns: Tuple of (numerator, denominator) - """ - - if self.denominator == 0: - return self.numerator, self.denominator - - f = self._val.limit_denominator(max_denominator) - return f.numerator, f.denominator - - def __repr__(self): - return str(float(self._val)) - - def __hash__(self): - return self._val.__hash__() - - def __eq__(self, other): - val = self._val - if isinstance(other, IFDRational): - other = other._val - if isinstance(other, float): - val = float(val) - return val == other - - def __getstate__(self): - return [self._val, self._numerator, self._denominator] - - def __setstate__(self, state): - IFDRational.__init__(self, 0) - _val, _numerator, _denominator = state - self._val = _val - self._numerator = _numerator - self._denominator = _denominator - - def _delegate(op): - def delegate(self, *args): - return getattr(self._val, op)(*args) - - return delegate - - """ a = ['add','radd', 'sub', 'rsub', 'mul', 'rmul', - 'truediv', 'rtruediv', 'floordiv', 'rfloordiv', - 'mod','rmod', 'pow','rpow', 'pos', 'neg', - 'abs', 'trunc', 'lt', 'gt', 'le', 'ge', 'bool', - 'ceil', 'floor', 'round'] - print("\n".join("__%s__ = _delegate('__%s__')" % (s,s) for s in a)) - """ - - __add__ = _delegate("__add__") - __radd__ = _delegate("__radd__") - __sub__ = _delegate("__sub__") - __rsub__ = _delegate("__rsub__") - __mul__ = _delegate("__mul__") - __rmul__ = _delegate("__rmul__") - __truediv__ = _delegate("__truediv__") - __rtruediv__ = _delegate("__rtruediv__") - __floordiv__ = _delegate("__floordiv__") - __rfloordiv__ = _delegate("__rfloordiv__") - __mod__ = _delegate("__mod__") - __rmod__ = _delegate("__rmod__") - __pow__ = _delegate("__pow__") - __rpow__ = _delegate("__rpow__") - __pos__ = _delegate("__pos__") - __neg__ = _delegate("__neg__") - __abs__ = _delegate("__abs__") - __trunc__ = _delegate("__trunc__") - __lt__ = _delegate("__lt__") - __gt__ = _delegate("__gt__") - __le__ = _delegate("__le__") - __ge__ = _delegate("__ge__") - __bool__ = _delegate("__bool__") - __ceil__ = _delegate("__ceil__") - __floor__ = _delegate("__floor__") - __round__ = _delegate("__round__") - # Python >= 3.11 - if hasattr(Fraction, "__int__"): - __int__ = _delegate("__int__") - - -class ImageFileDirectory_v2(MutableMapping): - """This class represents a TIFF tag directory. To speed things up, we - don't decode tags unless they're asked for. 
- - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v2() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - 'Some Data' - - Individual values are returned as the strings or numbers, sequences are - returned as tuples of the values. - - The tiff metadata type of each item is stored in a dictionary of - tag types in - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v2.tagtype`. The types - are read from a tiff file, guessed from the type added, or added - manually. - - Data Structures: - - * ``self.tagtype = {}`` - - * Key: numerical TIFF tag number - * Value: integer corresponding to the data type from - :py:data:`.TiffTags.TYPES` - - .. versionadded:: 3.0.0 - - 'Internal' data structures: - - * ``self._tags_v2 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data, as tuple for multiple values - - * ``self._tagdata = {}`` - - * Key: numerical TIFF tag number - * Value: undecoded byte string from file - - * ``self._tags_v1 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data in the v1 format - - Tags will be found in the private attributes ``self._tagdata``, and in - ``self._tags_v2`` once decoded. - - ``self.legacy_api`` is a value for internal use, and shouldn't be changed - from outside code. In cooperation with - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`, if ``legacy_api`` - is true, then decoded tags will be populated into both ``_tags_v1`` and - ``_tags_v2``. ``_tags_v2`` will be used if this IFD is used in the TIFF - save routine. Tags should be read from ``_tags_v1`` if - ``legacy_api == true``. - - """ - - def __init__(self, ifh=b"II\052\0\0\0\0\0", prefix=None, group=None): - """Initialize an ImageFileDirectory. - - To construct an ImageFileDirectory from a real file, pass the 8-byte - magic header to the constructor. To only set the endianness, pass it - as the 'prefix' keyword argument. - - :param ifh: One of the accepted magic headers (cf. PREFIXES); also sets - endianness. - :param prefix: Override the endianness of the file. - """ - if not _accept(ifh): - msg = f"not a TIFF file (header {repr(ifh)} not valid)" - raise SyntaxError(msg) - self._prefix = prefix if prefix is not None else ifh[:2] - if self._prefix == MM: - self._endian = ">" - elif self._prefix == II: - self._endian = "<" - else: - msg = "not a TIFF IFD" - raise SyntaxError(msg) - self._bigtiff = ifh[2] == 43 - self.group = group - self.tagtype = {} - """ Dictionary of tag types """ - self.reset() - (self.next,) = ( - self._unpack("Q", ifh[8:]) if self._bigtiff else self._unpack("L", ifh[4:]) - ) - self._legacy_api = False - - prefix = property(lambda self: self._prefix) - offset = property(lambda self: self._offset) - legacy_api = property(lambda self: self._legacy_api) - - @legacy_api.setter - def legacy_api(self, value): - msg = "Not allowing setting of legacy api" - raise Exception(msg) - - def reset(self): - self._tags_v1 = {} # will remain empty if legacy_api is false - self._tags_v2 = {} # main tag storage - self._tagdata = {} - self.tagtype = {} # added 2008-06-05 by Florian Hoech - self._next = None - self._offset = None - - def __str__(self): - return str(dict(self)) - - def named(self): - """ - :returns: dict of name|key: value - - Returns the complete tag dictionary, with named tags where possible. 
- """ - return { - TiffTags.lookup(code, self.group).name: value - for code, value in self.items() - } - - def __len__(self): - return len(set(self._tagdata) | set(self._tags_v2)) - - def __getitem__(self, tag): - if tag not in self._tags_v2: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - self[tag] = handler(self, data, self.legacy_api) # check type - val = self._tags_v2[tag] - if self.legacy_api and not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - def __contains__(self, tag): - return tag in self._tags_v2 or tag in self._tagdata - - def __setitem__(self, tag, value): - self._setitem(tag, value, self.legacy_api) - - def _setitem(self, tag, value, legacy_api): - basetypes = (Number, bytes, str) - - info = TiffTags.lookup(tag, self.group) - values = [value] if isinstance(value, basetypes) else value - - if tag not in self.tagtype: - if info.type: - self.tagtype[tag] = info.type - else: - self.tagtype[tag] = TiffTags.UNDEFINED - if all(isinstance(v, IFDRational) for v in values): - self.tagtype[tag] = ( - TiffTags.RATIONAL - if all(v >= 0 for v in values) - else TiffTags.SIGNED_RATIONAL - ) - elif all(isinstance(v, int) for v in values): - if all(0 <= v < 2**16 for v in values): - self.tagtype[tag] = TiffTags.SHORT - elif all(-(2**15) < v < 2**15 for v in values): - self.tagtype[tag] = TiffTags.SIGNED_SHORT - else: - self.tagtype[tag] = ( - TiffTags.LONG - if all(v >= 0 for v in values) - else TiffTags.SIGNED_LONG - ) - elif all(isinstance(v, float) for v in values): - self.tagtype[tag] = TiffTags.DOUBLE - elif all(isinstance(v, str) for v in values): - self.tagtype[tag] = TiffTags.ASCII - elif all(isinstance(v, bytes) for v in values): - self.tagtype[tag] = TiffTags.BYTE - - if self.tagtype[tag] == TiffTags.UNDEFINED: - values = [ - v.encode("ascii", "replace") if isinstance(v, str) else v - for v in values - ] - elif self.tagtype[tag] == TiffTags.RATIONAL: - values = [float(v) if isinstance(v, int) else v for v in values] - - is_ifd = self.tagtype[tag] == TiffTags.LONG and isinstance(values, dict) - if not is_ifd: - values = tuple(info.cvt_enum(value) for value in values) - - dest = self._tags_v1 if legacy_api else self._tags_v2 - - # Three branches: - # Spec'd length == 1, Actual length 1, store as element - # Spec'd length == 1, Actual > 1, Warn and truncate. Formerly barfed. - # No Spec, Actual length 1, Formerly (<4.2) returned a 1 element tuple. - # Don't mess with the legacy api, since it's frozen. - if not is_ifd and ( - (info.length == 1) - or self.tagtype[tag] == TiffTags.BYTE - or (info.length is None and len(values) == 1 and not legacy_api) - ): - # Don't mess with the legacy api, since it's frozen. 
- if legacy_api and self.tagtype[tag] in [ - TiffTags.RATIONAL, - TiffTags.SIGNED_RATIONAL, - ]: # rationals - values = (values,) - try: - (dest[tag],) = values - except ValueError: - # We've got a builtin tag with 1 expected entry - warnings.warn( - f"Metadata Warning, tag {tag} had too many entries: " - f"{len(values)}, expected 1" - ) - dest[tag] = values[0] - - else: - # Spec'd length > 1 or undefined - # Unspec'd, and length > 1 - dest[tag] = values - - def __delitem__(self, tag): - self._tags_v2.pop(tag, None) - self._tags_v1.pop(tag, None) - self._tagdata.pop(tag, None) - - def __iter__(self): - return iter(set(self._tagdata) | set(self._tags_v2)) - - def _unpack(self, fmt, data): - return struct.unpack(self._endian + fmt, data) - - def _pack(self, fmt, *values): - return struct.pack(self._endian + fmt, *values) - - def _register_loader(idx, size): - def decorator(func): - from .TiffTags import TYPES - - if func.__name__.startswith("load_"): - TYPES[idx] = func.__name__[5:].replace("_", " ") - _load_dispatch[idx] = size, func # noqa: F821 - return func - - return decorator - - def _register_writer(idx): - def decorator(func): - _write_dispatch[idx] = func # noqa: F821 - return func - - return decorator - - def _register_basic(idx_fmt_name): - from .TiffTags import TYPES - - idx, fmt, name = idx_fmt_name - TYPES[idx] = name - size = struct.calcsize("=" + fmt) - _load_dispatch[idx] = ( # noqa: F821 - size, - lambda self, data, legacy_api=True: ( - self._unpack(f"{len(data) // size}{fmt}", data) - ), - ) - _write_dispatch[idx] = lambda self, *values: ( # noqa: F821 - b"".join(self._pack(fmt, value) for value in values) - ) - - list( - map( - _register_basic, - [ - (TiffTags.SHORT, "H", "short"), - (TiffTags.LONG, "L", "long"), - (TiffTags.SIGNED_BYTE, "b", "signed byte"), - (TiffTags.SIGNED_SHORT, "h", "signed short"), - (TiffTags.SIGNED_LONG, "l", "signed long"), - (TiffTags.FLOAT, "f", "float"), - (TiffTags.DOUBLE, "d", "double"), - (TiffTags.IFD, "L", "long"), - (TiffTags.LONG8, "Q", "long8"), - ], - ) - ) - - @_register_loader(1, 1) # Basic type, except for the legacy API. - def load_byte(self, data, legacy_api=True): - return data - - @_register_writer(1) # Basic type, except for the legacy API. 
- def write_byte(self, data): - if isinstance(data, IFDRational): - data = int(data) - if isinstance(data, int): - data = bytes((data,)) - return data - - @_register_loader(2, 1) - def load_string(self, data, legacy_api=True): - if data.endswith(b"\0"): - data = data[:-1] - return data.decode("latin-1", "replace") - - @_register_writer(2) - def write_string(self, value): - # remerge of https://github.com/python-pillow/Pillow/pull/1416 - if isinstance(value, int): - value = str(value) - if not isinstance(value, bytes): - value = value.encode("ascii", "replace") - return value + b"\0" - - @_register_loader(5, 8) - def load_rational(self, data, legacy_api=True): - vals = self._unpack(f"{len(data) // 4}L", data) - - def combine(a, b): - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2])) - - @_register_writer(5) - def write_rational(self, *values): - return b"".join( - self._pack("2L", *_limit_rational(frac, 2**32 - 1)) for frac in values - ) - - @_register_loader(7, 1) - def load_undefined(self, data, legacy_api=True): - return data - - @_register_writer(7) - def write_undefined(self, value): - if isinstance(value, int): - value = str(value).encode("ascii", "replace") - return value - - @_register_loader(10, 8) - def load_signed_rational(self, data, legacy_api=True): - vals = self._unpack(f"{len(data) // 4}l", data) - - def combine(a, b): - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2])) - - @_register_writer(10) - def write_signed_rational(self, *values): - return b"".join( - self._pack("2l", *_limit_signed_rational(frac, 2**31 - 1, -(2**31))) - for frac in values - ) - - def _ensure_read(self, fp, size): - ret = fp.read(size) - if len(ret) != size: - msg = ( - "Corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(ret)}. " - ) - raise OSError(msg) - return ret - - def load(self, fp): - self.reset() - self._offset = fp.tell() - - try: - tag_count = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("H", self._ensure_read(fp, 2)) - )[0] - for i in range(tag_count): - tag, typ, count, data = ( - self._unpack("HHQ8s", self._ensure_read(fp, 20)) - if self._bigtiff - else self._unpack("HHL4s", self._ensure_read(fp, 12)) - ) - - tagname = TiffTags.lookup(tag, self.group).name - typname = TYPES.get(typ, "unknown") - msg = f"tag: {tagname} ({tag}) - type: {typname} ({typ})" - - try: - unit_size, handler = self._load_dispatch[typ] - except KeyError: - logger.debug(msg + f" - unsupported type {typ}") - continue # ignore unsupported type - size = count * unit_size - if size > (8 if self._bigtiff else 4): - here = fp.tell() - (offset,) = self._unpack("Q" if self._bigtiff else "L", data) - msg += f" Tag Location: {here} - Data Location: {offset}" - fp.seek(offset) - data = ImageFile._safe_read(fp, size) - fp.seek(here) - else: - data = data[:size] - - if len(data) != size: - warnings.warn( - "Possibly corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(data)}." 
- f" Skipping tag {tag}" - ) - logger.debug(msg) - continue - - if not data: - logger.debug(msg) - continue - - self._tagdata[tag] = data - self.tagtype[tag] = typ - - msg += " - value: " + ( - "" % size if size > 32 else repr(data) - ) - logger.debug(msg) - - (self.next,) = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("L", self._ensure_read(fp, 4)) - ) - except OSError as msg: - warnings.warn(str(msg)) - return - - def tobytes(self, offset=0): - # FIXME What about tagdata? - result = self._pack("H", len(self._tags_v2)) - - entries = [] - offset = offset + len(result) + len(self._tags_v2) * 12 + 4 - stripoffsets = None - - # pass 1: convert tags to binary format - # always write tags in ascending order - for tag, value in sorted(self._tags_v2.items()): - if tag == STRIPOFFSETS: - stripoffsets = len(entries) - typ = self.tagtype.get(tag) - logger.debug(f"Tag {tag}, Type: {typ}, Value: {repr(value)}") - is_ifd = typ == TiffTags.LONG and isinstance(value, dict) - if is_ifd: - if self._endian == "<": - ifh = b"II\x2A\x00\x08\x00\x00\x00" - else: - ifh = b"MM\x00\x2A\x00\x00\x00\x08" - ifd = ImageFileDirectory_v2(ifh, group=tag) - values = self._tags_v2[tag] - for ifd_tag, ifd_value in values.items(): - ifd[ifd_tag] = ifd_value - data = ifd.tobytes(offset) - else: - values = value if isinstance(value, tuple) else (value,) - data = self._write_dispatch[typ](self, *values) - - tagname = TiffTags.lookup(tag, self.group).name - typname = "ifd" if is_ifd else TYPES.get(typ, "unknown") - msg = f"save: {tagname} ({tag}) - type: {typname} ({typ})" - msg += " - value: " + ( - "" % len(data) if len(data) >= 16 else str(values) - ) - logger.debug(msg) - - # count is sum of lengths for string and arbitrary data - if is_ifd: - count = 1 - elif typ in [TiffTags.BYTE, TiffTags.ASCII, TiffTags.UNDEFINED]: - count = len(data) - else: - count = len(values) - # figure out if data fits into the entry - if len(data) <= 4: - entries.append((tag, typ, count, data.ljust(4, b"\0"), b"")) - else: - entries.append((tag, typ, count, self._pack("L", offset), data)) - offset += (len(data) + 1) // 2 * 2 # pad to word - - # update strip offset data to point beyond auxiliary data - if stripoffsets is not None: - tag, typ, count, value, data = entries[stripoffsets] - if data: - msg = "multistrip support not yet implemented" - raise NotImplementedError(msg) - value = self._pack("L", self._unpack("L", value)[0] + offset) - entries[stripoffsets] = tag, typ, count, value, data - - # pass 2: write entries to file - for tag, typ, count, value, data in entries: - logger.debug(f"{tag} {typ} {count} {repr(value)} {repr(data)}") - result += self._pack("HHL4s", tag, typ, count, value) - - # -- overwrite here for multi-page -- - result += b"\0\0\0\0" # end of entries - - # pass 3: write auxiliary data to file - for tag, typ, count, value, data in entries: - result += data - if len(data) & 1: - result += b"\0" - - return result - - def save(self, fp): - if fp.tell() == 0: # skip TIFF header on subsequent pages - # tiff header -- PIL always starts the first IFD at offset 8 - fp.write(self._prefix + self._pack("HL", 42, 8)) - - offset = fp.tell() - result = self.tobytes(offset) - fp.write(result) - return offset + len(result) - - -ImageFileDirectory_v2._load_dispatch = _load_dispatch -ImageFileDirectory_v2._write_dispatch = _write_dispatch -for idx, name in TYPES.items(): - name = name.replace(" ", "_") - setattr(ImageFileDirectory_v2, "load_" + name, _load_dispatch[idx][1]) - 
setattr(ImageFileDirectory_v2, "write_" + name, _write_dispatch[idx]) -del _load_dispatch, _write_dispatch, idx, name - - -# Legacy ImageFileDirectory support. -class ImageFileDirectory_v1(ImageFileDirectory_v2): - """This class represents the **legacy** interface to a TIFF tag directory. - - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v1() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - ('Some Data',) - - Also contains a dictionary of tag types as read from the tiff image file, - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v1.tagtype`. - - Values are returned as a tuple. - - .. deprecated:: 3.0.0 - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._legacy_api = True - - tags = property(lambda self: self._tags_v1) - tagdata = property(lambda self: self._tagdata) - - # defined in ImageFileDirectory_v2 - tagtype: dict - """Dictionary of tag types""" - - @classmethod - def from_v2(cls, original): - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance. - - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - - """ - - ifd = cls(prefix=original.prefix) - ifd._tagdata = original._tagdata - ifd.tagtype = original.tagtype - ifd.next = original.next # an indicator for multipage tiffs - return ifd - - def to_v2(self): - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance. - - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - - """ - - ifd = ImageFileDirectory_v2(prefix=self.prefix) - ifd._tagdata = dict(self._tagdata) - ifd.tagtype = dict(self.tagtype) - ifd._tags_v2 = dict(self._tags_v2) - return ifd - - def __contains__(self, tag): - return tag in self._tags_v1 or tag in self._tagdata - - def __len__(self): - return len(set(self._tagdata) | set(self._tags_v1)) - - def __iter__(self): - return iter(set(self._tagdata) | set(self._tags_v1)) - - def __setitem__(self, tag, value): - for legacy_api in (False, True): - self._setitem(tag, value, legacy_api) - - def __getitem__(self, tag): - if tag not in self._tags_v1: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - for legacy in (False, True): - self._setitem(tag, handler(self, data, legacy), legacy) - val = self._tags_v1[tag] - if not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - -# undone -- switch this pointer when IFD_LEGACY_API == False -ImageFileDirectory = ImageFileDirectory_v1 - - -## -# Image plugin for TIFF files. 
- - -class TiffImageFile(ImageFile.ImageFile): - format = "TIFF" - format_description = "Adobe TIFF" - _close_exclusive_fp_after_loading = False - - def __init__(self, fp=None, filename=None): - self.tag_v2 = None - """ Image file directory (tag dictionary) """ - - self.tag = None - """ Legacy tag entries """ - - super().__init__(fp, filename) - - def _open(self): - """Open the first image in a TIFF file""" - - # Header - ifh = self.fp.read(8) - if ifh[2] == 43: - ifh += self.fp.read(8) - - self.tag_v2 = ImageFileDirectory_v2(ifh) - - # legacy IFD entries will be filled in later - self.ifd = None - - # setup frame pointers - self.__first = self.__next = self.tag_v2.next - self.__frame = -1 - self._fp = self.fp - self._frame_pos = [] - self._n_frames = None - - logger.debug("*** TiffImageFile._open ***") - logger.debug(f"- __first: {self.__first}") - logger.debug(f"- ifh: {repr(ifh)}") # Use repr to avoid str(bytes) - - # and load the first frame - self._seek(0) - - @property - def n_frames(self): - if self._n_frames is None: - current = self.tell() - self._seek(len(self._frame_pos)) - while self._n_frames is None: - self._seek(self.tell() + 1) - self.seek(current) - return self._n_frames - - def seek(self, frame): - """Select a given frame as current image""" - if not self._seek_check(frame): - return - self._seek(frame) - # Create a new core image object on second and - # subsequent frames in the image. Image may be - # different size/mode. - Image._decompression_bomb_check(self.size) - self.im = Image.core.new(self.mode, self.size) - - def _seek(self, frame): - self.fp = self._fp - - # reset buffered io handle in case fp - # was passed to libtiff, invalidating the buffer - self.fp.tell() - - while len(self._frame_pos) <= frame: - if not self.__next: - msg = "no more images in TIFF file" - raise EOFError(msg) - logger.debug( - f"Seeking to frame {frame}, on frame {self.__frame}, " - f"__next {self.__next}, location: {self.fp.tell()}" - ) - self.fp.seek(self.__next) - self._frame_pos.append(self.__next) - logger.debug("Loading tags, location: %s" % self.fp.tell()) - self.tag_v2.load(self.fp) - if self.tag_v2.next in self._frame_pos: - # This IFD has already been processed - # Declare this to be the end of the image - self.__next = 0 - else: - self.__next = self.tag_v2.next - if self.__next == 0: - self._n_frames = frame + 1 - if len(self._frame_pos) == 1: - self.is_animated = self.__next != 0 - self.__frame += 1 - self.fp.seek(self._frame_pos[frame]) - self.tag_v2.load(self.fp) - self._reload_exif() - # fill the legacy tag/ifd entries - self.tag = self.ifd = ImageFileDirectory_v1.from_v2(self.tag_v2) - self.__frame = frame - self._setup() - - def tell(self): - """Return the current frame number""" - return self.__frame - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. - """ - return self._getxmp(self.tag_v2[XMP]) if XMP in self.tag_v2 else {} - - def get_photoshop_blocks(self): - """ - Returns a dictionary of Photoshop "Image Resource Blocks". - The keys are the image resource ID. For more information, see - https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577409_pgfId-1037727 - - :returns: Photoshop "Image Resource Blocks" in a dictionary. 
- """ - blocks = {} - val = self.tag_v2.get(ExifTags.Base.ImageResources) - if val: - while val[:4] == b"8BIM": - id = i16(val[4:6]) - n = math.ceil((val[6] + 1) / 2) * 2 - size = i32(val[6 + n : 10 + n]) - data = val[10 + n : 10 + n + size] - blocks[id] = {"data": data} - - val = val[math.ceil((10 + n + size) / 2) * 2 :] - return blocks - - def load(self): - if self.tile and self.use_load_libtiff: - return self._load_libtiff() - return super().load() - - def load_end(self): - if self._tile_orientation: - method = { - 2: Image.Transpose.FLIP_LEFT_RIGHT, - 3: Image.Transpose.ROTATE_180, - 4: Image.Transpose.FLIP_TOP_BOTTOM, - 5: Image.Transpose.TRANSPOSE, - 6: Image.Transpose.ROTATE_270, - 7: Image.Transpose.TRANSVERSE, - 8: Image.Transpose.ROTATE_90, - }.get(self._tile_orientation) - if method is not None: - self.im = self.im.transpose(method) - self._size = self.im.size - - # allow closing if we're on the first frame, there's no next - # This is the ImageFile.load path only, libtiff specific below. - if not self.is_animated: - self._close_exclusive_fp_after_loading = True - - # reset buffered io handle in case fp - # was passed to libtiff, invalidating the buffer - self.fp.tell() - - # load IFD data from fp before it is closed - exif = self.getexif() - for key in TiffTags.TAGS_V2_GROUPS: - if key not in exif: - continue - exif.get_ifd(key) - - def _load_libtiff(self): - """Overload method triggered when we detect a compressed tiff - Calls out to libtiff""" - - Image.Image.load(self) - - self.load_prepare() - - if not len(self.tile) == 1: - msg = "Not exactly one tile" - raise OSError(msg) - - # (self._compression, (extents tuple), - # 0, (rawmode, self._compression, fp)) - extents = self.tile[0][1] - args = list(self.tile[0][3]) - - # To be nice on memory footprint, if there's a - # file descriptor, use that instead of reading - # into a string in python. - try: - fp = hasattr(self.fp, "fileno") and self.fp.fileno() - # flush the file descriptor, prevents error on pypy 2.4+ - # should also eliminate the need for fp.tell - # in _seek - if hasattr(self.fp, "flush"): - self.fp.flush() - except OSError: - # io.BytesIO have a fileno, but returns an OSError if - # it doesn't use a file descriptor. - fp = False - - if fp: - args[2] = fp - - decoder = Image._getdecoder( - self.mode, "libtiff", tuple(args), self.decoderconfig - ) - try: - decoder.setimage(self.im, extents) - except ValueError as e: - msg = "Couldn't set the image" - raise OSError(msg) from e - - close_self_fp = self._exclusive_fp and not self.is_animated - if hasattr(self.fp, "getvalue"): - # We've got a stringio like thing passed in. Yay for all in memory. - # The decoder needs the entire file in one shot, so there's not - # a lot we can do here other than give it the entire file. - # unless we could do something like get the address of the - # underlying string for stringio. - # - # Rearranging for supporting byteio items, since they have a fileno - # that returns an OSError if there's no underlying fp. Easier to - # deal with here by reordering. - logger.debug("have getvalue. just sending in a string from getvalue") - n, err = decoder.decode(self.fp.getvalue()) - elif fp: - # we've got a actual file on disk, pass in the fp. - logger.debug("have fileno, calling fileno version of the decoder.") - if not close_self_fp: - self.fp.seek(0) - # 4 bytes, otherwise the trace might error out - n, err = decoder.decode(b"fpfp") - else: - # we have something else. - logger.debug("don't have fileno or getvalue. 
just reading") - self.fp.seek(0) - # UNDONE -- so much for that buffer size thing. - n, err = decoder.decode(self.fp.read()) - - self.tile = [] - self.readonly = 0 - - self.load_end() - - if close_self_fp: - self.fp.close() - self.fp = None # might be shared - - if err < 0: - raise OSError(err) - - return Image.Image.load(self) - - def _setup(self): - """Setup this image object based on current tags""" - - if 0xBC01 in self.tag_v2: - msg = "Windows Media Photo files not yet supported" - raise OSError(msg) - - # extract relevant tags - self._compression = COMPRESSION_INFO[self.tag_v2.get(COMPRESSION, 1)] - self._planar_configuration = self.tag_v2.get(PLANAR_CONFIGURATION, 1) - - # photometric is a required tag, but not everyone is reading - # the specification - photo = self.tag_v2.get(PHOTOMETRIC_INTERPRETATION, 0) - - # old style jpeg compression images most certainly are YCbCr - if self._compression == "tiff_jpeg": - photo = 6 - - fillorder = self.tag_v2.get(FILLORDER, 1) - - logger.debug("*** Summary ***") - logger.debug(f"- compression: {self._compression}") - logger.debug(f"- photometric_interpretation: {photo}") - logger.debug(f"- planar_configuration: {self._planar_configuration}") - logger.debug(f"- fill_order: {fillorder}") - logger.debug(f"- YCbCr subsampling: {self.tag.get(YCBCRSUBSAMPLING)}") - - # size - xsize = int(self.tag_v2.get(IMAGEWIDTH)) - ysize = int(self.tag_v2.get(IMAGELENGTH)) - self._size = xsize, ysize - - logger.debug(f"- size: {self.size}") - - sample_format = self.tag_v2.get(SAMPLEFORMAT, (1,)) - if len(sample_format) > 1 and max(sample_format) == min(sample_format) == 1: - # SAMPLEFORMAT is properly per band, so an RGB image will - # be (1,1,1). But, we don't support per band pixel types, - # and anything more than one band is a uint8. So, just - # take the first element. Revisit this if adding support - # for more exotic images. - sample_format = (1,) - - bps_tuple = self.tag_v2.get(BITSPERSAMPLE, (1,)) - extra_tuple = self.tag_v2.get(EXTRASAMPLES, ()) - if photo in (2, 6, 8): # RGB, YCbCr, LAB - bps_count = 3 - elif photo == 5: # CMYK - bps_count = 4 - else: - bps_count = 1 - bps_count += len(extra_tuple) - bps_actual_count = len(bps_tuple) - samples_per_pixel = self.tag_v2.get( - SAMPLESPERPIXEL, - 3 if self._compression == "tiff_jpeg" and photo in (2, 6) else 1, - ) - - if samples_per_pixel > MAX_SAMPLESPERPIXEL: - # DOS check, samples_per_pixel can be a Long, and we extend the tuple below - logger.error( - "More samples per pixel than can be decoded: %s", samples_per_pixel - ) - msg = "Invalid value for samples per pixel" - raise SyntaxError(msg) - - if samples_per_pixel < bps_actual_count: - # If a file has more values in bps_tuple than expected, - # remove the excess. - bps_tuple = bps_tuple[:samples_per_pixel] - elif samples_per_pixel > bps_actual_count and bps_actual_count == 1: - # If a file has only one value in bps_tuple, when it should have more, - # presume it is the same number of bits for all of the samples. 
- bps_tuple = bps_tuple * samples_per_pixel - - if len(bps_tuple) != samples_per_pixel: - msg = "unknown data organization" - raise SyntaxError(msg) - - # mode: check photometric interpretation and bits per pixel - key = ( - self.tag_v2.prefix, - photo, - sample_format, - fillorder, - bps_tuple, - extra_tuple, - ) - logger.debug(f"format key: {key}") - try: - self.mode, rawmode = OPEN_INFO[key] - except KeyError as e: - logger.debug("- unsupported format") - msg = "unknown pixel mode" - raise SyntaxError(msg) from e - - logger.debug(f"- raw mode: {rawmode}") - logger.debug(f"- pil mode: {self.mode}") - - self.info["compression"] = self._compression - - xres = self.tag_v2.get(X_RESOLUTION, 1) - yres = self.tag_v2.get(Y_RESOLUTION, 1) - - if xres and yres: - resunit = self.tag_v2.get(RESOLUTION_UNIT) - if resunit == 2: # dots per inch - self.info["dpi"] = (xres, yres) - elif resunit == 3: # dots per centimeter. convert to dpi - self.info["dpi"] = (xres * 2.54, yres * 2.54) - elif resunit is None: # used to default to 1, but now 2) - self.info["dpi"] = (xres, yres) - # For backward compatibility, - # we also preserve the old behavior - self.info["resolution"] = xres, yres - else: # No absolute unit of measurement - self.info["resolution"] = xres, yres - - # build tile descriptors - x = y = layer = 0 - self.tile = [] - self.use_load_libtiff = READ_LIBTIFF or self._compression != "raw" - if self.use_load_libtiff: - # Decoder expects entire file as one tile. - # There's a buffer size limit in load (64k) - # so large g4 images will fail if we use that - # function. - # - # Setup the one tile for the whole image, then - # use the _load_libtiff function. - - # libtiff handles the fillmode for us, so 1;IR should - # actually be 1;I. Including the R double reverses the - # bits, so stripes of the image are reversed. See - # https://github.com/python-pillow/Pillow/issues/279 - if fillorder == 2: - # Replace fillorder with fillorder=1 - key = key[:3] + (1,) + key[4:] - logger.debug(f"format key: {key}") - # this should always work, since all the - # fillorder==2 modes have a corresponding - # fillorder=1 mode - self.mode, rawmode = OPEN_INFO[key] - # libtiff always returns the bytes in native order. - # we're expecting image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. 
- if rawmode == "I;16": - rawmode = "I;16N" - if ";16B" in rawmode: - rawmode = rawmode.replace(";16B", ";16N") - if ";16L" in rawmode: - rawmode = rawmode.replace(";16L", ";16N") - - # YCbCr images with new jpeg compression with pixels in one plane - # unpacked straight into RGB values - if ( - photo == 6 - and self._compression == "jpeg" - and self._planar_configuration == 1 - ): - rawmode = "RGB" - - # Offset in the tile tuple is 0, we go from 0,0 to - # w,h, and we only do this once -- eds - a = (rawmode, self._compression, False, self.tag_v2.offset) - self.tile.append(("libtiff", (0, 0, xsize, ysize), 0, a)) - - elif STRIPOFFSETS in self.tag_v2 or TILEOFFSETS in self.tag_v2: - # striped image - if STRIPOFFSETS in self.tag_v2: - offsets = self.tag_v2[STRIPOFFSETS] - h = self.tag_v2.get(ROWSPERSTRIP, ysize) - w = self.size[0] - else: - # tiled image - offsets = self.tag_v2[TILEOFFSETS] - w = self.tag_v2.get(TILEWIDTH) - h = self.tag_v2.get(TILELENGTH) - - for offset in offsets: - if x + w > xsize: - stride = w * sum(bps_tuple) / 8 # bytes per line - else: - stride = 0 - - tile_rawmode = rawmode - if self._planar_configuration == 2: - # each band on it's own layer - tile_rawmode = rawmode[layer] - # adjust stride width accordingly - stride /= bps_count - - a = (tile_rawmode, int(stride), 1) - self.tile.append( - ( - self._compression, - (x, y, min(x + w, xsize), min(y + h, ysize)), - offset, - a, - ) - ) - x = x + w - if x >= self.size[0]: - x, y = 0, y + h - if y >= self.size[1]: - x = y = 0 - layer += 1 - else: - logger.debug("- unsupported data organization") - msg = "unknown data organization" - raise SyntaxError(msg) - - # Fix up info. - if ICCPROFILE in self.tag_v2: - self.info["icc_profile"] = self.tag_v2[ICCPROFILE] - - # fixup palette descriptor - - if self.mode in ["P", "PA"]: - palette = [o8(b // 256) for b in self.tag_v2[COLORMAP]] - self.palette = ImagePalette.raw("RGB;L", b"".join(palette)) - - self._tile_orientation = self.tag_v2.get(ExifTags.Base.Orientation) - - -# -# -------------------------------------------------------------------- -# Write TIFF files - -# little endian is default except for image modes with -# explicit big endian byte-order - -SAVE_INFO = { - # mode => rawmode, byteorder, photometrics, - # sampleformat, bitspersample, extra - "1": ("1", II, 1, 1, (1,), None), - "L": ("L", II, 1, 1, (8,), None), - "LA": ("LA", II, 1, 1, (8, 8), 2), - "P": ("P", II, 3, 1, (8,), None), - "PA": ("PA", II, 3, 1, (8, 8), 2), - "I": ("I;32S", II, 1, 2, (32,), None), - "I;16": ("I;16", II, 1, 1, (16,), None), - "I;16S": ("I;16S", II, 1, 2, (16,), None), - "F": ("F;32F", II, 1, 3, (32,), None), - "RGB": ("RGB", II, 2, 1, (8, 8, 8), None), - "RGBX": ("RGBX", II, 2, 1, (8, 8, 8, 8), 0), - "RGBA": ("RGBA", II, 2, 1, (8, 8, 8, 8), 2), - "CMYK": ("CMYK", II, 5, 1, (8, 8, 8, 8), None), - "YCbCr": ("YCbCr", II, 6, 1, (8, 8, 8), None), - "LAB": ("LAB", II, 8, 1, (8, 8, 8), None), - "I;32BS": ("I;32BS", MM, 1, 2, (32,), None), - "I;16B": ("I;16B", MM, 1, 1, (16,), None), - "I;16BS": ("I;16BS", MM, 1, 2, (16,), None), - "F;32BF": ("F;32BF", MM, 1, 3, (32,), None), -} - - -def _save(im, fp, filename): - try: - rawmode, prefix, photo, format, bits, extra = SAVE_INFO[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as TIFF" - raise OSError(msg) from e - - ifd = ImageFileDirectory_v2(prefix=prefix) - - encoderinfo = im.encoderinfo - encoderconfig = im.encoderconfig - try: - compression = encoderinfo["compression"] - except KeyError: - compression = 
im.info.get("compression") - if isinstance(compression, int): - # compression value may be from BMP. Ignore it - compression = None - if compression is None: - compression = "raw" - elif compression == "tiff_jpeg": - # OJPEG is obsolete, so use new-style JPEG compression instead - compression = "jpeg" - elif compression == "tiff_deflate": - compression = "tiff_adobe_deflate" - - libtiff = WRITE_LIBTIFF or compression != "raw" - - # required for color libtiff images - ifd[PLANAR_CONFIGURATION] = 1 - - ifd[IMAGEWIDTH] = im.size[0] - ifd[IMAGELENGTH] = im.size[1] - - # write any arbitrary tags passed in as an ImageFileDirectory - if "tiffinfo" in encoderinfo: - info = encoderinfo["tiffinfo"] - elif "exif" in encoderinfo: - info = encoderinfo["exif"] - if isinstance(info, bytes): - exif = Image.Exif() - exif.load(info) - info = exif - else: - info = {} - logger.debug("Tiffinfo Keys: %s" % list(info)) - if isinstance(info, ImageFileDirectory_v1): - info = info.to_v2() - for key in info: - if isinstance(info, Image.Exif) and key in TiffTags.TAGS_V2_GROUPS: - ifd[key] = info.get_ifd(key) - else: - ifd[key] = info.get(key) - try: - ifd.tagtype[key] = info.tagtype[key] - except Exception: - pass # might not be an IFD. Might not have populated type - - # additions written by Greg Couch, gregc@cgl.ucsf.edu - # inspired by image-sig posting from Kevin Cazabon, kcazabon@home.com - if hasattr(im, "tag_v2"): - # preserve tags from original TIFF image file - for key in ( - RESOLUTION_UNIT, - X_RESOLUTION, - Y_RESOLUTION, - IPTC_NAA_CHUNK, - PHOTOSHOP_CHUNK, - XMP, - ): - if key in im.tag_v2: - ifd[key] = im.tag_v2[key] - ifd.tagtype[key] = im.tag_v2.tagtype[key] - - # preserve ICC profile (should also work when saving other formats - # which support profiles as TIFF) -- 2008-06-06 Florian Hoech - icc = encoderinfo.get("icc_profile", im.info.get("icc_profile")) - if icc: - ifd[ICCPROFILE] = icc - - for key, name in [ - (IMAGEDESCRIPTION, "description"), - (X_RESOLUTION, "resolution"), - (Y_RESOLUTION, "resolution"), - (X_RESOLUTION, "x_resolution"), - (Y_RESOLUTION, "y_resolution"), - (RESOLUTION_UNIT, "resolution_unit"), - (SOFTWARE, "software"), - (DATE_TIME, "date_time"), - (ARTIST, "artist"), - (COPYRIGHT, "copyright"), - ]: - if name in encoderinfo: - ifd[key] = encoderinfo[name] - - dpi = encoderinfo.get("dpi") - if dpi: - ifd[RESOLUTION_UNIT] = 2 - ifd[X_RESOLUTION] = dpi[0] - ifd[Y_RESOLUTION] = dpi[1] - - if bits != (1,): - ifd[BITSPERSAMPLE] = bits - if len(bits) != 1: - ifd[SAMPLESPERPIXEL] = len(bits) - if extra is not None: - ifd[EXTRASAMPLES] = extra - if format != 1: - ifd[SAMPLEFORMAT] = format - - if PHOTOMETRIC_INTERPRETATION not in ifd: - ifd[PHOTOMETRIC_INTERPRETATION] = photo - elif im.mode in ("1", "L") and ifd[PHOTOMETRIC_INTERPRETATION] == 0: - if im.mode == "1": - inverted_im = im.copy() - px = inverted_im.load() - for y in range(inverted_im.height): - for x in range(inverted_im.width): - px[x, y] = 0 if px[x, y] == 255 else 255 - im = inverted_im - else: - im = ImageOps.invert(im) - - if im.mode in ["P", "PA"]: - lut = im.im.getpalette("RGB", "RGB;L") - colormap = [] - colors = len(lut) // 3 - for i in range(3): - colormap += [v * 256 for v in lut[colors * i : colors * (i + 1)]] - colormap += [0] * (256 - colors) - ifd[COLORMAP] = colormap - # data orientation - stride = len(bits) * ((im.size[0] * bits[0] + 7) // 8) - # aim for given strip size (64 KB by default) when using libtiff writer - if libtiff: - im_strip_size = encoderinfo.get("strip_size", STRIP_SIZE) - rows_per_strip = 
1 if stride == 0 else min(im_strip_size // stride, im.size[1]) - # JPEG encoder expects multiple of 8 rows - if compression == "jpeg": - rows_per_strip = min(((rows_per_strip + 7) // 8) * 8, im.size[1]) - else: - rows_per_strip = im.size[1] - if rows_per_strip == 0: - rows_per_strip = 1 - strip_byte_counts = 1 if stride == 0 else stride * rows_per_strip - strips_per_image = (im.size[1] + rows_per_strip - 1) // rows_per_strip - ifd[ROWSPERSTRIP] = rows_per_strip - if strip_byte_counts >= 2**16: - ifd.tagtype[STRIPBYTECOUNTS] = TiffTags.LONG - ifd[STRIPBYTECOUNTS] = (strip_byte_counts,) * (strips_per_image - 1) + ( - stride * im.size[1] - strip_byte_counts * (strips_per_image - 1), - ) - ifd[STRIPOFFSETS] = tuple( - range(0, strip_byte_counts * strips_per_image, strip_byte_counts) - ) # this is adjusted by IFD writer - # no compression by default: - ifd[COMPRESSION] = COMPRESSION_INFO_REV.get(compression, 1) - - if im.mode == "YCbCr": - for tag, value in { - YCBCRSUBSAMPLING: (1, 1), - REFERENCEBLACKWHITE: (0, 255, 128, 255, 128, 255), - }.items(): - ifd.setdefault(tag, value) - - blocklist = [TILEWIDTH, TILELENGTH, TILEOFFSETS, TILEBYTECOUNTS] - if libtiff: - if "quality" in encoderinfo: - quality = encoderinfo["quality"] - if not isinstance(quality, int) or quality < 0 or quality > 100: - msg = "Invalid quality setting" - raise ValueError(msg) - if compression != "jpeg": - msg = "quality setting only supported for 'jpeg' compression" - raise ValueError(msg) - ifd[JPEGQUALITY] = quality - - logger.debug("Saving using libtiff encoder") - logger.debug("Items: %s" % sorted(ifd.items())) - _fp = 0 - if hasattr(fp, "fileno"): - try: - fp.seek(0) - _fp = os.dup(fp.fileno()) - except io.UnsupportedOperation: - pass - - # optional types for non core tags - types = {} - # STRIPOFFSETS and STRIPBYTECOUNTS are added by the library - # based on the data in the strip. - # The other tags expect arrays with a certain length (fixed or depending on - # BITSPERSAMPLE, etc), passing arrays with a different length will result in - # segfaults. Block these tags until we add extra validation. - # SUBIFD may also cause a segfault. - blocklist += [ - REFERENCEBLACKWHITE, - STRIPBYTECOUNTS, - STRIPOFFSETS, - TRANSFERFUNCTION, - SUBIFD, - ] - - # bits per sample is a single short in the tiff directory, not a list. - atts = {BITSPERSAMPLE: bits[0]} - # Merge the ones that we have with (optional) more bits from - # the original file, e.g x,y resolution so that we can - # save(load('')) == original file. - legacy_ifd = {} - if hasattr(im, "tag"): - legacy_ifd = im.tag.to_v2() - - # SAMPLEFORMAT is determined by the image format and should not be copied - # from legacy_ifd. - supplied_tags = {**getattr(im, "tag_v2", {}), **legacy_ifd} - if SAMPLEFORMAT in supplied_tags: - del supplied_tags[SAMPLEFORMAT] - - for tag, value in itertools.chain(ifd.items(), supplied_tags.items()): - # Libtiff can only process certain core items without adding - # them to the custom dictionary. - # Custom items are supported for int, float, unicode, string and byte - # values. Other types and tuples require a tagtype. 
- if tag not in TiffTags.LIBTIFF_CORE: - if not getattr(Image.core, "libtiff_support_custom_tags", False): - continue - - if tag in ifd.tagtype: - types[tag] = ifd.tagtype[tag] - elif not (isinstance(value, (int, float, str, bytes))): - continue - else: - type = TiffTags.lookup(tag).type - if type: - types[tag] = type - if tag not in atts and tag not in blocklist: - if isinstance(value, str): - atts[tag] = value.encode("ascii", "replace") + b"\0" - elif isinstance(value, IFDRational): - atts[tag] = float(value) - else: - atts[tag] = value - - if SAMPLEFORMAT in atts and len(atts[SAMPLEFORMAT]) == 1: - atts[SAMPLEFORMAT] = atts[SAMPLEFORMAT][0] - - logger.debug("Converted items: %s" % sorted(atts.items())) - - # libtiff always expects the bytes in native order. - # we're storing image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. - if im.mode in ("I;16B", "I;16"): - rawmode = "I;16N" - - # Pass tags as sorted list so that the tags are set in a fixed order. - # This is required by libtiff for some tags. For example, the JPEGQUALITY - # pseudo tag requires that the COMPRESS tag was already set. - tags = list(atts.items()) - tags.sort() - a = (rawmode, compression, _fp, filename, tags, types) - e = Image._getencoder(im.mode, "libtiff", a, encoderconfig) - e.setimage(im.im, (0, 0) + im.size) - while True: - # undone, change to self.decodermaxblock: - errcode, data = e.encode(16 * 1024)[1:] - if not _fp: - fp.write(data) - if errcode: - break - if _fp: - try: - os.close(_fp) - except OSError: - pass - if errcode < 0: - msg = f"encoder error {errcode} when writing image file" - raise OSError(msg) - - else: - for tag in blocklist: - del ifd[tag] - offset = ifd.save(fp) - - ImageFile._save( - im, fp, [("raw", (0, 0) + im.size, offset, (rawmode, stride, 1))] - ) - - # -- helper for multi-page save -- - if "_debug_multipage" in encoderinfo: - # just to access o32 and o16 (using correct byte order) - im._debug_multipage = ifd - - -class AppendingTiffWriter: - fieldSizes = [ - 0, # None - 1, # byte - 1, # ascii - 2, # short - 4, # long - 8, # rational - 1, # sbyte - 1, # undefined - 2, # sshort - 4, # slong - 8, # srational - 4, # float - 8, # double - 4, # ifd - 2, # unicode - 4, # complex - 8, # long8 - ] - - # StripOffsets = 273 - # FreeOffsets = 288 - # TileOffsets = 324 - # JPEGQTables = 519 - # JPEGDCTables = 520 - # JPEGACTables = 521 - Tags = {273, 288, 324, 519, 520, 521} - - def __init__(self, fn, new=False): - if hasattr(fn, "read"): - self.f = fn - self.close_fp = False - else: - self.name = fn - self.close_fp = True - try: - self.f = open(fn, "w+b" if new else "r+b") - except OSError: - self.f = open(fn, "w+b") - self.beginning = self.f.tell() - self.setup() - - def setup(self): - # Reset everything. - self.f.seek(self.beginning, os.SEEK_SET) - - self.whereToWriteNewIFDOffset = None - self.offsetOfNewPage = 0 - - self.IIMM = iimm = self.f.read(4) - if not iimm: - # empty file - first page - self.isFirst = True - return - - self.isFirst = False - if iimm == b"II\x2a\x00": - self.setEndian("<") - elif iimm == b"MM\x00\x2a": - self.setEndian(">") - else: - msg = "Invalid TIFF file header" - raise RuntimeError(msg) - - self.skipIFDs() - self.goToEnd() - - def finalize(self): - if self.isFirst: - return - - # fix offsets - self.f.seek(self.offsetOfNewPage) - - iimm = self.f.read(4) - if not iimm: - # msg = "nothing written into new page" - # raise RuntimeError(msg) - # Make it easy to finish a frame without committing to a new one. 
- return - - if iimm != self.IIMM: - msg = "IIMM of new page doesn't match IIMM of first page" - raise RuntimeError(msg) - - ifd_offset = self.readLong() - ifd_offset += self.offsetOfNewPage - self.f.seek(self.whereToWriteNewIFDOffset) - self.writeLong(ifd_offset) - self.f.seek(ifd_offset) - self.fixIFD() - - def newFrame(self): - # Call this to finish a frame. - self.finalize() - self.setup() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - if self.close_fp: - self.close() - return False - - def tell(self): - return self.f.tell() - self.offsetOfNewPage - - def seek(self, offset, whence=io.SEEK_SET): - if whence == os.SEEK_SET: - offset += self.offsetOfNewPage - - self.f.seek(offset, whence) - return self.tell() - - def goToEnd(self): - self.f.seek(0, os.SEEK_END) - pos = self.f.tell() - - # pad to 16 byte boundary - pad_bytes = 16 - pos % 16 - if 0 < pad_bytes < 16: - self.f.write(bytes(pad_bytes)) - self.offsetOfNewPage = self.f.tell() - - def setEndian(self, endian): - self.endian = endian - self.longFmt = self.endian + "L" - self.shortFmt = self.endian + "H" - self.tagFormat = self.endian + "HHL" - - def skipIFDs(self): - while True: - ifd_offset = self.readLong() - if ifd_offset == 0: - self.whereToWriteNewIFDOffset = self.f.tell() - 4 - break - - self.f.seek(ifd_offset) - num_tags = self.readShort() - self.f.seek(num_tags * 12, os.SEEK_CUR) - - def write(self, data): - return self.f.write(data) - - def readShort(self): - (value,) = struct.unpack(self.shortFmt, self.f.read(2)) - return value - - def readLong(self): - (value,) = struct.unpack(self.longFmt, self.f.read(4)) - return value - - def rewriteLastShortToLong(self, value): - self.f.seek(-2, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def rewriteLastShort(self, value): - self.f.seek(-2, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.shortFmt, value)) - if bytes_written is not None and bytes_written != 2: - msg = f"wrote only {bytes_written} bytes but wanted 2" - raise RuntimeError(msg) - - def rewriteLastLong(self, value): - self.f.seek(-4, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def writeShort(self, value): - bytes_written = self.f.write(struct.pack(self.shortFmt, value)) - if bytes_written is not None and bytes_written != 2: - msg = f"wrote only {bytes_written} bytes but wanted 2" - raise RuntimeError(msg) - - def writeLong(self, value): - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def close(self): - self.finalize() - self.f.close() - - def fixIFD(self): - num_tags = self.readShort() - - for i in range(num_tags): - tag, field_type, count = struct.unpack(self.tagFormat, self.f.read(8)) - - field_size = self.fieldSizes[field_type] - total_size = field_size * count - is_local = total_size <= 4 - if not is_local: - offset = self.readLong() - offset += self.offsetOfNewPage - self.rewriteLastLong(offset) - - if tag in self.Tags: - cur_pos = self.f.tell() - - if is_local: - self.fixOffsets( - count, isShort=(field_size == 2), isLong=(field_size == 4) - ) - 
self.f.seek(cur_pos + 4) - else: - self.f.seek(offset) - self.fixOffsets( - count, isShort=(field_size == 2), isLong=(field_size == 4) - ) - self.f.seek(cur_pos) - - offset = cur_pos = None - - elif is_local: - # skip the locally stored value that is not an offset - self.f.seek(4, os.SEEK_CUR) - - def fixOffsets(self, count, isShort=False, isLong=False): - if not isShort and not isLong: - msg = "offset is neither short nor long" - raise RuntimeError(msg) - - for i in range(count): - offset = self.readShort() if isShort else self.readLong() - offset += self.offsetOfNewPage - if isShort and offset >= 65536: - # offset is now too large - we must convert shorts to longs - if count != 1: - msg = "not implemented" - raise RuntimeError(msg) # XXX TODO - - # simple case - the offset is just one and therefore it is - # local (not referenced with another offset) - self.rewriteLastShortToLong(offset) - self.f.seek(-10, os.SEEK_CUR) - self.writeShort(TiffTags.LONG) # rewrite the type to LONG - self.f.seek(8, os.SEEK_CUR) - elif isShort: - self.rewriteLastShort(offset) - else: - self.rewriteLastLong(offset) - - -def _save_all(im, fp, filename): - encoderinfo = im.encoderinfo.copy() - encoderconfig = im.encoderconfig - append_images = list(encoderinfo.get("append_images", [])) - if not hasattr(im, "n_frames") and not append_images: - return _save(im, fp, filename) - - cur_idx = im.tell() - try: - with AppendingTiffWriter(fp) as tf: - for ims in [im] + append_images: - ims.encoderinfo = encoderinfo - ims.encoderconfig = encoderconfig - if not hasattr(ims, "n_frames"): - nfr = 1 - else: - nfr = ims.n_frames - - for idx in range(nfr): - ims.seek(idx) - ims.load() - _save(ims, tf, filename) - tf.newFrame() - finally: - im.seek(cur_idx) - - -# -# -------------------------------------------------------------------- -# Register - -Image.register_open(TiffImageFile.format, TiffImageFile, _accept) -Image.register_save(TiffImageFile.format, _save) -Image.register_save_all(TiffImageFile.format, _save_all) - -Image.register_extensions(TiffImageFile.format, [".tif", ".tiff"]) - -Image.register_mime(TiffImageFile.format, "image/tiff") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py deleted file mode 100644 index 08d7e035315677856fd2cd0be2044689b57619bf..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py +++ /dev/null @@ -1,3 +0,0 @@ -from starlette.middleware.trustedhost import ( # noqa - TrustedHostMiddleware as TrustedHostMiddleware, -) diff --git a/spaces/jonathanmg96/TFG-YOLOP/README.md b/spaces/jonathanmg96/TFG-YOLOP/README.md deleted file mode 100644 index 864e2aadc977bc304042a4133657cd6429a50cb8..0000000000000000000000000000000000000000 --- a/spaces/jonathanmg96/TFG-YOLOP/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TFG YOLOP -emoji: 🔥 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/modules/seanet.py b/spaces/jordonpeter01/MusicGen/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- 
a/spaces/jordonpeter01/MusicGen/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. 
- dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. 
- ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/jsxyhelu/skyseg/unet/u2net_refactor.py b/spaces/jsxyhelu/skyseg/unet/u2net_refactor.py deleted file mode 100644 index e668de2c2bc67cbef280eaa5f789c762c4745fa4..0000000000000000000000000000000000000000 --- a/spaces/jsxyhelu/skyseg/unet/u2net_refactor.py +++ /dev/null @@ -1,168 +0,0 @@ -import torch -import torch.nn as nn - -import math - -__all__ = ['U2NET_full', 'U2NET_lite'] - - -def _upsample_like(x, size): - return nn.Upsample(size=size, mode='bilinear', align_corners=False)(x) - - -def _size_map(x, height): - # {height: size} for Upsample - size = list(x.shape[-2:]) - sizes = {} - for h in range(1, height): - sizes[h] = size - size = [math.ceil(w / 2) for w in size] - return sizes - - -class REBNCONV(nn.Module): - def __init__(self, in_ch=3, out_ch=3, dilate=1): - super(REBNCONV, self).__init__() - - self.conv_s1 = nn.Conv2d(in_ch, out_ch, 3, padding=1 * dilate, dilation=1 * dilate) - self.bn_s1 = nn.BatchNorm2d(out_ch) - self.relu_s1 = nn.ReLU(inplace=True) - - def forward(self, x): - return self.relu_s1(self.bn_s1(self.conv_s1(x))) - - -class RSU(nn.Module): - def __init__(self, name, height, in_ch, mid_ch, out_ch, dilated=False): - super(RSU, self).__init__() - self.name = name - self.height = height - self.dilated = dilated - self._make_layers(height, in_ch, mid_ch, out_ch, dilated) - - def forward(self, x): - sizes = _size_map(x, self.height) - x = self.rebnconvin(x) - - # U-Net like symmetric encoder-decoder structure - def unet(x, height=1): - if height < self.height: - x1 = getattr(self, f'rebnconv{height}')(x) - if not self.dilated and height < self.height - 1: - x2 = unet(getattr(self, 'downsample')(x1), height + 1) - else: - x2 = unet(x1, height + 1) - - x = getattr(self, 
f'rebnconv{height}d')(torch.cat((x2, x1), 1)) - return _upsample_like(x, sizes[height - 1]) if not self.dilated and height > 1 else x - else: - return getattr(self, f'rebnconv{height}')(x) - - return x + unet(x) - - def _make_layers(self, height, in_ch, mid_ch, out_ch, dilated=False): - self.add_module('rebnconvin', REBNCONV(in_ch, out_ch)) - self.add_module('downsample', nn.MaxPool2d(2, stride=2, ceil_mode=True)) - - self.add_module(f'rebnconv1', REBNCONV(out_ch, mid_ch)) - self.add_module(f'rebnconv1d', REBNCONV(mid_ch * 2, out_ch)) - - for i in range(2, height): - dilate = 1 if not dilated else 2 ** (i - 1) - self.add_module(f'rebnconv{i}', REBNCONV(mid_ch, mid_ch, dilate=dilate)) - self.add_module(f'rebnconv{i}d', REBNCONV(mid_ch * 2, mid_ch, dilate=dilate)) - - dilate = 2 if not dilated else 2 ** (height - 1) - self.add_module(f'rebnconv{height}', REBNCONV(mid_ch, mid_ch, dilate=dilate)) - - -class U2NET(nn.Module): - def __init__(self, cfgs, out_ch): - super(U2NET, self).__init__() - self.out_ch = out_ch - self._make_layers(cfgs) - - def forward(self, x): - sizes = _size_map(x, self.height) - maps = [] # storage for maps - - # side saliency map - def unet(x, height=1): - if height < 6: - x1 = getattr(self, f'stage{height}')(x) - x2 = unet(getattr(self, 'downsample')(x1), height + 1) - x = getattr(self, f'stage{height}d')(torch.cat((x2, x1), 1)) - side(x, height) - return _upsample_like(x, sizes[height - 1]) if height > 1 else x - else: - x = getattr(self, f'stage{height}')(x) - side(x, height) - return _upsample_like(x, sizes[height - 1]) - - def side(x, h): - # side output saliency map (before sigmoid) - x = getattr(self, f'side{h}')(x) - x = _upsample_like(x, sizes[1]) - maps.append(x) - - def fuse(): - # fuse saliency probability maps - maps.reverse() - x = torch.cat(maps, 1) - x = getattr(self, 'outconv')(x) - maps.insert(0, x) - return [torch.sigmoid(x) for x in maps] - - unet(x) - maps = fuse() - return maps - - def _make_layers(self, cfgs): - self.height = int((len(cfgs) + 1) / 2) - self.add_module('downsample', nn.MaxPool2d(2, stride=2, ceil_mode=True)) - for k, v in cfgs.items(): - # build rsu block - self.add_module(k, RSU(v[0], *v[1])) - if v[2] > 0: - # build side layer - self.add_module(f'side{v[0][-1]}', nn.Conv2d(v[2], self.out_ch, 3, padding=1)) - # build fuse layer - self.add_module('outconv', nn.Conv2d(int(self.height * self.out_ch), self.out_ch, 1)) - - -def U2NET_full(): - full = { - # cfgs for building RSUs and sides - # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]} - 'stage1': ['En_1', (7, 3, 32, 64), -1], - 'stage2': ['En_2', (6, 64, 32, 128), -1], - 'stage3': ['En_3', (5, 128, 64, 256), -1], - 'stage4': ['En_4', (4, 256, 128, 512), -1], - 'stage5': ['En_5', (4, 512, 256, 512, True), -1], - 'stage6': ['En_6', (4, 512, 256, 512, True), 512], - 'stage5d': ['De_5', (4, 1024, 256, 512, True), 512], - 'stage4d': ['De_4', (4, 1024, 128, 256), 256], - 'stage3d': ['De_3', (5, 512, 64, 128), 128], - 'stage2d': ['De_2', (6, 256, 32, 64), 64], - 'stage1d': ['De_1', (7, 128, 16, 64), 64], - } - return U2NET(cfgs=full, out_ch=1) - - -def U2NET_lite(): - lite = { - # cfgs for building RSUs and sides - # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]} - 'stage1': ['En_1', (7, 3, 16, 64), -1], - 'stage2': ['En_2', (6, 64, 16, 64), -1], - 'stage3': ['En_3', (5, 64, 16, 64), -1], - 'stage4': ['En_4', (4, 64, 16, 64), -1], - 'stage5': ['En_5', (4, 64, 16, 64, True), -1], - 'stage6': ['En_6', (4, 64, 16, 64, True), 64], - 'stage5d': 
['De_5', (4, 128, 16, 64, True), 64], - 'stage4d': ['De_4', (4, 128, 16, 64), 64], - 'stage3d': ['De_3', (5, 128, 16, 64), 64], - 'stage2d': ['De_2', (6, 128, 16, 64), 64], - 'stage1d': ['De_1', (7, 128, 16, 64), 64], - } - return U2NET(cfgs=lite, out_ch=1) diff --git a/spaces/jvcanavarro/traits-prediction/README.md b/spaces/jvcanavarro/traits-prediction/README.md deleted file mode 100644 index a290f00b43a271689fdd536bd3edf945a66dd27b..0000000000000000000000000000000000000000 --- a/spaces/jvcanavarro/traits-prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Trait Recognition -emoji: 👁 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/uix/lit_sidebar.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/uix/lit_sidebar.py deleted file mode 100644 index 4dcf6910f3c0e1688ce1d00ee4146df1ba81f536..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/uix/lit_sidebar.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -import importlib -from uix import lit_packages - -from uix.pages import lit_home, lit_about, lit_diagnosis -from uix.pages import lit_qaConfigCheck - -m_kblnTraceOn=False - - -#--- alt define sidebar pages -m_aryPages = { - "Home": lit_home, #--- TODO: update - "Diagnosis: Single Tile": lit_diagnosis, - #"QA: File Check": lit_qaConfigCheck, - "About": lit_about -} - - -#--- define module-level vars -m_aryModNames = lit_packages.packages() -m_aryDescr = [] -m_aryMods = [] - - -def init(): - #--- upper panel - with st.sidebar: - kstrUrl_image = "bin/images/logo_omdena_saudi.png" - st.sidebar.image(kstrUrl_image, width=200) - - #--- get radio selection - strKey = st.sidebar.radio("rdoPageSel", list(m_aryPages.keys()), label_visibility="hidden") - pagSel = m_aryPages[strKey] - writePage(pagSel) - - -def writePage(uixFile): - #--- writes out the page for the selected combo - - # _reload_module(page) - uixFile.run() - diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/pre.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/pre.py deleted file mode 100644 index 17fd0f710153bfb71b717678998a853e364c8cd8..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/pre.py +++ /dev/null @@ -1,76 +0,0 @@ -from synthesizer.preprocess import create_embeddings -from utils.argutils import print_args -from pathlib import Path -import argparse - -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - -recognized_datasets = [ - "aidatatang_200zh", - "magicdata", - "aishell3", - "data_aishell" -] - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. 
Defaults to /SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=1, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted. ") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("-d", "--dataset", type=str, default="aidatatang_200zh", help=\ - "Name of the dataset to process, allowing values: magicdata, aidatatang_200zh, aishell3, data_aishell.") - parser.add_argument("-e", "--encoder_model_fpath", type=Path, default="encoder/saved_models/pretrained.pt", help=\ - "Path your trained encoder model.") - parser.add_argument("-ne", "--n_processes_embed", type=int, default=1, help=\ - "Number of processes in parallel.An encoder is created for each, so you may need to lower " - "this value on GPUs with low memory. Set it to 1 if CUDA is unhappy") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - assert args.dataset in recognized_datasets, 'is not supported, please vote for it in https://github.com/babysor/MockingBird/issues/10' - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. 
If installation fails, " - "use --no_trim to disable this error message.") - encoder_model_fpath = args.encoder_model_fpath - del args.no_trim, args.encoder_model_fpath - - args.hparams = hparams.parse(args.hparams) - n_processes_embed = args.n_processes_embed - del args.n_processes_embed - preprocess_dataset(**vars(args)) - - create_embeddings(synthesizer_root=args.out_dir, n_processes=n_processes_embed, encoder_model_fpath=encoder_model_fpath) diff --git a/spaces/koyomimi/Real-CUGAN/upcunet_v3.py b/spaces/koyomimi/Real-CUGAN/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/koyomimi/Real-CUGAN/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None - - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = 
self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = 
self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - 
se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_h = (h0 - 1) // 4 * 4 + 4 # 能被4整除 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_w = (w0 - 1) // 4 * 4 + 4 # 能被4整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - 
elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - 
res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = 
(opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4] - res += F.interpolate(x00, scale_factor=4, mode='nearest') - return res # - - -class RealWaifuUpScaler(object): - def __init__(self, scale, weight_path, half, device): - weight = torch.load(weight_path, map_location="cpu") - self.model = eval("UpCunet%sx" % scale)() - if (half == True): - self.model = self.model.half().to(device) - else: - self.model = self.model.to(device) - self.model.load_state_dict(weight, strict=True) - self.model.eval() - self.half = half - self.device = device - - def np2tensor(self, np_frame): - if (self.half == False): - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255 - else: - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255 - - def tensor2np(self, tensor): - if (self.half == False): - return ( - np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0))) - else: - return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), - (1, 2, 0))) - - def __call__(self, 
frame, tile_mode): - with torch.no_grad(): - tensor = self.np2tensor(frame) - result = self.tensor2np(self.model(tensor, tile_mode)) - return result - - -if __name__ == "__main__": - ###########inference_img - import time, cv2, sys - from time import time as ttime - - for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3), - ("weights_v3/up4x-latest-denoise3x.pth", 4)]: - for tile_mode in [0, 1, 2, 3, 4]: - upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") - input_dir = "%s/input_dir1" % root_path - output_dir = "%s/opt-dir-all-test" % root_path - os.makedirs(output_dir, exist_ok=True) - for name in os.listdir(input_dir): - print(name) - tmp = name.split(".") - inp_path = os.path.join(input_dir, name) - suffix = tmp[-1] - prefix = ".".join(tmp[:-1]) - tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - print(inp_path, tmp_path) - # 支持中文路径 - # os.link(inp_path, tmp_path)#win用硬链接 - os.symlink(inp_path, tmp_path) # linux用软链接 - frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]] - t0 = ttime() - result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1] - t1 = ttime() - print(prefix, "done", t1 - t0) - tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - cv2.imwrite(tmp_opt_path, result) - n = 0 - while (1): - if (n == 0): - suffix = "_%sx_tile%s.png" % (scale, tile_mode) - else: - suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) # - if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False): - break - else: - n += 1 - final_opt_path = os.path.join(output_dir, prefix + suffix) - os.rename(tmp_opt_path, final_opt_path) - os.remove(tmp_path) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageCms.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageCms.py deleted file mode 100644 index f87849680df169869db8c9378d7da546583635bd..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageCms.py +++ /dev/null @@ -1,1026 +0,0 @@ -# The Python Imaging Library. -# $Id$ - -# Optional color management support, based on Kevin Cazabon's PyCMS -# library. - -# History: - -# 2009-03-08 fl Added to PIL. - -# Copyright (C) 2002-2003 Kevin Cazabon -# Copyright (c) 2009 by Fredrik Lundh -# Copyright (c) 2013 by Eric Soroos - -# See the README file for information on usage and redistribution. See -# below for the original description. - -import sys -from enum import IntEnum - -from PIL import Image - -from ._deprecate import deprecate - -try: - from PIL import _imagingcms -except ImportError as ex: - # Allow error import for doc purposes, but error out when accessing - # anything in core. - from ._util import DeferredError - - _imagingcms = DeferredError(ex) - -DESCRIPTION = """ -pyCMS - - a Python / PIL interface to the littleCMS ICC Color Management System - Copyright (C) 2002-2003 Kevin Cazabon - kevin@cazabon.com - https://www.cazabon.com - - pyCMS home page: https://www.cazabon.com/pyCMS - littleCMS home page: https://www.littlecms.com - (littleCMS is Copyright (C) 1998-2001 Marti Maria) - - Originally released under LGPL. 
Graciously donated to PIL in - March 2009, for distribution under the standard PIL license - - The pyCMS.py module provides a "clean" interface between Python/PIL and - pyCMSdll, taking care of some of the more complex handling of the direct - pyCMSdll functions, as well as error-checking and making sure that all - relevant data is kept together. - - While it is possible to call pyCMSdll functions directly, it's not highly - recommended. - - Version History: - - 1.0.0 pil Oct 2013 Port to LCMS 2. - - 0.1.0 pil mod March 10, 2009 - - Renamed display profile to proof profile. The proof - profile is the profile of the device that is being - simulated, not the profile of the device which is - actually used to display/print the final simulation - (that'd be the output profile) - also see LCMSAPI.txt - input colorspace -> using 'renderingIntent' -> proof - colorspace -> using 'proofRenderingIntent' -> output - colorspace - - Added LCMS FLAGS support. - Added FLAGS["SOFTPROOFING"] as default flag for - buildProofTransform (otherwise the proof profile/intent - would be ignored). - - 0.1.0 pil March 2009 - added to PIL, as PIL.ImageCms - - 0.0.2 alpha Jan 6, 2002 - - Added try/except statements around type() checks of - potential CObjects... Python won't let you use type() - on them, and raises a TypeError (stupid, if you ask - me!) - - Added buildProofTransformFromOpenProfiles() function. - Additional fixes in DLL, see DLL code for details. - - 0.0.1 alpha first public release, Dec. 26, 2002 - - Known to-do list with current version (of Python interface, not pyCMSdll): - - none - -""" - -VERSION = "1.0.0 pil" - -# --------------------------------------------------------------------. - -core = _imagingcms - -# -# intent/direction values - - -class Intent(IntEnum): - PERCEPTUAL = 0 - RELATIVE_COLORIMETRIC = 1 - SATURATION = 2 - ABSOLUTE_COLORIMETRIC = 3 - - -class Direction(IntEnum): - INPUT = 0 - OUTPUT = 1 - PROOF = 2 - - -def __getattr__(name): - for enum, prefix in {Intent: "INTENT_", Direction: "DIRECTION_"}.items(): - if name.startswith(prefix): - name = name[len(prefix) :] - if name in enum.__members__: - deprecate(f"{prefix}{name}", 10, f"{enum.__name__}.{name}") - return enum[name] - msg = f"module '{__name__}' has no attribute '{name}'" - raise AttributeError(msg) - - -# -# flags - -FLAGS = { - "MATRIXINPUT": 1, - "MATRIXOUTPUT": 2, - "MATRIXONLY": (1 | 2), - "NOWHITEONWHITEFIXUP": 4, # Don't hot fix scum dot - # Don't create prelinearization tables on precalculated transforms - # (internal use): - "NOPRELINEARIZATION": 16, - "GUESSDEVICECLASS": 32, # Guess device class (for transform2devicelink) - "NOTCACHE": 64, # Inhibit 1-pixel cache - "NOTPRECALC": 256, - "NULLTRANSFORM": 512, # Don't transform anyway - "HIGHRESPRECALC": 1024, # Use more memory to give better accuracy - "LOWRESPRECALC": 2048, # Use less memory to minimize resources - "WHITEBLACKCOMPENSATION": 8192, - "BLACKPOINTCOMPENSATION": 8192, - "GAMUTCHECK": 4096, # Out of Gamut alarm - "SOFTPROOFING": 16384, # Do softproofing - "PRESERVEBLACK": 32768, # Black preservation - "NODEFAULTRESOURCEDEF": 16777216, # CRD special - "GRIDPOINTS": lambda n: (n & 0xFF) << 16, # Gridpoints -} - -_MAX_FLAG = 0 -for flag in FLAGS.values(): - if isinstance(flag, int): - _MAX_FLAG = _MAX_FLAG | flag - - -# --------------------------------------------------------------------. -# Experimental PIL-level API -# --------------------------------------------------------------------. - -## -# Profile. 
- - -class ImageCmsProfile: - def __init__(self, profile): - """ - :param profile: Either a string representing a filename, - a file like object containing a profile or a - low-level profile object - - """ - - if isinstance(profile, str): - if sys.platform == "win32": - profile_bytes_path = profile.encode() - try: - profile_bytes_path.decode("ascii") - except UnicodeDecodeError: - with open(profile, "rb") as f: - self._set(core.profile_frombytes(f.read())) - return - self._set(core.profile_open(profile), profile) - elif hasattr(profile, "read"): - self._set(core.profile_frombytes(profile.read())) - elif isinstance(profile, _imagingcms.CmsProfile): - self._set(profile) - else: - msg = "Invalid type for Profile" - raise TypeError(msg) - - def _set(self, profile, filename=None): - self.profile = profile - self.filename = filename - if profile: - self.product_name = None # profile.product_name - self.product_info = None # profile.product_info - else: - self.product_name = None - self.product_info = None - - def tobytes(self): - """ - Returns the profile in a format suitable for embedding in - saved images. - - :returns: a bytes object containing the ICC profile. - """ - - return core.profile_tobytes(self.profile) - - -class ImageCmsTransform(Image.ImagePointHandler): - - """ - Transform. This can be used with the procedural API, or with the standard - :py:func:`~PIL.Image.Image.point` method. - - Will return the output profile in the ``output.info['icc_profile']``. - """ - - def __init__( - self, - input, - output, - input_mode, - output_mode, - intent=Intent.PERCEPTUAL, - proof=None, - proof_intent=Intent.ABSOLUTE_COLORIMETRIC, - flags=0, - ): - if proof is None: - self.transform = core.buildTransform( - input.profile, output.profile, input_mode, output_mode, intent, flags - ) - else: - self.transform = core.buildProofTransform( - input.profile, - output.profile, - proof.profile, - input_mode, - output_mode, - intent, - proof_intent, - flags, - ) - # Note: inputMode and outputMode are for pyCMS compatibility only - self.input_mode = self.inputMode = input_mode - self.output_mode = self.outputMode = output_mode - - self.output_profile = output - - def point(self, im): - return self.apply(im) - - def apply(self, im, imOut=None): - im.load() - if imOut is None: - imOut = Image.new(self.output_mode, im.size, None) - self.transform.apply(im.im.id, imOut.im.id) - imOut.info["icc_profile"] = self.output_profile.tobytes() - return imOut - - def apply_in_place(self, im): - im.load() - if im.mode != self.output_mode: - msg = "mode mismatch" - raise ValueError(msg) # wrong output mode - self.transform.apply(im.im.id, im.im.id) - im.info["icc_profile"] = self.output_profile.tobytes() - return im - - -def get_display_profile(handle=None): - """ - (experimental) Fetches the profile for the current display device. - - :returns: ``None`` if the profile is not known. - """ - - if sys.platform != "win32": - return None - - from PIL import ImageWin - - if isinstance(handle, ImageWin.HDC): - profile = core.get_display_profile_win32(handle, 1) - else: - profile = core.get_display_profile_win32(handle or 0) - if profile is None: - return None - return ImageCmsProfile(profile) - - -# --------------------------------------------------------------------. -# pyCMS compatible layer -# --------------------------------------------------------------------. - - -class PyCMSError(Exception): - - """(pyCMS) Exception class. 
- This is used for all errors in the pyCMS API.""" - - pass - - -def profileToProfile( - im, - inputProfile, - outputProfile, - renderingIntent=Intent.PERCEPTUAL, - outputMode=None, - inPlace=False, - flags=0, -): - """ - (pyCMS) Applies an ICC transformation to a given image, mapping from - ``inputProfile`` to ``outputProfile``. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If ``inPlace`` is ``True`` and - ``outputMode != im.mode``, a :exc:`PyCMSError` will be raised. - If an error occurs during application of the profiles, - a :exc:`PyCMSError` will be raised. - If ``outputMode`` is not a mode supported by the ``outputProfile`` (or by pyCMS), - a :exc:`PyCMSError` will be raised. - - This function applies an ICC transformation to im from ``inputProfile``'s - color space to ``outputProfile``'s color space using the specified rendering - intent to decide how to handle out-of-gamut colors. - - ``outputMode`` can be used to specify that a color mode conversion is to - be done using these profiles, but the specified profiles must be able - to handle that mode. I.e., if converting im from RGB to CMYK using - profiles, the input profile must handle RGB data, and the output - profile must handle CMYK data. - - :param im: An open :py:class:`~PIL.Image.Image` object (i.e. Image.new(...) - or Image.open(...), etc.) - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this image, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - profile you wish to use for this image, or a profile object - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param outputMode: A valid PIL mode for the output image (i.e. "RGB", - "CMYK", etc.). Note: if rendering the image "inPlace", outputMode - MUST be the same mode as the input, or omitted completely. If - omitted, the outputMode will be the same as the mode of the input - image (im.mode) - :param inPlace: Boolean. If ``True``, the original image is modified in-place, - and ``None`` is returned. If ``False`` (default), a new - :py:class:`~PIL.Image.Image` object is returned with the transform applied. - :param flags: Integer (0-...) 
specifying additional flags - :returns: Either None or a new :py:class:`~PIL.Image.Image` object, depending on - the value of ``inPlace`` - :exception PyCMSError: - """ - - if outputMode is None: - outputMode = im.mode - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = f"flags must be an integer between 0 and {_MAX_FLAG}" - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - transform = ImageCmsTransform( - inputProfile, - outputProfile, - im.mode, - outputMode, - renderingIntent, - flags=flags, - ) - if inPlace: - transform.apply_in_place(im) - imOut = None - else: - imOut = transform.apply(im) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - return imOut - - -def getOpenProfile(profileFilename): - """ - (pyCMS) Opens an ICC profile file. - - The PyCMSProfile object can be passed back into pyCMS for use in creating - transforms and such (as in ImageCms.buildTransformFromOpenProfiles()). - - If ``profileFilename`` is not a valid filename for an ICC profile, - a :exc:`PyCMSError` will be raised. - - :param profileFilename: String, as a valid filename path to the ICC profile - you wish to open, or a file-like object. - :returns: A CmsProfile class object. - :exception PyCMSError: - """ - - try: - return ImageCmsProfile(profileFilename) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def buildTransform( - inputProfile, - outputProfile, - inMode, - outMode, - renderingIntent=Intent.PERCEPTUAL, - flags=0, -): - """ - (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the - ``outputProfile``. Use applyTransform to apply the transform to a given - image. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If an error occurs during creation - of the transform, a :exc:`PyCMSError` will be raised. - - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. - - This function builds and returns an ICC transform from the ``inputProfile`` - to the ``outputProfile`` using the ``renderingIntent`` to determine what to do - with out-of-gamut colors. It will ONLY work for converting images that - are in ``inMode`` to images that are in ``outMode`` color format (PIL mode, - i.e. "RGB", "RGBA", "CMYK", etc.). - - Building the transform is a fair part of the overhead in - ImageCms.profileToProfile(), so if you're planning on converting multiple - images using the same input/output settings, this can save you time. - Once you have a transform object, it can be used with - ImageCms.applyProfile() to convert images without the need to re-compute - the lookup table for the transform. - - The reason pyCMS returns a class object rather than a handle directly - to the transform is that it needs to keep track of the PIL input/output - modes that the transform is meant for. These attributes are stored in - the ``inMode`` and ``outMode`` attributes of the object (which can be - manually overridden if you really want to, but I don't know of any - time that would be of use, or would even work). 
- - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this transform, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - profile you wish to use for this transform, or a profile object - :param inMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param outMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param flags: Integer (0-...) specifying additional flags - :returns: A CmsTransform class object. - :exception PyCMSError: - """ - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = "flags must be an integer between 0 and %s" + _MAX_FLAG - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - return ImageCmsTransform( - inputProfile, outputProfile, inMode, outMode, renderingIntent, flags=flags - ) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def buildProofTransform( - inputProfile, - outputProfile, - proofProfile, - inMode, - outMode, - renderingIntent=Intent.PERCEPTUAL, - proofRenderingIntent=Intent.ABSOLUTE_COLORIMETRIC, - flags=FLAGS["SOFTPROOFING"], -): - """ - (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the - ``outputProfile``, but tries to simulate the result that would be - obtained on the ``proofProfile`` device. - - If the input, output, or proof profiles specified are not valid - filenames, a :exc:`PyCMSError` will be raised. - - If an error occurs during creation of the transform, - a :exc:`PyCMSError` will be raised. - - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. - - This function builds and returns an ICC transform from the ``inputProfile`` - to the ``outputProfile``, but tries to simulate the result that would be - obtained on the ``proofProfile`` device using ``renderingIntent`` and - ``proofRenderingIntent`` to determine what to do with out-of-gamut - colors. This is known as "soft-proofing". It will ONLY work for - converting images that are in ``inMode`` to images that are in outMode - color format (PIL mode, i.e. "RGB", "RGBA", "CMYK", etc.). - - Usage of the resulting transform object is exactly the same as with - ImageCms.buildTransform(). - - Proof profiling is generally used when using an output device to get a - good idea of what the final printed/displayed image would look like on - the ``proofProfile`` device when it's quicker and easier to use the - output device for judging color. 
Generally, this means that the - output device is a monitor, or a dye-sub printer (etc.), and the simulated - device is something more expensive, complicated, or time consuming - (making it difficult to make a real print for color judgement purposes). - - Soft-proofing basically functions by adjusting the colors on the - output device to match the colors of the device being simulated. However, - when the simulated device has a much wider gamut than the output - device, you may obtain marginal results. - - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this transform, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - (monitor, usually) profile you wish to use for this transform, or a - profile object - :param proofProfile: String, as a valid filename path to the ICC proof - profile you wish to use for this transform, or a profile object - :param inMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param outMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the input->proof (simulated) transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param proofRenderingIntent: Integer (0-3) specifying the rendering intent - you wish to use for proof->output transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param flags: Integer (0-...) specifying additional flags - :returns: A CmsTransform class object. - :exception PyCMSError: - """ - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = "flags must be an integer between 0 and %s" + _MAX_FLAG - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - if not isinstance(proofProfile, ImageCmsProfile): - proofProfile = ImageCmsProfile(proofProfile) - return ImageCmsTransform( - inputProfile, - outputProfile, - inMode, - outMode, - renderingIntent, - proofProfile, - proofRenderingIntent, - flags, - ) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -buildTransformFromOpenProfiles = buildTransform -buildProofTransformFromOpenProfiles = buildProofTransform - - -def applyTransform(im, transform, inPlace=False): - """ - (pyCMS) Applies a transform to a given image. - - If ``im.mode != transform.inMode``, a :exc:`PyCMSError` is raised. - - If ``inPlace`` is ``True`` and ``transform.inMode != transform.outMode``, a - :exc:`PyCMSError` is raised. - - If ``im.mode``, ``transform.inMode`` or ``transform.outMode`` is not - supported by pyCMSdll or the profiles you used for the transform, a - :exc:`PyCMSError` is raised. 
- - If an error occurs while the transform is being applied, - a :exc:`PyCMSError` is raised. - - This function applies a pre-calculated transform (from - ImageCms.buildTransform() or ImageCms.buildTransformFromOpenProfiles()) - to an image. The transform can be used for multiple images, saving - considerable calculation time if doing the same conversion multiple times. - - If you want to modify im in-place instead of receiving a new image as - the return value, set ``inPlace`` to ``True``. This can only be done if - ``transform.inMode`` and ``transform.outMode`` are the same, because we can't - change the mode in-place (the buffer sizes for some modes are - different). The default behavior is to return a new :py:class:`~PIL.Image.Image` - object of the same dimensions in mode ``transform.outMode``. - - :param im: An :py:class:`~PIL.Image.Image` object, and im.mode must be the same - as the ``inMode`` supported by the transform. - :param transform: A valid CmsTransform class object - :param inPlace: Bool. If ``True``, ``im`` is modified in place and ``None`` is - returned, if ``False``, a new :py:class:`~PIL.Image.Image` object with the - transform applied is returned (and ``im`` is not changed). The default is - ``False``. - :returns: Either ``None``, or a new :py:class:`~PIL.Image.Image` object, - depending on the value of ``inPlace``. The profile will be returned in - the image's ``info['icc_profile']``. - :exception PyCMSError: - """ - - try: - if inPlace: - transform.apply_in_place(im) - imOut = None - else: - imOut = transform.apply(im) - except (TypeError, ValueError) as v: - raise PyCMSError(v) from v - - return imOut - - -def createProfile(colorSpace, colorTemp=-1): - """ - (pyCMS) Creates a profile. - - If colorSpace not in ``["LAB", "XYZ", "sRGB"]``, - a :exc:`PyCMSError` is raised. - - If using LAB and ``colorTemp`` is not a positive integer, - a :exc:`PyCMSError` is raised. - - If an error occurs while creating the profile, - a :exc:`PyCMSError` is raised. - - Use this function to create common profiles on-the-fly instead of - having to supply a profile on disk and knowing the path to it. It - returns a normal CmsProfile object that can be passed to - ImageCms.buildTransformFromOpenProfiles() to create a transform to apply - to images. - - :param colorSpace: String, the color space of the profile you wish to - create. - Currently only "LAB", "XYZ", and "sRGB" are supported. - :param colorTemp: Positive integer for the white point for the profile, in - degrees Kelvin (i.e. 5000, 6500, 9600, etc.). The default is for D50 - illuminant if omitted (5000k). colorTemp is ONLY applied to LAB - profiles, and is ignored for XYZ and sRGB. - :returns: A CmsProfile class object - :exception PyCMSError: - """ - - if colorSpace not in ["LAB", "XYZ", "sRGB"]: - msg = ( - f"Color space not supported for on-the-fly profile creation ({colorSpace})" - ) - raise PyCMSError(msg) - - if colorSpace == "LAB": - try: - colorTemp = float(colorTemp) - except (TypeError, ValueError) as e: - msg = f'Color temperature must be numeric, "{colorTemp}" not valid' - raise PyCMSError(msg) from e - - try: - return core.createProfile(colorSpace, colorTemp) - except (TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileName(profile): - """ - - (pyCMS) Gets the internal product name for the given profile. 
- - If ``profile`` isn't a valid CmsProfile object or filename to a profile, - a :exc:`PyCMSError` is raised If an error occurs while trying - to obtain the name tag, a :exc:`PyCMSError` is raised. - - Use this function to obtain the INTERNAL name of the profile (stored - in an ICC tag in the profile itself), usually the one used when the - profile was originally created. Sometimes this tag also contains - additional information supplied by the creator. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal name of the profile as stored - in an ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # do it in python, not c. - # // name was "%s - %s" (model, manufacturer) || Description , - # // but if the Model and Manufacturer were the same or the model - # // was long, Just the model, in 1.x - model = profile.profile.model - manufacturer = profile.profile.manufacturer - - if not (model or manufacturer): - return (profile.profile.profile_description or "") + "\n" - if not manufacturer or len(model) > 30: - return model + "\n" - return f"{model} - {manufacturer}\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileInfo(profile): - """ - (pyCMS) Gets the internal product information for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, - a :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the info tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - info tag. This often contains details about the profile, and how it - was created, as supplied by the creator. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # add an extra newline to preserve pyCMS compatibility - # Python, not C. the white point bits weren't working well, - # so skipping. - # info was description \r\n\r\n copyright \r\n\r\n K007 tag \r\n\r\n whitepoint - description = profile.profile.profile_description - cpright = profile.profile.copyright - arr = [] - for elt in (description, cpright): - if elt: - arr.append(elt) - return "\r\n\r\n".join(arr) + "\r\n\r\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileCopyright(profile): - """ - (pyCMS) Gets the copyright for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the copyright tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - copyright tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. 
- :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.copyright or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileManufacturer(profile): - """ - (pyCMS) Gets the manufacturer for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the manufacturer tag, a - :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - manufacturer tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.manufacturer or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileModel(profile): - """ - (pyCMS) Gets the model for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the model tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - model tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.model or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileDescription(profile): - """ - (pyCMS) Gets the description for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the description tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - description tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in an - ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.profile_description or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getDefaultIntent(profile): - """ - (pyCMS) Gets the default intent name for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the default intent, a - :exc:`PyCMSError` is raised. - - Use this function to determine the default (and usually best optimized) - rendering intent for this profile. 
Most profiles support multiple - rendering intents, but are intended mostly for one type of conversion. - If you wish to use a different intent than returned, use - ImageCms.isIntentSupported() to verify it will work first. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: Integer 0-3 specifying the default rendering intent for this - profile. - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return profile.profile.rendering_intent - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def isIntentSupported(profile, intent, direction): - """ - (pyCMS) Checks if a given intent is supported. - - Use this function to verify that you can use your desired - ``intent`` with ``profile``, and that ``profile`` can be used for the - input/output/proof profile as you desire. - - Some profiles are created specifically for one "direction", can cannot - be used for others. Some profiles can only be used for certain - rendering intents, so it's best to either verify this before trying - to create a transform with them (using this function), or catch the - potential :exc:`PyCMSError` that will occur if they don't - support the modes you select. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :param intent: Integer (0-3) specifying the rendering intent you wish to - use with this profile - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param direction: Integer specifying if the profile is to be used for - input, output, or proof - - INPUT = 0 (or use ImageCms.Direction.INPUT) - OUTPUT = 1 (or use ImageCms.Direction.OUTPUT) - PROOF = 2 (or use ImageCms.Direction.PROOF) - - :returns: 1 if the intent/direction are supported, -1 if they are not. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # FIXME: I get different results for the same data w. different - # compilers. Bug in LittleCMS or in the binding? - if profile.profile.is_intent_supported(intent, direction): - return 1 - else: - return -1 - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def versions(): - """ - (pyCMS) Fetches versions. 
- """ - - return VERSION, core.littlecms_version, sys.version.split()[0], Image.__version__ diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/main_train_usrnet.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/main_train_usrnet.py deleted file mode 100644 index b3d345999c1a54070a0303aa3d13e0501ecb462a..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/main_train_usrnet.py +++ /dev/null @@ -1,230 +0,0 @@ -import os.path -import math -import argparse -import time -import random -import numpy as np -from collections import OrderedDict -import logging -from torch.utils.data import DataLoader -import torch - -from utils import utils_logger -from utils import utils_image as util -from utils import utils_option as option -from utils import utils_sisr as sisr - -from data.select_dataset import define_Dataset -from models.select_model import define_Model - - -''' -# -------------------------------------------- -# training code for USRNet -# -------------------------------------------- -# Kai Zhang (cskaizhang@gmail.com) -# github: https://github.com/cszn/KAIR -# https://github.com/cszn/USRNet -# -# Reference: -@inproceedings{zhang2020deep, - title={Deep unfolding network for image super-resolution}, - author={Zhang, Kai and Van Gool, Luc and Timofte, Radu}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3217--3226}, - year={2020} -} -# -------------------------------------------- -''' - - -def main(json_path='options/train_usrnet.json'): - - ''' - # ---------------------------------------- - # Step--1 (prepare opt) - # ---------------------------------------- - ''' - - parser = argparse.ArgumentParser() - parser.add_argument('-opt', type=str, default=json_path, help='Path to option JSON file.') - - opt = option.parse(parser.parse_args().opt, is_train=True) - util.mkdirs((path for key, path in opt['path'].items() if 'pretrained' not in key)) - - # ---------------------------------------- - # update opt - # ---------------------------------------- - # -->-->-->-->-->-->-->-->-->-->-->-->-->- - init_iter, init_path_G = option.find_last_checkpoint(opt['path']['models'], net_type='G') - opt['path']['pretrained_netG'] = init_path_G - current_step = init_iter - - border = opt['scale'] - # --<--<--<--<--<--<--<--<--<--<--<--<--<- - - # ---------------------------------------- - # save opt to a '../option.json' file - # ---------------------------------------- - option.save(opt) - - # ---------------------------------------- - # return None for missing key - # ---------------------------------------- - opt = option.dict_to_nonedict(opt) - - # ---------------------------------------- - # configure logger - # ---------------------------------------- - logger_name = 'train' - utils_logger.logger_info(logger_name, os.path.join(opt['path']['log'], logger_name+'.log')) - logger = logging.getLogger(logger_name) - logger.info(option.dict2str(opt)) - - - # ---------------------------------------- - # seed - # ---------------------------------------- - seed = opt['train']['manual_seed'] - if seed is None: - seed = random.randint(1, 10000) - logger.info('Random seed: {}'.format(seed)) - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - ''' - # ---------------------------------------- - # Step--2 (creat dataloader) - # ---------------------------------------- - ''' - - # ---------------------------------------- - # 1) create_dataset - # 2) creat_dataloader for train and test - # 
---------------------------------------- - for phase, dataset_opt in opt['datasets'].items(): - if phase == 'train': - train_set = define_Dataset(dataset_opt) - train_size = int(math.ceil(len(train_set) / dataset_opt['dataloader_batch_size'])) - logger.info('Number of train images: {:,d}, iters: {:,d}'.format(len(train_set), train_size)) - train_loader = DataLoader(train_set, - batch_size=dataset_opt['dataloader_batch_size'], - shuffle=dataset_opt['dataloader_shuffle'], - num_workers=dataset_opt['dataloader_num_workers'], - drop_last=True, - pin_memory=True) - elif phase == 'test': - test_set = define_Dataset(dataset_opt) - test_loader = DataLoader(test_set, batch_size=1, - shuffle=False, num_workers=1, - drop_last=False, pin_memory=True) - else: - raise NotImplementedError("Phase [%s] is not recognized." % phase) - - ''' - # ---------------------------------------- - # Step--3 (initialize model) - # ---------------------------------------- - ''' - - model = define_Model(opt) - - logger.info(model.info_network()) - model.init_train() - logger.info(model.info_params()) - - ''' - # ---------------------------------------- - # Step--4 (main training) - # ---------------------------------------- - ''' - - for epoch in range(1000000): # keep running - for i, train_data in enumerate(train_loader): - - current_step += 1 - - # ------------------------------- - # 1) update learning rate - # ------------------------------- - model.update_learning_rate(current_step) - - # ------------------------------- - # 2) feed patch pairs - # ------------------------------- - model.feed_data(train_data) - - # ------------------------------- - # 3) optimize parameters - # ------------------------------- - model.optimize_parameters(current_step) - - # ------------------------------- - # 4) training information - # ------------------------------- - if current_step % opt['train']['checkpoint_print'] == 0: - logs = model.current_log() # such as loss - message = ' '.format(epoch, current_step, model.current_learning_rate()) - for k, v in logs.items(): # merge log information into message - message += '{:s}: {:.3e} '.format(k, v) - logger.info(message) - - # ------------------------------- - # 5) save model - # ------------------------------- - if current_step % opt['train']['checkpoint_save'] == 0: - logger.info('Saving the model.') - model.save(current_step) - - # ------------------------------- - # 6) testing - # ------------------------------- - if current_step % opt['train']['checkpoint_test'] == 0: - - avg_psnr = 0.0 - idx = 0 - - for test_data in test_loader: - idx += 1 - image_name_ext = os.path.basename(test_data['L_path'][0]) - img_name, ext = os.path.splitext(image_name_ext) - - img_dir = os.path.join(opt['path']['images'], img_name) - util.mkdir(img_dir) - - model.feed_data(test_data) - model.test() - - visuals = model.current_visuals() - E_img = util.tensor2uint(visuals['E']) - H_img = util.tensor2uint(visuals['H']) - - # ----------------------- - # save estimated image E - # ----------------------- - save_img_path = os.path.join(img_dir, '{:s}_{:d}.png'.format(img_name, current_step)) - util.imsave(E_img, save_img_path) - - # ----------------------- - # calculate PSNR - # ----------------------- - current_psnr = util.calculate_psnr(E_img, H_img, border=border) - - logger.info('{:->4d}--> {:>10s} | {:<4.2f}dB'.format(idx, image_name_ext, current_psnr)) - - avg_psnr += current_psnr - - avg_psnr = avg_psnr / idx - - # testing log - logger.info(' 0 and len(user_queue_map) >= MAX_QUEUE_SIZE: - print("Server is 
full") - await websocket.send_json({"status": "error", "message": "Server is full"}) - await websocket.close() - return - - try: - uid = str(uuid.uuid4()) - print(f"New user connected: {uid}") - await websocket.send_json( - {"status": "success", "message": "Connected", "userId": uid} - ) - user_queue_map[uid] = {"queue": asyncio.Queue()} - await websocket.send_json( - {"status": "start", "message": "Start Streaming", "userId": uid} - ) - await handle_websocket_data(websocket, uid) - except WebSocketDisconnect as e: - logging.error(f"WebSocket Error: {e}, {uid}") - traceback.print_exc() - finally: - print(f"User disconnected: {uid}") - queue_value = user_queue_map.pop(uid, None) - queue = queue_value.get("queue", None) - if queue: - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - - -@app.get("/queue_size") -async def get_queue_size(): - queue_size = len(user_queue_map) - return JSONResponse({"queue_size": queue_size}) - - -@app.get("/stream/{user_id}") -async def stream(user_id: uuid.UUID): - uid = str(user_id) - try: - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - - async def generate(): - last_prompt: str = None - prompt_embeds: torch.Tensor = None - while True: - data = await queue.get() - input_image = data["image"] - params = data["params"] - if input_image is None: - continue - # avoid recalculate prompt embeds - if last_prompt != params.prompt: - print("new prompt") - prompt_embeds = compel_proc(params.prompt) - last_prompt = params.prompt - - image = predict( - input_image, - params, - prompt_embeds, - ) - if image is None: - continue - frame_data = io.BytesIO() - image.save(frame_data, format="JPEG") - frame_data = frame_data.getvalue() - if frame_data is not None and len(frame_data) > 0: - yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n" - - await asyncio.sleep(1.0 / 120.0) - - return StreamingResponse( - generate(), media_type="multipart/x-mixed-replace;boundary=frame" - ) - except Exception as e: - logging.error(f"Streaming Error: {e}, {user_queue_map}") - traceback.print_exc() - return HTTPException(status_code=404, detail="User not found") - - -async def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID): - uid = str(user_id) - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - if not queue: - return HTTPException(status_code=404, detail="User not found") - last_time = time.time() - try: - while True: - data = await websocket.receive_bytes() - params = await websocket.receive_json() - params = InputParams(**params) - pil_image = Image.open(io.BytesIO(data)) - - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - await queue.put({"image": pil_image, "params": params}) - if TIMEOUT > 0 and time.time() - last_time > TIMEOUT: - await websocket.send_json( - { - "status": "timeout", - "message": "Your session has ended", - "userId": uid, - } - ) - await websocket.close() - return - - except Exception as e: - logging.error(f"Error: {e}") - traceback.print_exc() - - -@app.get("/", response_class=HTMLResponse) -async def root(): - return FileResponse("./static/controlnetlora.html") diff --git a/spaces/lazyboy450/RVCv2-Genshin/lib/infer_pack/models.py b/spaces/lazyboy450/RVCv2-Genshin/lib/infer_pack/models.py deleted file mode 100644 index 44c08d361bcb13b84b38dc29beff5cdaddad4ea2..0000000000000000000000000000000000000000 --- a/spaces/lazyboy450/RVCv2-Genshin/lib/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import 
math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = 
n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def 
remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in 
unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = 
self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - 
def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - 
upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - 
self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) 
- fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/leilevy/bingo/src/components/tone-selector.tsx b/spaces/leilevy/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
      -
      - 选择对话样式 -
      -
      -
        - { - ToneList.map(tone => ( -
      - onChange?.(tone.type)}> - -
      - )) - } -
      -
      -
      - ) -} diff --git a/spaces/lemon7/White-box-Cartoonization/wbc/cartoonize.py b/spaces/lemon7/White-box-Cartoonization/wbc/cartoonize.py deleted file mode 100644 index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000 --- a/spaces/lemon7/White-box-Cartoonization/wbc/cartoonize.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import cv2 -import numpy as np -import tensorflow as tf -import wbc.network as network -import wbc.guided_filter as guided_filter -from tqdm import tqdm - - -def resize_crop(image): - h, w, c = np.shape(image) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, int(720 * w / h) - image = cv2.resize(image, (w, h), - interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - image = image[:h, :w, :] - return image - - -def cartoonize(load_folder, save_folder, model_path): - print(model_path) - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(input_photo) - final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - - sess.run(tf.global_variables_initializer()) - saver.restore(sess, tf.train.latest_checkpoint(model_path)) - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = sess.run(final_out, feed_dict={input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -class Cartoonize: - def __init__(self, model_path): - print(model_path) - self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(self.input_photo) - self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - self.sess = tf.Session(config=config) - - self.sess.run(tf.global_variables_initializer()) - saver.restore(self.sess, tf.train.latest_checkpoint(model_path)) - - def run(self, load_folder, save_folder): - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - def run_sigle(self, load_path, save_path): - try: - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 
- 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -if __name__ == '__main__': - model_path = 'saved_models' - load_folder = 'test_images' - save_folder = 'cartoonized_images' - if not os.path.exists(save_folder): - os.mkdir(save_folder) - cartoonize(load_folder, save_folder, model_path) diff --git a/spaces/lewisliuX123/wechatglm_demo/channel/wechat/wechat_channel.py b/spaces/lewisliuX123/wechatglm_demo/channel/wechat/wechat_channel.py deleted file mode 100644 index b800fc43753fad893a485eb214cc9602a7f69af9..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatglm_demo/channel/wechat/wechat_channel.py +++ /dev/null @@ -1,176 +0,0 @@ -# encoding:utf-8 - -""" -wechat channel -""" -import itchat -import json -from itchat.content import * -from channel.channel import Channel -from concurrent.futures import ThreadPoolExecutor -from common.log import logger -from config import conf -import requests -import io - -thread_pool = ThreadPoolExecutor(max_workers=8) - - -class WechatChannel(Channel): - - qrcode = b'' - - newInstance=None - - def __init__(self): - pass - - def startup(self): - # login by scan QRCode - newInstance = itchat.load_sync_itchat() - self.newInstance = newInstance - - @newInstance.msg_register(TEXT) - def handler_single_msg(msg): - self.handle(msg) - return None - - @newInstance.msg_register(TEXT, isGroupChat=True) - def handler_group_msg(msg): - self.handle_group(msg) - return None - - newInstance.auto_login(qrCallback=self.qrCallback) - # start message listener - newInstance.run() - - def qrCallback(self, uuid, status, qrcode): - self.qrcode = qrcode - - def getQrCode(self): - return self.qrcode - - def handle(self, msg): - logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False)) - from_user_id = msg['FromUserName'] - to_user_id = msg['ToUserName'] # 接收人id - other_user_id = msg['User']['UserName'] # 对手方id - content = msg['Text'] - match_prefix = self.check_prefix(content, conf().get('single_chat_prefix')) - if from_user_id == other_user_id and match_prefix is not None: - # 好友向自己发送消息 - if match_prefix != '': - str_list = content.split(match_prefix, 1) - if len(str_list) == 2: - content = str_list[1].strip() - - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, from_user_id) - else: - thread_pool.submit(self._do_send, content, from_user_id) - - elif to_user_id == other_user_id and match_prefix: - # 自己给好友发送消息 - str_list = content.split(match_prefix, 1) - if len(str_list) == 2: - content = str_list[1].strip() - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, to_user_id) - else: - thread_pool.submit(self._do_send, content, to_user_id) - - - def handle_group(self, msg): - logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False)) - group_name = msg['User'].get('NickName', None) - group_id = msg['User'].get('UserName', None) - if not group_name: - return "" - origin_content = msg['Content'] - content = msg['Content'] - content_list = 
content.split(' ', 1) - context_special_list = content.split('\u2005', 1) - if len(context_special_list) == 2: - content = context_special_list[1] - elif len(content_list) == 2: - content = content_list[1] - - config = conf() - match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \ - or self.check_contain(origin_content, config.get('group_chat_keyword')) - if ('ALL_GROUP' in config.get('group_name_white_list') or group_name in config.get('group_name_white_list') or self.check_contain(group_name, config.get('group_name_keyword_white_list'))) and match_prefix: - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, group_id) - else: - thread_pool.submit(self._do_send_group, content, msg) - - def send(self, msg, receiver): - logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver)) - self.newInstance.send(msg, toUserName=receiver) - - def _do_send(self, query, reply_user_id): - try: - if not query: - return - context = dict() - context['from_user_id'] = reply_user_id - reply_text = super().build_reply_content(query, context) - if reply_text: - self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id) - except Exception as e: - logger.exception(e) - - def _do_send_img(self, query, reply_user_id): - try: - if not query: - return - context = dict() - context['type'] = 'IMAGE_CREATE' - img_url = super().build_reply_content(query, context) - if not img_url: - return - - # 图片下载 - pic_res = requests.get(img_url, stream=True) - image_storage = io.BytesIO() - for block in pic_res.iter_content(1024): - image_storage.write(block) - image_storage.seek(0) - - # 图片发送 - logger.info('[WX] sendImage, receiver={}'.format(reply_user_id)) - self.newInstance.send_image(image_storage, reply_user_id) - except Exception as e: - logger.exception(e) - - def _do_send_group(self, query, msg): - if not query: - return - context = dict() - context['from_user_id'] = msg['ActualUserName'] - reply_text = super().build_reply_content(query, context) - if reply_text: - reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip() - self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName']) - - - def check_prefix(self, content, prefix_list): - for prefix in prefix_list: - if content.startswith(prefix): - return prefix - return None - - - def check_contain(self, content, keyword_list): - if not keyword_list: - return None - for ky in keyword_list: - if content.find(ky) != -1: - return True - return None diff --git a/spaces/lewiswu1209/MockingBird/web/api/audio.py b/spaces/lewiswu1209/MockingBird/web/api/audio.py deleted file mode 100644 index b30e5dd9ad3a249c2a6e73d9f42372f0ed098b5a..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/web/api/audio.py +++ /dev/null @@ -1,43 +0,0 @@ -import os -from pathlib import Path -from flask_restx import Namespace, Resource, fields -from flask import Response, current_app - -api = Namespace('audios', description='Audios related operations') - -audio = api.model('Audio', { - 'name': fields.String(required=True, description='The audio name'), -}) - -def generate(wav_path): - with open(wav_path, "rb") as fwav: - data = fwav.read(1024) - while data: - yield data - data = fwav.read(1024) - -@api.route('/') -class AudioList(Resource): - @api.doc('list_audios') - 
@api.marshal_list_with(audio) - def get(self): - '''List all audios''' - audio_samples = [] - AUDIO_SAMPLES_DIR = current_app.config.get("AUDIO_SAMPLES_DIR") - if os.path.isdir(AUDIO_SAMPLES_DIR): - audio_samples = list(Path(AUDIO_SAMPLES_DIR).glob("*.wav")) - return list(a.name for a in audio_samples) - -@api.route('/') -@api.param('name', 'The name of audio') -@api.response(404, 'audio not found') -class Audio(Resource): - @api.doc('get_audio') - @api.marshal_with(audio) - def get(self, name): - '''Fetch a cat given its identifier''' - AUDIO_SAMPLES_DIR = current_app.config.get("AUDIO_SAMPLES_DIR") - if Path(AUDIO_SAMPLES_DIR + name).exists(): - return Response(generate(AUDIO_SAMPLES_DIR + name), mimetype="audio/x-wav") - api.abort(404) - \ No newline at end of file diff --git a/spaces/lightli/bingo-newbing/src/components/ui/alert-dialog.tsx b/spaces/lightli/bingo-newbing/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
      - {children} -
      -
      -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/lightli/bingo-newbing/src/pages/api/blob.ts b/spaces/lightli/bingo-newbing/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/liliyRehtina/color/app.py b/spaces/liliyRehtina/color/app.py deleted file mode 100644 index 6fc4b0e2f2b25b59e311bb61298bc879048fe800..0000000000000000000000000000000000000000 --- a/spaces/liliyRehtina/color/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import gradio as gr -import os, requests -import numpy as np -from inference import setup_model, colorize_grayscale, predict_anchors - -## local | remote -RUN_MODE = "remote" -if RUN_MODE != "local": - os.system("wget https://huggingface.co/menghanxia/disco/resolve/main/disco-beta.pth.rar") - os.rename("disco-beta.pth.rar", "./checkpoints/disco-beta.pth.rar") - ## examples - - os.system("wget https://huggingface.co/menghanxia/disco/resolve/main/04.jpg") - -## step 1: set up model -device = "cpu" -checkpt_path = "checkpoints/disco-beta.pth.rar" -colorizer, colorLabeler = setup_model(checkpt_path, device=device) - -def click_colorize(rgb_img, hint_img, n_anchors, is_high_res, is_editable): - if hint_img is None: - hint_img = rgb_img - output = colorize_grayscale(colorizer, colorLabeler, rgb_img, hint_img, n_anchors, 
is_high_res, is_editable, device) - return output - -def click_predanchors(rgb_img, n_anchors, is_high_res, is_editable): - output = predict_anchors(colorizer, colorLabeler, rgb_img, n_anchors, is_high_res, is_editable, device) - return output - -## step 2: configure interface -def switch_states(is_checked): - if is_checked: - return gr.Image.update(visible=True), gr.Button.update(visible=True) - else: - return gr.Image.update(visible=False), gr.Button.update(visible=False) - -demo = gr.Blocks(title="DISCO") -with demo: - gr.HTML(value=""" -
      Раскрашивание черно-белых фотографий
      - """) - - with gr.Row(): - with gr.Column(): - with gr.Row(): - Image_input = gr.Image(type="numpy", label="Input", interactive=True) - Image_anchor = gr.Image(type="numpy", label="Anchor", tool="color-sketch", interactive=True, visible=False) - - with gr.Row(): - Num_anchor = gr.Number(type="int", value=8, label="Количество опорных точек (3~14)") - Radio_resolution = gr.Radio(type="index", choices=["Low (256x256)", "Medium (512x512)", "High (1024x1024)"], \ - label="Область для раскрашивания кистью", value="Low (256x256)") - with gr.Row(): - Ckeckbox_editable = gr.Checkbox(default=False, label='Загрузить редактор') - Button_show_anchor = gr.Button(value="Predict anchors", visible=False) - Button_run = gr.Button(value="Исполнить") - with gr.Column(): - Image_output = gr.Image(type="numpy", label="Output").style(height=480) - - Ckeckbox_editable.change(fn=switch_states, inputs=Ckeckbox_editable, outputs=[Image_anchor, Button_show_anchor]) - Button_show_anchor.click(fn=click_predanchors, inputs=[Image_input, Num_anchor, Radio_resolution, Ckeckbox_editable], outputs=Image_anchor) - Button_run.click(fn=click_colorize, inputs=[Image_input, Image_anchor, Num_anchor, Radio_resolution, Ckeckbox_editable], \ - outputs=Image_output) - - ## guiline - gr.Markdown(value=""" - - """) - if RUN_MODE != "local": - gr.Examples(examples=[ - - ['04.jpg', 8, "Low (256x256)"], - ], - inputs=[Image_input,Num_anchor,Radio_resolution], outputs=[Image_output], label="Examples") - gr.HTML(value=""" - - """) - -if RUN_MODE == "local": - demo.launch(server_name='9.134.253.83',server_port=7788) -else: - demo.launch() \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Boring Man Premium! Ativador Download [hack].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Boring Man Premium! Ativador Download [hack].md deleted file mode 100644 index c0c258ed68278da117e9d99a5c4d7910d47a270d..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Boring Man Premium! Ativador Download [hack].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Boring Man: Premium! Ativador download [hack]


      DOWNLOAD ->>->>->> https://bytlly.com/2uGycS



      -
      -Download the FxGuru Movie FX Director APK Android version. ... FxGuru Movie FX Director Unlock All Effects FxGuru Movie FX Director Unlock All Effects. ... all the effects in the modded version for free and premium .... fxguru all effects apk free download ... free download ... Boring Man: Premium! Ativador !! 1fdad05405
      -
      -
      -

      diff --git a/spaces/lj1995/vocal2guitar/infer_pack/commons.py b/spaces/lj1995/vocal2guitar/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - 
in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/wavenet.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/wavenet.py deleted file mode 100644 index 3d48c7eaaa0e8191b27a5d1890eb657cbcc0d143..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/wavenet.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -from math import sqrt - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Mish - - -class Conv1d(torch.nn.Conv1d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - nn.init.kaiming_normal_(self.weight) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = nn.Conv1d( - residual_channels, - 2 * residual_channels, - kernel_size=3, - padding=dilation, - dilation=dilation - ) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = nn.Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = nn.Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - # Using torch.split instead 
of torch.chunk to avoid using onnx::Slice - gate, filter = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - return (x + residual) / math.sqrt(2.0), skip - - -class WaveNet(nn.Module): - def __init__(self, in_dims=128, n_layers=20, n_chans=384, n_hidden=256): - super().__init__() - self.input_projection = Conv1d(in_dims, n_chans, 1) - self.diffusion_embedding = SinusoidalPosEmb(n_chans) - self.mlp = nn.Sequential( - nn.Linear(n_chans, n_chans * 4), - Mish(), - nn.Linear(n_chans * 4, n_chans) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock( - encoder_hidden=n_hidden, - residual_channels=n_chans, - dilation=1 - ) - for i in range(n_layers) - ]) - self.skip_projection = Conv1d(n_chans, n_chans, 1) - self.output_projection = Conv1d(n_chans, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec.squeeze(1) - x = self.input_projection(x) # [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - skip = [] - for layer in self.residual_layers: - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, mel_bins, T] - return x[:, None, :, :] diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py b/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py deleted file mode 100644 index b0089c789cd87cfd3b1badb2fc45cb1b88041eab..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py +++ /dev/null @@ -1,35 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch -from fairseq import checkpoint_utils - -class ContentVec256L9(SpeechEncoder): - def __init__(self,vec_path = "pretrain/checkpoint_best_legacy_500.pt",device=None): - print("load model(s) from {}".format(vec_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.model = models[0].to(self.dev) - self.model.eval() - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav.device), - "padding_mask": padding_mask.to(wav.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = self.model.final_proj(logits[0]) - return feats.transpose(1, 2) diff --git a/spaces/lzglyq/bingolzglyq/README.md b/spaces/lzglyq/bingolzglyq/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/lzglyq/bingolzglyq/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo 
-emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
- -# Bingo - -Bingo, a New Bing that lets you breathe easy. - -A faithful recreation of the main features of the New Bing web UI, usable from mainland China, compatible with most Microsoft Bing AI features, and deployable on your own server. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Github issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -For bug reports, please visit https://github.com/weaigc/bingo/issues -
      - - diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_class.py b/spaces/ma-xu/LIVE/pybind11/tests/test_class.py deleted file mode 100644 index 4214fe79d7fbab2b38a1f15ca39d41e7cd33a171..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_class.py +++ /dev/null @@ -1,333 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest - -import env # noqa: F401 - -from pybind11_tests import class_ as m -from pybind11_tests import UserType, ConstructorStats - - -def test_repr(): - # In Python 3.3+, repr() accesses __qualname__ - assert "pybind11_type" in repr(type(UserType)) - assert "UserType" in repr(UserType) - - -def test_instance(msg): - with pytest.raises(TypeError) as excinfo: - m.NoConstructor() - assert msg(excinfo.value) == "m.class_.NoConstructor: No constructor defined!" - - instance = m.NoConstructor.new_instance() - - cstats = ConstructorStats.get(m.NoConstructor) - assert cstats.alive() == 1 - del instance - assert cstats.alive() == 0 - - -def test_docstrings(doc): - assert doc(UserType) == "A `py::class_` type for testing" - assert UserType.__name__ == "UserType" - assert UserType.__module__ == "pybind11_tests" - assert UserType.get_value.__name__ == "get_value" - assert UserType.get_value.__module__ == "pybind11_tests" - - assert doc(UserType.get_value) == """ - get_value(self: m.UserType) -> int - - Get value using a method - """ - assert doc(UserType.value) == "Get/set value using a property" - - assert doc(m.NoConstructor.new_instance) == """ - new_instance() -> m.class_.NoConstructor - - Return an instance - """ - - -def test_qualname(doc): - """Tests that a properly qualified name is set in __qualname__ (even in pre-3.3, where we - backport the attribute) and that generated docstrings properly use it and the module name""" - assert m.NestBase.__qualname__ == "NestBase" - assert m.NestBase.Nested.__qualname__ == "NestBase.Nested" - - assert doc(m.NestBase.__init__) == """ - __init__(self: m.class_.NestBase) -> None - """ - assert doc(m.NestBase.g) == """ - g(self: m.class_.NestBase, arg0: m.class_.NestBase.Nested) -> None - """ - assert doc(m.NestBase.Nested.__init__) == """ - __init__(self: m.class_.NestBase.Nested) -> None - """ - assert doc(m.NestBase.Nested.fn) == """ - fn(self: m.class_.NestBase.Nested, arg0: int, arg1: m.class_.NestBase, arg2: m.class_.NestBase.Nested) -> None - """ # noqa: E501 line too long - assert doc(m.NestBase.Nested.fa) == """ - fa(self: m.class_.NestBase.Nested, a: int, b: m.class_.NestBase, c: m.class_.NestBase.Nested) -> None - """ # noqa: E501 line too long - assert m.NestBase.__module__ == "pybind11_tests.class_" - assert m.NestBase.Nested.__module__ == "pybind11_tests.class_" - - -def test_inheritance(msg): - roger = m.Rabbit('Rabbit') - assert roger.name() + " is a " + roger.species() == "Rabbit is a parrot" - assert m.pet_name_species(roger) == "Rabbit is a parrot" - - polly = m.Pet('Polly', 'parrot') - assert polly.name() + " is a " + polly.species() == "Polly is a parrot" - assert m.pet_name_species(polly) == "Polly is a parrot" - - molly = m.Dog('Molly') - assert molly.name() + " is a " + molly.species() == "Molly is a dog" - assert m.pet_name_species(molly) == "Molly is a dog" - - fred = m.Hamster('Fred') - assert fred.name() + " is a " + fred.species() == "Fred is a rodent" - - assert m.dog_bark(molly) == "Woof!" - - with pytest.raises(TypeError) as excinfo: - m.dog_bark(polly) - assert msg(excinfo.value) == """ - dog_bark(): incompatible function arguments. The following argument types are supported: - 1. 
(arg0: m.class_.Dog) -> str - - Invoked with: - """ - - with pytest.raises(TypeError) as excinfo: - m.Chimera("lion", "goat") - assert "No constructor defined!" in str(excinfo.value) - - -def test_inheritance_init(msg): - - # Single base - class Python(m.Pet): - def __init__(self): - pass - with pytest.raises(TypeError) as exc_info: - Python() - expected = ["m.class_.Pet.__init__() must be called when overriding __init__", - "Pet.__init__() must be called when overriding __init__"] # PyPy? - # TODO: fix PyPy error message wrt. tp_name/__qualname__? - assert msg(exc_info.value) in expected - - # Multiple bases - class RabbitHamster(m.Rabbit, m.Hamster): - def __init__(self): - m.Rabbit.__init__(self, "RabbitHamster") - - with pytest.raises(TypeError) as exc_info: - RabbitHamster() - expected = ["m.class_.Hamster.__init__() must be called when overriding __init__", - "Hamster.__init__() must be called when overriding __init__"] # PyPy - assert msg(exc_info.value) in expected - - -def test_automatic_upcasting(): - assert type(m.return_class_1()).__name__ == "DerivedClass1" - assert type(m.return_class_2()).__name__ == "DerivedClass2" - assert type(m.return_none()).__name__ == "NoneType" - # Repeat these a few times in a random order to ensure no invalid caching is applied - assert type(m.return_class_n(1)).__name__ == "DerivedClass1" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(0)).__name__ == "BaseClass" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(0)).__name__ == "BaseClass" - assert type(m.return_class_n(1)).__name__ == "DerivedClass1" - - -def test_isinstance(): - objects = [tuple(), dict(), m.Pet("Polly", "parrot")] + [m.Dog("Molly")] * 4 - expected = (True, True, True, True, True, False, False) - assert m.check_instances(objects) == expected - - -def test_mismatched_holder(): - import re - - with pytest.raises(RuntimeError) as excinfo: - m.mismatched_holder_1() - assert re.match('generic_type: type ".*MismatchDerived1" does not have a non-default ' - 'holder type while its base ".*MismatchBase1" does', str(excinfo.value)) - - with pytest.raises(RuntimeError) as excinfo: - m.mismatched_holder_2() - assert re.match('generic_type: type ".*MismatchDerived2" has a non-default holder type ' - 'while its base ".*MismatchBase2" does not', str(excinfo.value)) - - -def test_override_static(): - """#511: problem with inheritance + overwritten def_static""" - b = m.MyBase.make() - d1 = m.MyDerived.make2() - d2 = m.MyDerived.make() - - assert isinstance(b, m.MyBase) - assert isinstance(d1, m.MyDerived) - assert isinstance(d2, m.MyDerived) - - -def test_implicit_conversion_life_support(): - """Ensure the lifetime of temporary objects created for implicit conversions""" - assert m.implicitly_convert_argument(UserType(5)) == 5 - assert m.implicitly_convert_variable(UserType(5)) == 5 - - assert "outside a bound function" in m.implicitly_convert_variable_fail(UserType(5)) - - -def test_operator_new_delete(capture): - """Tests that class-specific operator new/delete functions are invoked""" - - class SubAliased(m.AliasedHasOpNewDelSize): - pass - - with capture: - a = m.HasOpNewDel() - b = m.HasOpNewDelSize() - d = m.HasOpNewDelBoth() - assert capture == """ - A new 8 - B new 4 - D new 32 - """ - sz_alias = str(m.AliasedHasOpNewDelSize.size_alias) - sz_noalias = str(m.AliasedHasOpNewDelSize.size_noalias) - with capture: - c = 
m.AliasedHasOpNewDelSize() - c2 = SubAliased() - assert capture == ( - "C new " + sz_noalias + "\n" + - "C new " + sz_alias + "\n" - ) - - with capture: - del a - pytest.gc_collect() - del b - pytest.gc_collect() - del d - pytest.gc_collect() - assert capture == """ - A delete - B delete 4 - D delete - """ - - with capture: - del c - pytest.gc_collect() - del c2 - pytest.gc_collect() - assert capture == ( - "C delete " + sz_noalias + "\n" + - "C delete " + sz_alias + "\n" - ) - - -def test_bind_protected_functions(): - """Expose protected member functions to Python using a helper class""" - a = m.ProtectedA() - assert a.foo() == 42 - - b = m.ProtectedB() - assert b.foo() == 42 - - class C(m.ProtectedB): - def __init__(self): - m.ProtectedB.__init__(self) - - def foo(self): - return 0 - - c = C() - assert c.foo() == 0 - - -def test_brace_initialization(): - """ Tests that simple POD classes can be constructed using C++11 brace initialization """ - a = m.BraceInitialization(123, "test") - assert a.field1 == 123 - assert a.field2 == "test" - - # Tests that a non-simple class doesn't get brace initialization (if the - # class defines an initializer_list constructor, in particular, it would - # win over the expected constructor). - b = m.NoBraceInitialization([123, 456]) - assert b.vec == [123, 456] - - -@pytest.mark.xfail("env.PYPY") -def test_class_refcount(): - """Instances must correctly increase/decrease the reference count of their types (#1029)""" - from sys import getrefcount - - class PyDog(m.Dog): - pass - - for cls in m.Dog, PyDog: - refcount_1 = getrefcount(cls) - molly = [cls("Molly") for _ in range(10)] - refcount_2 = getrefcount(cls) - - del molly - pytest.gc_collect() - refcount_3 = getrefcount(cls) - - assert refcount_1 == refcount_3 - assert refcount_2 > refcount_1 - - -def test_reentrant_implicit_conversion_failure(msg): - # ensure that there is no runaway reentrant implicit conversion (#1035) - with pytest.raises(TypeError) as excinfo: - m.BogusImplicitConversion(0) - assert msg(excinfo.value) == ''' - __init__(): incompatible constructor arguments. The following argument types are supported: - 1. 
m.class_.BogusImplicitConversion(arg0: m.class_.BogusImplicitConversion) - - Invoked with: 0 - ''' - - -def test_error_after_conversions(): - with pytest.raises(TypeError) as exc_info: - m.test_error_after_conversions("hello") - assert str(exc_info.value).startswith( - "Unable to convert function return value to a Python type!") - - -def test_aligned(): - if hasattr(m, "Aligned"): - p = m.Aligned().ptr() - assert p % 1024 == 0 - - -# https://foss.heptapod.net/pypy/pypy/-/issues/2742 -@pytest.mark.xfail("env.PYPY") -def test_final(): - with pytest.raises(TypeError) as exc_info: - class PyFinalChild(m.IsFinal): - pass - assert str(exc_info.value).endswith("is not an acceptable base type") - - -# https://foss.heptapod.net/pypy/pypy/-/issues/2742 -@pytest.mark.xfail("env.PYPY") -def test_non_final_final(): - with pytest.raises(TypeError) as exc_info: - class PyNonFinalFinalChild(m.IsNonFinalFinal): - pass - assert str(exc_info.value).endswith("is not an acceptable base type") - - -# https://github.com/pybind/pybind11/issues/1878 -def test_exception_rvalue_abort(): - with pytest.raises(RuntimeError): - m.PyPrintDestructor().throw_something() diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md b/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md deleted file mode 100644 index 8c05ac274c68ae42b31d93dfcc7e06ddf8e28de9..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/CHANGELOG.md +++ /dev/null @@ -1,848 +0,0 @@ -# CUB 1.9.10-1 (NVIDIA HPC SDK 20.7, CUDA Toolkit 11.1) - -## Summary - -CUB 1.9.10-1 is the minor release accompanying the NVIDIA HPC SDK 20.7 release - and the CUDA Toolkit 11.1 release. - -## Bug Fixes - -- #1217: Move static local in `cub::DeviceCount` to a separate host-only - function because NVC++ doesn't support static locals in host-device - functions. - -# CUB 1.9.10 (NVIDIA HPC SDK 20.5) - -## Summary - -Thrust 1.9.10 is the release accompanying the NVIDIA HPC SDK 20.5 release. -It adds CMake `find_package` support. -C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated. -Starting with the upcoming 1.10.0 release, C++03 support will be dropped - entirely. - -## Breaking Changes - -- Thrust now checks that it is compatible with the version of CUB found - in your include path, generating an error if it is not. - If you are using your own version of CUB, it may be too old. - It is recommended to simply delete your own version of CUB and use the - version of CUB that comes with Thrust. -- C++03 and C++11 are deprecated. - Using these dialects will generate a compile-time warning. - These warnings can be suppressed by defining - `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11 - deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11 - deprecation warnings). - Suppression is only a short term solution. - We will be dropping support for C++03 in the 1.10.0 release and C++11 in the - near future. -- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated. - Using these compilers will generate a compile-time warning. - These warnings can be suppressed by defining - `CUB_IGNORE_DEPRECATED_COMPILER`. - Suppression is only a short term solution. - We will be dropping support for these compilers in the near future. - -## New Features - -- CMake `find_package` support. 
- Just point CMake at the `cmake` folder in your CUB include directory - (ex: `cmake -DCUB_DIR=/usr/local/cuda/include/cub/cmake/ .`) and then you - can add CUB to your CMake project with `find_package(CUB REQUIRED CONFIG)`. - -# CUB 1.9.9 (CUDA 11.0) - -## Summary - -CUB 1.9.9 is the release accompanying the CUDA Toolkit 11.0 release. -It introduces CMake support, version macros, platform detection machinery, - and support for NVC++, which uses Thrust (and thus CUB) to implement - GPU-accelerated C++17 Parallel Algorithms. -Additionally, the scan dispatch layer was refactored and modernized. -C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated. -Starting with the upcoming 1.10.0 release, C++03 support will be dropped - entirely. - -## Breaking Changes - -- Thrust now checks that it is compatible with the version of CUB found - in your include path, generating an error if it is not. - If you are using your own version of CUB, it may be too old. - It is recommended to simply delete your own version of CUB and use the - version of CUB that comes with Thrust. -- C++03 and C++11 are deprecated. - Using these dialects will generate a compile-time warning. - These warnings can be suppressed by defining - `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11 - deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP11` (to suppress C++11 - deprecation warnings). - Suppression is only a short term solution. - We will be dropping support for C++03 in the 1.10.0 release and C++11 in the - near future. -- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated. - Using these compilers will generate a compile-time warning. - These warnings can be suppressed by defining - `CUB_IGNORE_DEPRECATED_COMPILER`. - Suppression is only a short term solution. - We will be dropping support for these compilers in the near future. - -## New Features - -- CMake support. - Thanks to Francis Lemaire for this contribution. -- Refactorized and modernized scan dispatch layer. - Thanks to Francis Lemaire for this contribution. -- Policy hooks for device-wide reduce, scan, and radix sort facilities - to simplify tuning and allow users to provide custom policies. - Thanks to Francis Lemaire for this contribution. -- ``: `CUB_VERSION`, `CUB_VERSION_MAJOR`, `CUB_VERSION_MINOR`, - `CUB_VERSION_SUBMINOR`, and `CUB_PATCH_NUMBER`. -- Platform detection machinery: - - ``: Detects the C++ standard dialect. - - ``: host and device compiler detection. - - ``: `CUB_DEPRECATED`. - - `: Includes ``, - ``, ``, - ``, ``, - `` -- `cub::DeviceCount` and `cub::DeviceCountUncached`, caching abstractions for - `cudaGetDeviceCount`. - -## Other Enhancements - -- Lazily initialize the per-device CUDAattribute caches, because CUDA context - creation is expensive and adds up with large CUDA binaries on machines with - many GPUs. - Thanks to the NVIDIA PyTorch team for bringing this to our attention. -- Make `cub::SwitchDevice` avoid setting/resetting the device if the current - device is the same as the target device. - -## Bug Fixes - -- Add explicit failure parameter to CAS in the CUB attribute cache to workaround - a GCC 4.8 bug. -- Revert a change in reductions that changed the signedness of the `lane_id` - variable to suppress a warning, as this introduces a bug in optimized device - code. -- Fix initialization in `cub::ExclusiveSum`. - Thanks to Conor Hoekstra for this contribution. -- Fix initialization of the `std::array` in the CUB attribute cache. -- Fix `-Wsign-compare` warnings. 
- Thanks to Elias Stehle for this contribution. -- Fix `test_block_reduce.cu` to build without parameters. - Thanks to Francis Lemaire for this contribution. -- Add missing includes to `grid_even_share.cuh`. - Thanks to Francis Lemaire for this contribution. -- Add missing includes to `thread_search.cuh`. - Thanks to Francis Lemaire for this contribution. -- Add missing includes to `cub.cuh`. - Thanks to Felix Kallenborn for this contribution. - -# CUB 1.9.8-1 (NVIDIA HPC SDK 20.3) - -## Summary - -CUB 1.9.8-1 is a variant of 1.9.8 accompanying the NVIDIA HPC SDK 20.3 release. -It contains modifications necessary to serve as the implementation of NVC++'s - GPU-accelerated C++17 Parallel Algorithms. - -# CUB 1.9.8 (CUDA 11.0 Early Access) - -## Summary - -CUB 1.9.8 is the first release of CUB to be officially supported and included - in the CUDA Toolkit. -When compiling CUB in C++11 mode, CUB now caches calls to CUDA attribute query - APIs, which improves performance of these queries by 20x to 50x when they - are called concurrently by multiple host threads. - -## Enhancements - -- (C++11 or later) Cache calls to `cudaFuncGetAttributes` and - `cudaDeviceGetAttribute` within `cub::PtxVersion` and `cub::SmVersion`. - These CUDA APIs acquire locks to CUDA driver/runtime mutex and perform - poorly under contention; with the caching, they are 20 to 50x faster when - called concurrently. - Thanks to Bilge Acun for bringing this issue to our attention. -- `DispatchReduce` now takes an `OutputT` template parameter so that users can - specify the intermediate type explicitly. -- Radix sort tuning policies updates to fix performance issues for element - types smaller than 4 bytes. - -## Bug Fixes - -- Change initialization style from copy initialization to direct initialization - (which is more permissive) in `AgentReduce` to allow a wider range of types - to be used with it. -- Fix bad signed/unsigned comparisons in `WarpReduce`. -- Fix computation of valid lanes in warp-level reduction primitive to correctly - handle the case where there are 0 input items per warp. - -# CUB 1.8.0 - -## Summary - -CUB 1.8.0 introduces changes to the `cub::Shuffle*` interfaces. - -## Breaking Changes - -- The interfaces of `cub::ShuffleIndex`, `cub::ShuffleUp`, and - `cub::ShuffleDown` have been changed to allow for better computation of the - PTX SHFL control constant for logical warps smaller than 32 threads. - -## Bug Fixes - -- #112: Fix `cub::WarpScan`'s broadcast of warp-wide aggregate for logical - warps smaller than 32 threads. - -# CUB 1.7.5 - -## Summary - -CUB 1.7.5 adds support for radix sorting `__half` keys and improved sorting - performance for 1 byte keys. -It was incorporated into Thrust 1.9.2. - -## Enhancements - -- Radix sort support for `__half` keys. -- Radix sort tuning policy updates to improve 1 byte key performance. - -## Bug Fixes - -- Syntax tweaks to mollify Clang. -- #127: `cub::DeviceRunLengthEncode::Encode` returns incorrect results. -- #128: 7-bit sorting passes fail for SM61 with large values. - -# CUB 1.7.4 - -## Summary - -CUB 1.7.4 is a minor release that was incorporated into Thrust 1.9.1-2. - -## Bug Fixes - -- #114: Can't pair non-trivially-constructible values in radix sort. -- #115: `cub::WarpReduce` segmented reduction is broken in CUDA 9 for logical - warp sizes smaller than 32. - -# CUB 1.7.3 - -## Summary - -CUB 1.7.3 is a minor release. - -## Bug Fixes - -- #110: `cub::DeviceHistogram` null-pointer exception bug for iterator inputs. 
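As background for the `cub::DeviceHistogram` fix noted above, the sketch below shows the two-phase workspace-query idiom that CUB device-wide entry points such as `HistogramEven` follow; the buffer names, bin count, and level range are illustrative assumptions, not values from the changelog or the CUB samples.

```cpp
#include <cuda_runtime.h>
#include <cub/cub.cuh>

// Minimal sketch: query temp-storage size, allocate, then run the histogram.
void histogram_sketch(const float* d_samples, int num_samples,
                      int* d_histogram, cudaStream_t stream)
{
    const int   num_levels = 257;    // 256 bins need 257 bin boundaries
    const float lower      = 0.0f;   // illustrative sample range
    const float upper      = 256.0f;

    void*  d_temp_storage     = nullptr;
    size_t temp_storage_bytes = 0;

    // First call: d_temp_storage is null, so only temp_storage_bytes is computed.
    cub::DeviceHistogram::HistogramEven(d_temp_storage, temp_storage_bytes,
                                        d_samples, d_histogram,
                                        num_levels, lower, upper,
                                        num_samples, stream);
    cudaMalloc(&d_temp_storage, temp_storage_bytes);

    // Second call: same arguments, now performs the actual histogram.
    cub::DeviceHistogram::HistogramEven(d_temp_storage, temp_storage_bytes,
                                        d_samples, d_histogram,
                                        num_levels, lower, upper,
                                        num_samples, stream);
    cudaFree(d_temp_storage);
}
```

The first call with a null workspace pointer only reports the required temporary-storage size; only the second call launches kernels.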
- -# CUB 1.7.2 - -## Summary - -CUB 1.7.2 is a minor release. - -## Bug Fixes - -- #104: Device-wide reduction is now "run-to-run" deterministic for - pseudo-associative reduction operators (like floating point addition). - -# CUB 1.7.1 - -## Summary - -CUB 1.7.1 delivers improved radix sort performance on SM7x (Volta) GPUs and a - number of bug fixes. - -## Enhancements - -- Radix sort tuning policies updated for SM7x (Volta). - -## Bug Fixes - -- #104: `uint64_t` `cub::WarpReduce` broken for CUB 1.7.0 on CUDA 8 and older. -- #103: Can't mix Thrust from CUDA 9.0 and CUB. -- #102: CUB pulls in `windows.h` which defines `min`/`max` macros that conflict - with `std::min`/`std::max`. -- #99: Radix sorting crashes NVCC on Windows 10 for SM52. -- #98: cuda-memcheck: --tool initcheck failed with lineOfSight. -- #94: Git clone size. -- #93: Accept iterators for segment offsets. -- #87: CUB uses anonymous unions which is not valid C++. -- #44: Check for C++11 is incorrect for Visual Studio 2013. - -# CUB 1.7.0 - -## Summary - -CUB 1.7.0 brings support for CUDA 9.0 and SM7x (Volta) GPUs. -It is compatible with independent thread scheduling. -It was incorporated into Thrust 1.9.0-5. - -## Breaking Changes - -- Remove `cub::WarpAll` and `cub::WarpAny`. - These functions served to emulate `__all` and `__any` functionality for - SM1x devices, which did not have those operations. - However, SM1x devices are now deprecated in CUDA, and the interfaces of these - two functions are now lacking the lane-mask needed for collectives to run on - SM7x and newer GPUs which have independent thread scheduling. - -## Other Enhancements - -- Remove any assumptions of implicit warp synchronization to be compatible with - SM7x's (Volta) independent thread scheduling. - -## Bug Fixes - -- #86: Incorrect results with reduce-by-key. - -# CUB 1.6.4 - -## Summary - -CUB 1.6.4 improves radix sorting performance for SM5x (Maxwell) and SM6x - (Pascal) GPUs. - -## Enhancements - -- Radix sort tuning policies updated for SM5x (Maxwell) and SM6x (Pascal) - - 3.5B and 3.4B 32 byte keys/s on TitanX and GTX 1080, respectively. - -## Bug Fixes - -- Restore fence work-around for scan (reduce-by-key, etc.) hangs in CUDA 8.5. -- #65: `cub::DeviceSegmentedRadixSort` should allow inputs to have - pointer-to-const type. -- Mollify Clang device-side warnings. -- Remove out-dated MSVC project files. - -# CUB 1.6.3 - -## Summary - -CUB 1.6.3 improves support for Windows, changes - `cub::BlockLoad`/`cub::BlockStore` interface to take the local data type, - and enhances radix sort performance for SM6x (Pascal) GPUs. - -## Breaking Changes - -- `cub::BlockLoad` and `cub::BlockStore` are now templated by the local data - type, instead of the `Iterator` type. - This allows for output iterators having `void` as their `value_type` (e.g. - discard iterators). - -## Other Enhancements - -- Radix sort tuning policies updated for SM6x (Pascal) GPUs - 6.2B 4 byte - keys/s on GP100. -- Improved support for Windows (warnings, alignment, etc). - -## Bug Fixes - -- #74: `cub::WarpReduce` executes reduction operator for out-of-bounds items. -- #72: `cub:InequalityWrapper::operator` should be non-const. -- #71: `cub::KeyValuePair` won't work if `Key` has non-trivial constructor. -- #69: cub::BlockStore::Store` doesn't compile if `OutputIteratorT::value_type` - isn't `T`. -- #68: `cub::TilePrefixCallbackOp::WarpReduce` doesn't permit PTX arch - specialization. 
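To make the 1.6.3 `cub::BlockLoad`/`cub::BlockStore` interface change above concrete, here is a minimal kernel sketch using the item-type-templated form; the block size, items per thread, and the transpose algorithm are illustrative assumptions rather than values taken from the changelog.

```cpp
#include <cub/cub.cuh>

// Minimal sketch: block-wide load into registers, then block-wide store.
__global__ void block_copy_sketch(const int* d_in, int* d_out)
{
    constexpr int BLOCK_THREADS    = 128;
    constexpr int ITEMS_PER_THREAD = 4;

    // Post-1.6.3 interface: templated on the item type, not an iterator type.
    using BlockLoadT  = cub::BlockLoad<int, BLOCK_THREADS, ITEMS_PER_THREAD,
                                       cub::BLOCK_LOAD_TRANSPOSE>;
    using BlockStoreT = cub::BlockStore<int, BLOCK_THREADS, ITEMS_PER_THREAD,
                                        cub::BLOCK_STORE_TRANSPOSE>;

    // Alias both temp-storage requirements onto the same shared memory.
    __shared__ union {
        typename BlockLoadT::TempStorage  load;
        typename BlockStoreT::TempStorage store;
    } temp_storage;

    int items[ITEMS_PER_THREAD];
    const int block_offset = blockIdx.x * BLOCK_THREADS * ITEMS_PER_THREAD;

    BlockLoadT(temp_storage.load).Load(d_in + block_offset, items);
    __syncthreads();  // required because the union's storage is reused below
    BlockStoreT(temp_storage.store).Store(d_out + block_offset, items);
}
```

Aliasing the two `TempStorage` members in a union is the usual way to save shared memory; the `__syncthreads()` between the load and the store is needed because the same storage is reused.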
- -# CUB 1.6.2 (previously 1.5.5) - -## Summary - -CUB 1.6.2 (previously 1.5.5) improves radix sort performance for SM6x (Pascal) - GPUs. - -## Enhancements - -- Radix sort tuning policies updated for SM6x (Pascal) GPUs. - -## Bug Fixes - -- Fix AArch64 compilation of `cub::CachingDeviceAllocator`. - -# CUB 1.6.1 (previously 1.5.4) - -## Summary - -CUB 1.6.1 (previously 1.5.4) is a minor release. - -## Bug Fixes - -- Fix radix sorting bug introduced by scan refactorization. - -# CUB 1.6.0 (previously 1.5.3) - -## Summary - -CUB 1.6.0 changes the scan and reduce interfaces. -Exclusive scans now accept an "initial value" instead of an "identity value". -Scans and reductions now support differing input and output sequence types. -Additionally, many bugs have been fixed. - -## Breaking Changes - -- Device/block/warp-wide exclusive scans have been revised to now accept an - "initial value" (instead of an "identity value") for seeding the computation - with an arbitrary prefix. -- Device-wide reductions and scans can now have input sequence types that are - different from output sequence types (as long as they are convertible). - -## Other Enhancements - -- Reduce repository size by moving the doxygen binary to doc repository. -- Minor reduction in `cub::BlockScan` instruction counts. - -## Bug Fixes - -- Issue #55: Warning in `cub/device/dispatch/dispatch_reduce_by_key.cuh`. -- Issue #59: `cub::DeviceScan::ExclusiveSum` can't prefix sum of float into - double. -- Issue #58: Infinite loop in `cub::CachingDeviceAllocator::NearestPowerOf`. -- Issue #47: `cub::CachingDeviceAllocator` needs to clean up CUDA global error - state upon successful retry. -- Issue #46: Very high amount of needed memory from the - `cub::DeviceHistogram::HistogramEven`. -- Issue #45: `cub::CachingDeviceAllocator` fails with debug output enabled - -# CUB 1.5.2 - -## Summary - -CUB 1.5.2 enhances `cub::CachingDeviceAllocator` and improves scan performance - for SM5x (Maxwell). - -## Enhancements - -- Improved medium-size scan performance on SM5x (Maxwell). -- Refactored `cub::CachingDeviceAllocator`: - - Now spends less time locked. - - Uses C++11's `std::mutex` when available. - - Failure to allocate a block from the runtime will retry once after - freeing cached allocations. - - Now respects max-bin, fixing an issue where blocks in excess of max-bin - were still being retained in the free cache. - -## Bug fixes: - -- Fix for generic-type reduce-by-key `cub::WarpScan` for SM3x and newer GPUs. - -# CUB 1.5.1 - -## Summary - -CUB 1.5.1 is a minor release. - -## Bug Fixes - -- Fix for incorrect `cub::DeviceRadixSort` output for some small problems on - SM52 (Mawell) GPUs. -- Fix for macro redefinition warnings when compiling `thrust::sort`. - -# CUB 1.5.0 - -CUB 1.5.0 introduces segmented sort and reduction primitives. - -## New Features: - -- Segmented device-wide operations for device-wide sort and reduction primitives. - -## Bug Fixes: - -- #36: `cub::ThreadLoad` generates compiler errors when loading from - pointer-to-const. -- #29: `cub::DeviceRadixSort::SortKeys` yields compiler errors. -- #26: Misaligned address after `cub::DeviceRadixSort::SortKeys`. -- #25: Fix for incorrect results and crashes when radix sorting 0-length - problems. -- Fix CUDA 7.5 issues on SM52 GPUs with SHFL-based warp-scan and - warp-reduction on non-primitive data types (e.g. user-defined structs). 
-- Fix small radix sorting problems where 0 temporary bytes were required and - users code was invoking `malloc(0)` on some systems where that returns - `NULL`. - CUB assumed the user was asking for the size again and not running the sort. - -# CUB 1.4.1 - -## Summary - -CUB 1.4.1 is a minor release. - -## Enhancements - -- Allow `cub::DeviceRadixSort` and `cub::BlockRadixSort` on bool types. - -## Bug Fixes - -- Fix minor CUDA 7.0 performance regressions in `cub::DeviceScan` and - `cub::DeviceReduceByKey`. -- Remove requirement for callers to define the `CUB_CDP` macro - when invoking CUB device-wide rountines using CUDA dynamic parallelism. -- Fix headers not being included in the proper order (or missing includes) - for some block-wide functions. - -# CUB 1.4.0 - -## Summary - -CUB 1.4.0 adds `cub::DeviceSpmv`, `cub::DeviceRunLength::NonTrivialRuns`, - improves `cub::DeviceHistogram`, and introduces support for SM5x (Maxwell) - GPUs. - -## New Features: - -- `cub::DeviceSpmv` methods for multiplying sparse matrices by - dense vectors, load-balanced using a merge-based parallel decomposition. -- `cub::DeviceRadixSort` sorting entry-points that always return - the sorted output into the specified buffer, as opposed to the - `cub::DoubleBuffer` in which it could end up in either buffer. -- `cub::DeviceRunLengthEncode::NonTrivialRuns` for finding the starting - offsets and lengths of all non-trivial runs (i.e., length > 1) of keys in - a given sequence. - Useful for top-down partitioning algorithms like MSD sorting of very-large - keys. - -## Other Enhancements - -- Support and performance tuning for SM5x (Maxwell) GPUs. -- Updated cub::DeviceHistogram implementation that provides the same - "histogram-even" and "histogram-range" functionality as IPP/NPP. - Provides extremely fast and, perhaps more importantly, very uniform - performance response across diverse real-world datasets, including - pathological (homogeneous) sample distributions. - -# CUB 1.3.2 - -## Summary - -CUB 1.3.2 is a minor release. - -## Bug Fixes - -- Fix `cub::DeviceReduce` where reductions of small problems (small enough to - only dispatch a single thread block) would run in the default stream (stream - zero) regardless of whether an alternate stream was specified. - -# CUB 1.3.1 - -## Summary - -CUB 1.3.1 is a minor release. - -## Bug Fixes - -- Workaround for a benign WAW race warning reported by cuda-memcheck - in `cub::BlockScan` specialized for `BLOCK_SCAN_WARP_SCANS` algorithm. -- Fix bug in `cub::DeviceRadixSort` where the algorithm may sort more - key bits than the caller specified (up to the nearest radix digit). -- Fix for ~3% `cub::DeviceRadixSort` performance regression on SM2x (Fermi) and - SM3x (Kepler) GPUs. - -# CUB 1.3.0 - -## Summary - -CUB 1.3.0 improves how thread blocks are expressed in block- and warp-wide - primitives and adds an enhanced version of `cub::WarpScan`. - -## Breaking Changes - -- CUB's collective (block-wide, warp-wide) primitives underwent a minor - interface refactoring: - - To provide the appropriate support for multidimensional thread blocks, - The interfaces for collective classes are now template-parameterized by - X, Y, and Z block dimensions (with `BLOCK_DIM_Y` and `BLOCK_DIM_Z` being - optional, and `BLOCK_DIM_X` replacing `BLOCK_THREADS`). - Furthermore, the constructors that accept remapped linear - thread-identifiers have been removed: all primitives now assume a - row-major thread-ranking for multidimensional thread blocks. 
- - To allow the host program (compiled by the host-pass) to accurately - determine the device-specific storage requirements for a given collective - (compiled for each device-pass), the interfaces for collective classes - are now (optionally) template-parameterized by the desired PTX compute - capability. - This is useful when aliasing collective storage to shared memory that has - been allocated dynamically by the host at the kernel call site. - - Most CUB programs having typical 1D usage should not require any - changes to accomodate these updates. - -## New Features - -- Added "combination" `cub::WarpScan` methods for efficiently computing - both inclusive and exclusive prefix scans (and sums). - -## Bug Fixes - -- Fix for bug in `cub::WarpScan` (which affected `cub::BlockScan` and - `cub::DeviceScan`) where incorrect results (e.g., NAN) would often be - returned when parameterized for floating-point types (fp32, fp64). -- Workaround for ptxas error when compiling with with -G flag on Linux (for - debug instrumentation). -- Fixes for certain scan scenarios using custom scan operators where code - compiled for SM1x is run on newer GPUs of higher compute-capability: the - compiler could not tell which memory space was being used collective - operations and was mistakenly using global ops instead of shared ops. - -# CUB 1.2.3 - -## Summary - -CUB 1.2.3 is a minor release. - -## Bug Fixes - -- Fixed access violation bug in `cub::DeviceReduce::ReduceByKey` for - non-primitive value types. -- Fixed code-snippet bug in `ArgIndexInputIteratorT` documentation. - -# CUB 1.2.2 - -## Summary - -CUB 1.2.2 adds a new variant of `cub::BlockReduce` and MSVC project solections - for examples. - -## New Features - -- MSVC project solutions for device-wide and block-wide examples -- New algorithmic variant of cub::BlockReduce for improved performance - when using commutative operators (e.g., numeric addition). - -## Bug Fixes - -- Inclusion of Thrust headers in a certain order prevented CUB device-wide - primitives from working properly. - -# CUB 1.2.0 - -## Summary - -CUB 1.2.0 adds `cub::DeviceReduce::ReduceByKey` and - `cub::DeviceReduce::RunLengthEncode` and support for CUDA 6.0. - -## New Features - -- `cub::DeviceReduce::ReduceByKey`. -- `cub::DeviceReduce::RunLengthEncode`. - -## Other Enhancements - -- Improved `cub::DeviceScan`, `cub::DeviceSelect`, `cub::DevicePartition` - performance. -- Documentation and testing: - - Added performance-portability plots for many device-wide primitives. - - Explain that iterator (in)compatibilities with CUDA 5.0 (and older) and - Thrust 1.6 (and older). -- Revised the operation of temporary tile status bookkeeping for - `cub::DeviceScan` (and similar) to be safe for current code run on future - platforms (now uses proper fences). - -## Bug Fixes - -- Fix `cub::DeviceScan` bug where Windows alignment disagreements between host - and device regarding user-defined data types would corrupt tile status. -- Fix `cub::BlockScan` bug where certain exclusive scans on custom data types - for the `BLOCK_SCAN_WARP_SCANS` variant would return incorrect results for - the first thread in the block. -- Added workaround to make `cub::TexRefInputIteratorT` work with CUDA 6.0. - -# CUB 1.1.1 - -## Summary - -CUB 1.1.1 introduces texture and cache modifier iterators, descending sorting, - `cub::DeviceSelect`, `cub::DevicePartition`, `cub::Shuffle*`, and - `cub::MaxSMOccupancy`. 
-Additionally, scan and sort performance for older GPUs has been improved and - many bugs have been fixed. - -## Breaking Changes - -- Refactored block-wide I/O (`cub::BlockLoad` and `cub::BlockStore`), removing - cache-modifiers from their interfaces. - `cub::CacheModifiedInputIterator` and `cub::CacheModifiedOutputIterator` - should now be used with `cub::BlockLoad` and `cub::BlockStore` to effect that - behavior. - -## New Features - -- `cub::TexObjInputIterator`, `cub::TexRefInputIterator`, - `cub::CacheModifiedInputIterator`, and `cub::CacheModifiedOutputIterator` - types for loading & storing arbitrary types through the cache hierarchy. - They are compatible with Thrust. -- Descending sorting for `cub::DeviceRadixSort` and `cub::BlockRadixSort`. -- Min, max, arg-min, and arg-max operators for `cub::DeviceReduce`. -- `cub::DeviceSelect` (select-unique, select-if, and select-flagged). -- `cub::DevicePartition` (partition-if, partition-flagged). -- Generic `cub::ShuffleUp`, `cub::ShuffleDown`, and `cub::ShuffleIndex` for - warp-wide communication of arbitrary data types (SM3x and up). -- `cub::MaxSmOccupancy` for accurately determining SM occupancy for any given - kernel function pointer. - -## Other Enhancements - -- Improved `cub::DeviceScan` and `cub::DeviceRadixSort` performance for older - GPUs (SM1x to SM3x). -- Renamed device-wide `stream_synchronous` param to `debug_synchronous` to - avoid confusion about usage. -- Documentation improvements: - - Added simple examples of device-wide methods. - - Improved doxygen documentation and example snippets. -- Improved test coverege to include up to 21,000 kernel variants and 851,000 - unit tests (per architecture, per platform). - -## Bug Fixes - -- Fix misc `cub::DeviceScan, BlockScan, DeviceReduce, and BlockReduce bugs when - operating on non-primitive types for older architectures SM1x. -- SHFL-based scans and reductions produced incorrect results for multi-word - types (size > 4B) on Linux. -- For `cub::WarpScan`-based scans, not all threads in the first warp were - entering the prefix callback functor. -- `cub::DeviceRadixSort` had a race condition with key-value pairs for pre-SM35 - architectures. -- `cub::DeviceRadixSor` bitfield-extract behavior with long keys on 64-bit - Linux was incorrect. -- `cub::BlockDiscontinuity` failed to compile for types other than - `int32_t`/`uint32_t`. -- CUDA Dynamic Parallelism (CDP, e.g. device-callable) versions of device-wide - methods now report the same temporary storage allocation size requirement as - their host-callable counterparts. - -# CUB 1.0.2 - -## Summary - -CUB 1.0.2 is a minor release. - -## Bug Fixes - -- Corrections to code snippet examples for `cub::BlockLoad`, `cub::BlockStore`, - and `cub::BlockDiscontinuity`. -- Cleaned up unnecessary/missing header includes. - You can now safely include a specific .cuh (instead of `cub.cuh`). -- Bug/compilation fixes for `cub::BlockHistogram`. - -# CUB 1.0.1 - -## Summary - -CUB 1.0.1 adds `cub::DeviceRadixSort` and `cub::DeviceScan`. -Numerous other performance and correctness fixes and included. - -## Breaking Changes - -- New collective interface idiom (specialize/construct/invoke). - -## New Features - -- `cub::DeviceRadixSort`. - Implements short-circuiting for homogenous digit passes. -- `cub::DeviceScan`. - Implements single-pass "adaptive-lookback" strategy. - -## Other Enhancements - -- Significantly improved documentation (with example code snippets). 
-- More extensive regression test suit for aggressively testing collective - variants. -- Allow non-trially-constructed types (previously unions had prevented aliasing - temporary storage of those types). -- Improved support for SM3x SHFL (collective ops now use SHFL for types larger - than 32 bits). -- Better code generation for 64-bit addressing within - `cub::BlockLoad`/`cub::BlockStore`. -- `cub::DeviceHistogram` now supports histograms of arbitrary bins. -- Updates to accommodate CUDA 5.5 dynamic parallelism. - -## Bug Fixes - -- Workarounds for SM10 codegen issues in uncommonly-used - `cub::WarpScan`/`cub::WarpReduce` specializations. - -# CUB 0.9.4 - -## Summary - -CUB 0.9.3 is a minor release. - -## Enhancements - -- Various documentation updates and corrections. - -## Bug Fixes - -- Fixed compilation errors for SM1x. -- Fixed compilation errors for some WarpScan entrypoints on SM3x and up. - -# CUB 0.9.3 - -## Summary - -CUB 0.9.3 adds histogram algorithms and work management utility descriptors. - -## New Features - -- `cub::DevicHistogram256`. -- `cub::BlockHistogram256`. -- `cub::BlockScan` algorithm variant `BLOCK_SCAN_RAKING_MEMOIZE`, which - trades more register consumption for less shared memory I/O. -- `cub::GridQueue`, `cub::GridEvenShare`, work management utility descriptors. - -## Other Enhancements - -- Updates to `cub::BlockRadixRank` to use `cub::BlockScan`, which improves - performance on SM3x by using SHFL. -- Allow types other than builtin types to be used in `cub::WarpScan::*Sum` - methods if they only have `operator+` overloaded. - Previously they also required to support assignment from `int(0)`. -- Update `cub::BlockReduce`'s `BLOCK_REDUCE_WARP_REDUCTIONS` algorithm to work - even when block size is not an even multiple of warp size. -- Refactoring of `cub::DeviceAllocator` interface and - `cub::CachingDeviceAllocator` implementation. - -# CUB 0.9.2 - -## Summary - -CUB 0.9.2 adds `cub::WarpReduce`. - -## New Features - -- `cub::WarpReduce`, which uses the SHFL instruction when applicable. - `cub::BlockReduce` now uses this `cub::WarpReduce` instead of implementing - its own. - -## Enhancements - -- Documentation updates and corrections. - -## Bug Fixes - -- Fixes for 64-bit Linux compilation warnings and errors. - -# CUB 0.9.1 - -## Summary - -CUB 0.9.1 is a minor release. - -## Bug Fixes - -- Fix for ambiguity in `cub::BlockScan::Reduce` between generic reduction and - summation. - Summation entrypoints are now called `::Sum()`, similar to the - convention in `cub::BlockScan`. -- Small edits to documentation and download tracking. - -# CUB 0.9.0 - -## Summary - -Initial preview release. -CUB is the first durable, high-performance library of cooperative block-level, - warp-level, and thread-level primitives for CUDA kernel programming. - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h b/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h deleted file mode 100644 index a51eaed4351e52aaf3569c986cc5153640dd15d6..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/set_operations.h +++ /dev/null @@ -1,2963 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file set_operations.h - * \brief Set theoretic operations for sorted ranges - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup set_operations Set Operations - * \ingroup algorithms - * \{ - */ - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in ascending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... 
- * int A1[6] = {0, 1, 3, 4, 5, 6};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result);
- * // result is now {0, 4, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                InputIterator1 first1,
-                                InputIterator1 last1,
-                                InputIterator2 first2,
-                                InputIterator2 last2,
-                                OutputIterator result);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * the [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[6] = {0, 1, 3, 4, 5, 6, 9}; - * int A2[5] = {1, 3, 5, 7, 9}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(A1, A1 + 6, A2, A2 + 5, result); - * // result is now {0, 4, 6} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in descending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... 
- * int A1[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result, thrust::greater()); - * // result is now {6, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in descending order. - * - * \code - * #include - * #include - * ... 
- * int A1[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(A1, A1 + 6, A2, A2 + 5, result, thrust::greater()); - * // result is now {6, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares objects using - * \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. 
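 *
 * A minimal sketch of this overload driven from device memory (the vector
 * names d_a, d_b, and d_result below are illustrative, not part of this
 * interface) might look like:
 *
 * \code
 * #include <thrust/set_operations.h>
 * #include <thrust/device_vector.h>
 * #include <thrust/execution_policy.h>
 * ...
 * int a[6] = {1, 3, 5, 7, 9, 11};       // sorted ascending
 * int b[7] = {1, 1, 2, 3, 5, 8, 13};    // sorted ascending
 *
 * thrust::device_vector<int> d_a(a, a + 6);
 * thrust::device_vector<int> d_b(b, b + 7);
 * thrust::device_vector<int> d_result(6);
 *
 * // compute the intersection on the device; the returned iterator marks the
 * // end of the output range
 * thrust::device_vector<int>::iterator d_end =
 *   thrust::set_intersection(thrust::device, d_a.begin(), d_a.end(),
 *                            d_b.begin(), d_b.end(), d_result.begin());
 * // d_result now begins with {1, 3, 5}
 * \endcode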
- * - * The following code snippet demonstrates how to use \p set_intersection to compute the - * set intersection of two sets of integers sorted in ascending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {1, 3, 5, 7, 9, 11}; - * int A2[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int result[7]; - * - * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result); - * // result is now {1, 3, 5} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_intersection(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares objects using - * \c operator<. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. 
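 *
 * To make the duplicate rule concrete, here is a small sketch (the array names
 * are chosen purely for illustration): a value that appears twice in the first
 * range and three times in the second appears min(2,3) = 2 times in the output:
 *
 * \code
 * #include <thrust/set_operations.h>
 * ...
 * int a[5] = {1, 2, 2, 3, 4};       // 2 appears twice
 * int b[6] = {2, 2, 2, 4, 5, 6};    // 2 appears three times
 *
 * int result[3];
 *
 * int *result_end = thrust::set_intersection(a, a + 5, b, b + 6, result);
 * // result is now {2, 2, 4}
 * \endcode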
- * - * The following code snippet demonstrates how to use \p set_intersection to compute the - * set intersection of two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[6] = {1, 3, 5, 7, 9, 11}; - * int A2[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int result[7]; - * - * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result); - * // result is now {1, 3, 5} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_intersection(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam OutputIterator is a model of Output Iterator. - * - * The following code snippet demonstrates how to use \p set_intersection to compute - * the set intersection of sets of integers sorted in descending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {11, 9, 7, 5, 3, 1}; - * int A2[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result, thrust::greater()); - * // result is now {5, 3, 1} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_intersection(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * The following code snippet demonstrates how to use \p set_intersection to compute - * the set intersection of sets of integers sorted in descending order. - * - * \code - * #include - * ... - * int A1[6] = {11, 9, 7, 5, 3, 1}; - * int A2[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result, thrust::greater()); - * // result is now {5, 3, 1} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_intersection(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in ascending order using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A2[5] = {1, 1, 2, 5, 8}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result); - * // result = {0, 4, 5, 6, 7, 8} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using \c operator<. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. 
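 *
 * As a small illustration of the |m - n| rule (array names chosen only for
 * this sketch), a value appearing three times in the first range and once in
 * the second contributes two copies to the output:
 *
 * \code
 * #include <thrust/set_operations.h>
 * ...
 * int a[5] = {1, 2, 2, 2, 3};    // 2 appears three times
 * int b[4] = {2, 3, 3, 4};       // 2 appears once, 3 appears twice
 *
 * int result[5];
 *
 * int *result_end = thrust::set_symmetric_difference(a, a + 5, b, b + 4, result);
 * // result is now {1, 2, 2, 3, 4}
 * \endcode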
- * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A2[5] = {1, 1, 2, 5, 8}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(A1, A1 + 6, A2, A2 + 5, result); - * // result = {0, 4, 5, 6, 7, 8} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_symmetric_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. 
- * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in descending order using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {7, 6, 4, 2, 2, 1, 0}; - * int A2[5] = {8, 5, 2, 1, 1}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result); - * // result = {8, 7, 6, 5, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. 
- * - * This version of \p set_union compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in descending order. - * - * \code - * #include - * ... - * int A1[6] = {7, 6, 4, 2, 2, 1, 0}; - * int A2[5] = {8, 5, 2, 1, 1}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(A1, A1 + 6, A2, A2 + 5, result); - * // result = {8, 7, 6, 5, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_symmetric_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_union constructs a sorted range that is the union of the sorted ranges - * [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_union performs the "union" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1), [first2, last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * This version of \p set_union compares elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. 
- * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_union to compute the union of - * two sets of integers sorted in ascending order using the \p thrust::host execution policy for - * parallelization: - * - * \code - * #include - * #include - * ... - * int A1[7] = {0, 2, 4, 6, 8, 10, 12}; - * int A2[5] = {1, 3, 5, 7, 9}; - * - * int result[11]; - * - * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result); - * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_union.html - * \see \p merge - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_union(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_union constructs a sorted range that is the union of the sorted ranges - * [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_union performs the "union" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1), [first2, last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * This version of \p set_union compares elements using \c operator<. - * - * \param first1 The beginning of the first input range. 
- * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_union to compute the union of - * two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[7] = {0, 2, 4, 6, 8, 10, 12}; - * int A2[5] = {1, 3, 5, 7, 9}; - * - * int result[11]; - * - * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result); - * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_union.html - * \see \p merge - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_union(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_union constructs a sorted range that is the union of the sorted ranges - * [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_union performs the "union" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1), [first2, last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * This version of \p set_union compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. 
- * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_union to compute the union of - * two sets of integers sorted in ascending order using the \p thrust::host execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * ... - * int A1[7] = {12, 10, 8, 6, 4, 2, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[11]; - * - * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater()); - * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_union.html - * \see \p merge - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_union(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_union constructs a sorted range that is the union of the sorted ranges - * [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_union performs the "union" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1), [first2, last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * This version of \p set_union compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. 
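 *
 * As a brief sketch of the duplicate rule for this overload (names and values
 * chosen only for illustration), all \c m copies of an element come from the
 * first range and then max(n - m, 0) additional copies come from the second:
 *
 * \code
 * #include <thrust/set_operations.h>
 * #include <thrust/functional.h>
 * ...
 * int a[4] = {1, 3, 3, 5};          // 3 appears twice
 * int b[5] = {3, 3, 3, 4, 5};       // 3 appears three times
 *
 * int result[6];
 *
 * int *result_end = thrust::set_union(a, a + 4, b, b + 5, result, thrust::less<int>());
 * // result is now {1, 3, 3, 3, 4, 5}
 * \endcode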
- * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_union to compute the union of - * two sets of integers sorted in ascending order. - * - * \code - * #include - * #include - * ... - * int A1[7] = {12, 10, 8, 6, 4, 2, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[11]; - * - * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result, thrust::greater()); - * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_union.html - * \see \p merge - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_union(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_difference_by_key performs a key-value difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_difference_by_key performs the "difference" operation from set - * theory: the keys output range contains a copy of every element that is contained in - * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [keys_first1, keys_last1) range shall be copied to the output range. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_difference_by_key compares key elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. 
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from the
- * [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- *         \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- *         \p InputIterator1's \c value_type is a model of LessThan Comparable,
- *         the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- *         \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- *         \p InputIterator2's \c value_type is a model of LessThan Comparable,
- *         the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- *         and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- *         and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- *         and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                          InputIterator1 keys_first1,
-                          InputIterator1 keys_last1,
-                          InputIterator2 keys_first2,
-                          InputIterator2 keys_last2,
-                          InputIterator3 values_first1,
-                          InputIterator4 values_first2,
-                          OutputIterator1 keys_result,
-                          OutputIterator2 values_result);
-
-
-/*!
\p set_difference_by_key performs a key-value difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_difference_by_key performs the "difference" operation from set - * theory: the keys output range contains a copy of every element that is contained in - * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [keys_first1, keys_last1) range shall be copied to the output range. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_difference_by_key compares key elements using \c operator<. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. 
- * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_difference_by_key to compute the - * set difference of two sets of integers sorted in ascending order with their values. - * - * \code - * #include - * ... - * int A_keys[6] = {0, 1, 3, 4, 5, 6, 9}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {1, 3, 5, 7, 9}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[3]; - * int vals_result[3]; - * - * thrust::pair end = thrust::set_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {0, 4, 6} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_difference_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_difference_by_key performs a key-value difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_difference_by_key performs the "difference" operation from set - * theory: the keys output range contains a copy of every element that is contained in - * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [keys_first1, keys_last1) range shall be copied to the output range. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_difference_by_key compares key elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. 
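The embedded \p set_difference_by_key snippets suffer from the same stripped angle brackets, so here is a hedged, self-contained sketch of the plain overload (no execution policy, default \c operator<). The key and value arrays follow the documentation's ascending example, sized to match their initializers; the main function and printing loop are illustrative additions, not part of the original header.

\code
// Illustrative reconstruction: key-value set difference with the default operator<.
#include <thrust/set_operations.h>
#include <thrust/pair.h>
#include <iostream>

int main()
{
  int A_keys[7] = {0, 1, 3, 4, 5, 6, 9};
  int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};

  int B_keys[5] = {1, 3, 5, 7, 9};
  int B_vals[5] = {1, 1, 1, 1, 1};

  int keys_result[3];
  int vals_result[3];

  thrust::pair<int*, int*> end =
      thrust::set_difference_by_key(A_keys, A_keys + 7,
                                    B_keys, B_keys + 5,
                                    A_vals, B_vals,
                                    keys_result, vals_result);

  // keys_result is {0, 4, 6}; vals_result is {0, 0, 0}.
  for (int i = 0; keys_result + i != end.first; ++i)
    std::cout << keys_result[i] << ':' << vals_result[i] << ' ';
  std::cout << '\n';
  return 0;
}
\endcode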
- * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_difference_by_key to compute the - * set difference of two sets of integers sorted in descending order with their values using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * int A_keys[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {9, 7, 5, 3, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[3]; - * int vals_result[3]; - * - * thrust::pair end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater()); - * // keys_result is now {0, 4, 6} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_difference_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_difference_by_key performs a key-value difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). 
Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_difference_by_key performs the "difference" operation from set - * theory: the keys output range contains a copy of every element that is contained in - * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [keys_first1, keys_last1) range shall be copied to the output range. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_difference_by_key compares key elements using a function object \p comp. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. 
- * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_difference_by_key to compute the - * set difference of two sets of integers sorted in descending order with their values. - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {9, 7, 5, 3, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[3]; - * int vals_result[3]; - * - * thrust::pair end = thrust::set_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater()); - * // keys_result is now {0, 4, 6} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_difference_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_intersection_by_key performs a key-value intersection operation from set theory. - * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set - * theory: the keys output range contains a copy of every element that is contained in both - * [keys_first1, keys_last1) [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if an element appears \c m times in [keys_first1, keys_last1) - * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it - * appears min(m,n) times in the keys output range. - * \p set_intersection_by_key is stable, meaning both that elements are copied from the first - * input range rather than the second, and that the relative order of elements in the output range - * is the same as the first input range. - * - * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range, - * the corresponding value element is copied from [values_first1, values_last1) to the values - * output range. - * - * This version of \p set_intersection_by_key compares objects using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. 
- * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no - * \c values_first2 parameter because elements from the second input range are never copied to the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the - * set intersection of two sets of integers sorted in ascending order with their values using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {1, 3, 5, 7, 9, 11}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0}; - * - * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int keys_result[7]; - * int vals_result[7]; - * - * thrust::pair end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result); - * - * // keys_result is now {1, 3, 5} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_difference_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_intersection_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_intersection_by_key performs a key-value intersection operation from set theory. - * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. 
- * - * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set - * theory: the keys output range contains a copy of every element that is contained in both - * [keys_first1, keys_last1) [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if an element appears \c m times in [keys_first1, keys_last1) - * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it - * appears min(m,n) times in the keys output range. - * \p set_intersection_by_key is stable, meaning both that elements are copied from the first - * input range rather than the second, and that the relative order of elements in the output range - * is the same as the first input range. - * - * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range, - * the corresponding value element is copied from [values_first1, values_last1) to the values - * output range. - * - * This version of \p set_intersection_by_key compares objects using \c operator<. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no - * \c values_first2 parameter because elements from the second input range are never copied to the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. 
- * - * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the - * set intersection of two sets of integers sorted in ascending order with their values. - * - * \code - * #include - * ... - * int A_keys[6] = {1, 3, 5, 7, 9, 11}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0}; - * - * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int keys_result[7]; - * int vals_result[7]; - * - * thrust::pair end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result); - * - * // keys_result is now {1, 3, 5} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_difference_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_intersection_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_intersection_by_key performs a key-value intersection operation from set theory. - * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set - * theory: the keys output range contains a copy of every element that is contained in both - * [keys_first1, keys_last1) [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if an element appears \c m times in [keys_first1, keys_last1) - * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it - * appears min(m,n) times in the keys output range. - * \p set_intersection_by_key is stable, meaning both that elements are copied from the first - * input range rather than the second, and that the relative order of elements in the output range - * is the same as the first input range. - * - * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range, - * the corresponding value element is copied from [values_first1, values_last1) to the values - * output range. - * - * This version of \p set_intersection_by_key compares objects using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. 
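For \p set_intersection_by_key, a hedged standalone sketch of the \c operator< overload follows. The data mirrors the ascending example above; the absence of a \c values_first2 argument reflects the note that values are only ever taken from the first range. The surrounding program scaffolding is an illustrative addition, not part of the original header.

\code
// Illustrative reconstruction: key-value set intersection; values come from the first range only.
#include <thrust/set_operations.h>
#include <thrust/pair.h>
#include <iostream>

int main()
{
  int A_keys[6] = {1, 3, 5, 7, 9, 11};
  int A_vals[6] = {0, 0, 0, 0, 0, 0};

  int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};

  int keys_result[6];
  int vals_result[6];

  thrust::pair<int*, int*> end =
      thrust::set_intersection_by_key(A_keys, A_keys + 6,
                                      B_keys, B_keys + 7,
                                      A_vals,
                                      keys_result, vals_result);

  // keys_result is {1, 3, 5}; vals_result is {0, 0, 0}.
  for (int i = 0; keys_result + i != end.first; ++i)
    std::cout << keys_result[i] << ':' << vals_result[i] << ' ';
  std::cout << '\n';
  return 0;
}
\endcode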
- * - * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no - * \c values_first2 parameter because elements from the second input range are never copied to the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the - * set intersection of two sets of integers sorted in descending order with their values using the - * \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * int A_keys[6] = {11, 9, 7, 5, 3, 1}; - * int A_vals[6] = { 0, 0, 0, 0, 0, 0}; - * - * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int keys_result[7]; - * int vals_result[7]; - * - * thrust::pair end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater()); - * - * // keys_result is now {5, 3, 1} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_difference_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_intersection_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_intersection_by_key performs a key-value intersection operation from set theory. - * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. 
- * - * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set - * theory: the keys output range contains a copy of every element that is contained in both - * [keys_first1, keys_last1) [keys_first2, keys_last2). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if an element appears \c m times in [keys_first1, keys_last1) - * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it - * appears min(m,n) times in the keys output range. - * \p set_intersection_by_key is stable, meaning both that elements are copied from the first - * input range rather than the second, and that the relative order of elements in the output range - * is the same as the first input range. - * - * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range, - * the corresponding value element is copied from [values_first1, values_last1) to the values - * output range. - * - * This version of \p set_intersection_by_key compares objects using a function object \p comp. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no - * \c values_first2 parameter because elements from the second input range are never copied to the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. 
- * - * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the - * set intersection of two sets of integers sorted in descending order with their values. - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {11, 9, 7, 5, 3, 1}; - * int A_vals[6] = { 0, 0, 0, 0, 0, 0}; - * - * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int keys_result[7]; - * int vals_result[7]; - * - * thrust::pair end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater()); - * - * // keys_result is now {5, 3, 1} - * // vals_result is now {0, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_difference_by_key - * \see \p set_symmetric_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_intersection_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the symmetric difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last1), and a copy of - * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are - * equivalent to each other and [keys_first2, keys_last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and - * the last n - m of these elements from [keys_first2, keys_last2) if m < n. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. 
- * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in ascending order with their values using the - * \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {1, 1, 2, 5, 8}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[6]; - * int vals_result[6]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {0, 4, 5, 6, 7, 8} - * // vals_result is now {0, 0, 1, 0, 0, 1} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_symmetric_difference_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the symmetric difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). 
Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last1), and a copy of - * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are - * equivalent to each other and [keys_first2, keys_last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and - * the last n - m of these elements from [keys_first2, keys_last2) if m < n. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. 
- * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in ascending order with their values. - * - * \code - * #include - * ... - * int A_keys[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {1, 1, 2, 5, 8}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[6]; - * int vals_result[6]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {0, 4, 5, 6, 7, 8} - * // vals_result is now {0, 0, 1, 0, 0, 1} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_symmetric_difference_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the symmetric difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last1), and a copy of - * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are - * equivalent to each other and [keys_first2, keys_last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and - * the last n - m of these elements from [keys_first2, keys_last2) if m < n. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. 
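Because the symmetric-difference snippets above are hard to reconstruct exactly (their array bounds do not match their initializers), the following hedged sketch uses small illustrative inputs that are not taken from the header. It shows the rule stated above: keys that survive carry the value from whichever input range supplied them.

\code
// Illustrative example: symmetric difference of keys, each value drawn from the key's source range.
#include <thrust/set_operations.h>
#include <thrust/pair.h>
#include <iostream>

int main()
{
  int A_keys[5] = {0, 2, 4, 6, 8};
  int A_vals[5] = {0, 0, 0, 0, 0};

  int B_keys[4] = {0, 3, 6, 9};
  int B_vals[4] = {1, 1, 1, 1};

  int keys_result[5];
  int vals_result[5];

  thrust::pair<int*, int*> end =
      thrust::set_symmetric_difference_by_key(A_keys, A_keys + 5,
                                              B_keys, B_keys + 4,
                                              A_vals, B_vals,
                                              keys_result, vals_result);

  // keys_result is {2, 3, 4, 8, 9}; vals_result is {0, 1, 0, 0, 1}.
  for (int i = 0; keys_result + i != end.first; ++i)
    std::cout << keys_result[i] << ':' << vals_result[i] << ' ';
  std::cout << '\n';
  return 0;
}
\endcode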
- * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in descending order with their values using the - * \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... 
- * int A_keys[6] = {7, 6, 4, 2, 2, 1, 0}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {8, 5, 2, 1, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[6]; - * int vals_result[6]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {8, 7, 6, 5, 4, 0} - * // vals_result is now {1, 0, 0, 1, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_symmetric_difference_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory. - * \p set_difference_by_key constructs a sorted range that is the symmetric difference of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last1), and a copy of - * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are - * equivalent to each other and [keys_first2, keys_last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and - * the last n - m of these elements from [keys_first2, keys_last2) if m < n. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. 
- * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in descending order with their values. - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {7, 6, 4, 2, 2, 1, 0}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {8, 5, 2, 1, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[6]; - * int vals_result[6]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {8, 7, 6, 5, 4, 0} - * // vals_result is now {1, 0, 0, 1, 0, 0} - * \endcode - * - * \see \p set_union_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_symmetric_difference_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_union_by_key performs a key-value union operation from set theory. - * \p set_union_by_key constructs a sorted range that is the union of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. 
- * - * In the simplest case, \p set_union_by_key performs the "union" operation from set theory: - * the output range contains a copy of every element that is contained in - * [keys_first1, keys_last1), [keys_first2, keys_last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_union_by_key compares key elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. 
- * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_union_by_key to compute the - * union of two sets of integers sorted in ascending order with their values using the - * \p thrust::host execution policy for parallelization: - * - * \code - * #include <thrust/set_operations.h> - * #include <thrust/execution_policy.h> - * ... - * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12}; - * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {1, 3, 5, 7, 9}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[12]; - * int vals_result[12]; - * - * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12} - * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0} - * \endcode - * - * \see \p set_symmetric_difference_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2> -__host__ __device__ - thrust::pair<OutputIterator1, OutputIterator2> - set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_union_by_key performs a key-value union operation from set theory. - * \p set_union_by_key constructs a sorted range that is the union of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_union_by_key performs the "union" operation from set theory: - * the output range contains a copy of every element that is contained in - * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * Each time a key element from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_union_by_key compares key elements using \c operator<. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the second input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in ascending order with their values. - * - * \code - * #include - * ... - * int A_keys[6] = {0, 2, 4, 6, 8, 10, 12}; - * int A_vals[6] = {0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {1, 3, 5, 7, 9}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[11]; - * int vals_result[11]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result); - * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12} - * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0} - * \endcode - * - * \see \p set_symmetric_difference_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_union_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -/*! \p set_union_by_key performs a key-value union operation from set theory. - * \p set_union_by_key constructs a sorted range that is the union of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_union_by_key performs the "union" operation from set theory: - * the output range contains a copy of every element that is contained in - * [keys_first1, keys_last1), [keys_first2, keys_last1), or both. 
The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_union_by_key compares key elements using a function object \c comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. - * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. 
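To make the duplicate-handling rules described above concrete, here is a small Python sketch (illustrative only, not part of this Thrust header; the function names are ad hoc). It reproduces the key-counting behaviour for the default ascending ordering and ignores the associated values: the symmetric difference keeps |m - n| equivalent copies, while the union keeps m copies from the first range plus max(n - m, 0) more from the second.

from collections import Counter

def sym_diff_keys(a, b):
    # |m - n| equivalent copies survive in the output
    ca, cb = Counter(a), Counter(b)
    out = []
    for k in sorted(set(ca) | set(cb)):
        out += [k] * abs(ca[k] - cb[k])
    return out

def union_keys(a, b):
    # all m copies from the first range, then max(n - m, 0) from the second
    ca, cb = Counter(a), Counter(b)
    out = []
    for k in sorted(set(ca) | set(cb)):
        out += [k] * max(ca[k], cb[k])
    return out

print(sym_diff_keys([0, 2, 2, 4], [1, 2, 3]))  # [0, 1, 2, 3, 4]
print(union_keys([0, 2, 2, 4], [1, 2, 3]))     # [0, 1, 2, 2, 3, 4]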
- * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in descending order with their values using the - * \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * int A_keys[6] = {12, 10, 8, 6, 4, 2, 0}; - * int A_vals[6] = { 0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {9, 7, 5, 3, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[11]; - * int vals_result[11]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater()); - * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0} - * // vals_result is now { 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0} - * \endcode - * - * \see \p set_symmetric_difference_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template -__host__ __device__ - thrust::pair - set_union_by_key(const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! \p set_union_by_key performs a key-value union operation from set theory. - * \p set_union_by_key constructs a sorted range that is the union of the sorted - * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated - * with each element from the input and output key ranges is a value element. The associated input - * value ranges need not be sorted. - * - * In the simplest case, \p set_union_by_key performs the "union" operation from set theory: - * the output range contains a copy of every element that is contained in - * [keys_first1, keys_last1), [keys_first2, keys_last1), or both. The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [keys_first1, keys_last1) contains \c m elements - * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n - * elements that are equivalent to them, then all \c m elements from the first - * range shall be copied to the output range, in order, and then max(n - m, 0) - * elements from the second range shall be copied to the output, in order. - * - * Each time a key element is copied from [keys_first1, keys_last1) or - * [keys_first2, keys_last2) is copied to the keys output range, the - * corresponding value element is copied from the corresponding values input range (beginning at - * \p values_first1 or \p values_first2) to the values output range. - * - * This version of \p set_union_by_key compares key elements using a function object \c comp. - * - * \param keys_first1 The beginning of the first input range of keys. - * \param keys_last1 The end of the first input range of keys. - * \param keys_first2 The beginning of the second input range of keys. - * \param keys_last2 The end of the second input range of keys. - * \param values_first1 The beginning of the first input range of values. - * \param values_first2 The beginning of the first input range of values. - * \param keys_result The beginning of the output range of keys. - * \param values_result The beginning of the output range of values. 
- * \param comp Comparison operator. - * \return A \p pair \c p such that p.first is the end of the output range of keys, - * and such that p.second is the end of the output range of values. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator3 is a model of Input Iterator, - * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam InputIterator4 is a model of Input Iterator, - * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types. - * \tparam OutputIterator1 is a model of Output Iterator. - * \tparam OutputIterator2 is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp. - * \pre The resulting ranges shall not overlap with any input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the - * symmetric difference of two sets of integers sorted in descending order with their values. - * - * \code - * #include - * #include - * ... - * int A_keys[6] = {12, 10, 8, 6, 4, 2, 0}; - * int A_vals[6] = { 0, 0, 0, 0, 0, 0, 0}; - * - * int B_keys[5] = {9, 7, 5, 3, 1}; - * int B_vals[5] = {1, 1, 1, 1, 1}; - * - * int keys_result[11]; - * int vals_result[11]; - * - * thrust::pair end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater()); - * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0} - * // vals_result is now { 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0} - * \endcode - * - * \see \p set_symmetric_difference_by_key - * \see \p set_intersection_by_key - * \see \p set_difference_by_key - * \see \p sort_by_key - * \see \p is_sorted - */ -template - thrust::pair - set_union_by_key(InputIterator1 keys_first1, - InputIterator1 keys_last1, - InputIterator2 keys_first2, - InputIterator2 keys_last2, - InputIterator3 values_first1, - InputIterator4 values_first2, - OutputIterator1 keys_result, - OutputIterator2 values_result, - StrictWeakCompare comp); - - -/*! 
\} // end set_operations - */ - - -} // end thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h deleted file mode 100644 index 8d2a1b3850dea55c3c8440aa7e22fdb6d002d151..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/transform_reduce.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special transform_reduce functions - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h deleted file mode 100644 index 098e0f4fbad4001632ed02ee9e9b39aa17e54ea0..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/gather.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
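The deleted sequential transform_reduce.h above notes that this backend "has no special transform_reduce functions": the generic implementation simply fuses a transform with a reduction, so no specialization is needed. As a rough reminder of what that generic path computes, here is a minimal Python sketch (illustrative only, not Thrust API):

from functools import reduce

def transform_reduce(xs, unary_op, init, binary_op):
    # Fold binary_op over unary_op(x) for every x, starting from init --
    # exactly the composition the non-specialized backend falls back to.
    return reduce(binary_op, (unary_op(x) for x in xs), init)

print(transform_reduce([1, -2, 3], abs, 0, lambda a, b: a + b))  # 6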
- */ - -#pragma once - -#include - -// this system inherits gather -#include - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py deleted file mode 100644 index f9a6d1c26bc5b77c2ece7f66511391a0f82dd1f6..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/esrgan/upsample.py +++ /dev/null @@ -1,84 +0,0 @@ -import cv2 -import glob -import os -import sys -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url -import numpy as np -import torch -from gfpgan import GFPGANer -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact -from basicsr.utils import imwrite, img2tensor, tensor2img -from torchvision.transforms.functional import normalize -from basicsr.utils.registry import ARCH_REGISTRY - -def load_sr(model_path, device, face): - if not face=='codeformer': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) #alter to match dims as needed - netscale = 4 - model_path = os.path.normpath(model_path) - if not os.path.isfile(model_path): - model_path = load_file_from_url( - url='https://github.com/GucciFlipFlops1917/wav2lip-hq-updated-ESRGAN/releases/download/v0.0.1/4x_BigFace_v3_Clear.pth', - model_dir='weights', progress=True, file_name=None) - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=None, - model=model, - tile=0, - tile_pad=10, - pre_pad=0, - half=True, - gpu_id=0) - if face==None: - run_params=upsampler - else: - gfp = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - run_params=gfp - else: - run_params = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9, - connect_list=['32', '64', '128', '256']).to(device) - ckpt_path = load_file_from_url(url='https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth', - model_dir='weights/CodeFormer', progress=True, file_name=None) - checkpoint = torch.load(ckpt_path)['params_ema'] - run_params.load_state_dict(checkpoint) - run_params.eval() - return run_params - - -def upscale(image, face, properties): - try: - if face==1: ## GFP-GAN - _, _, output = properties.enhance(image, has_aligned=False, only_center_face=False, paste_back=True) - elif face==2: ## CODEFORMER - net = properties[0] - device = properties[1] - w = properties[2] - image = cv2.resize(image, (512, 512), interpolation=cv2.INTER_LINEAR) - cropped_face_t = img2tensor(image / 255., bgr2rgb=True, float32=True) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(device) - try: - with torch.no_grad(): - cropped_face_t = net(cropped_face_t, w=w, adain=True)[0] - restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1)) - del cropped_face_t - torch.cuda.empty_cache() - except Exception as error: - print(f'\tFailed inference for CodeFormer: {error}') - restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1)) - output = restored_face.astype('uint8') - elif face==0: ## ESRGAN - img = image.astype(np.float32) / 255. 
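- # NOTE: `img` computed on the previous line is never used; `properties.enhance()` below is passed the original `image` unchanged.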
- output, _ = properties.enhance(image, outscale=4) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - return output diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py deleted file mode 100644 index ae5d6919169ec497f0f0815184f5db8ba9108fbd..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/wav2lip.py +++ /dev/null @@ -1,184 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F -import math - -from .conv import Conv2dTranspose, Conv2d, nonorm_Conv2d - -class Wav2Lip(nn.Module): - def __init__(self): - super(Wav2Lip, self).__init__() - - self.face_encoder_blocks = nn.ModuleList([ - nn.Sequential(Conv2d(6, 16, kernel_size=7, stride=1, padding=3)), # 96,96 - - nn.Sequential(Conv2d(16, 32, kernel_size=3, stride=2, padding=1), # 48,48 - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True)), - - nn.Sequential(Conv2d(32, 64, kernel_size=3, stride=2, padding=1), # 24,24 - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True)), - - nn.Sequential(Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 12,12 - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True)), - - nn.Sequential(Conv2d(128, 256, kernel_size=3, stride=2, padding=1), # 6,6 - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True)), - - nn.Sequential(Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 3,3 - Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),), - - nn.Sequential(Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1 - Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),]) - - self.audio_encoder = nn.Sequential( - Conv2d(1, 32, kernel_size=3, stride=1, padding=1), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(64, 128, kernel_size=3, stride=3, padding=1), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(256, 512, kernel_size=3, stride=1, padding=0), - Conv2d(512, 512, kernel_size=1, stride=1, padding=0),) - - self.face_decoder_blocks = nn.ModuleList([ - nn.Sequential(Conv2d(512, 512, kernel_size=1, stride=1, padding=0),), - - nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=1, padding=0), # 3,3 - Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),), - - nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1), - Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(512, 512, kernel_size=3, stride=1, 
padding=1, residual=True),), # 6, 6 - - nn.Sequential(Conv2dTranspose(768, 384, kernel_size=3, stride=2, padding=1, output_padding=1), - Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True),), # 12, 12 - - nn.Sequential(Conv2dTranspose(512, 256, kernel_size=3, stride=2, padding=1, output_padding=1), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),), # 24, 24 - - nn.Sequential(Conv2dTranspose(320, 128, kernel_size=3, stride=2, padding=1, output_padding=1), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),), # 48, 48 - - nn.Sequential(Conv2dTranspose(160, 64, kernel_size=3, stride=2, padding=1, output_padding=1), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),),]) # 96,96 - - self.output_block = nn.Sequential(Conv2d(80, 32, kernel_size=3, stride=1, padding=1), - nn.Conv2d(32, 3, kernel_size=1, stride=1, padding=0), - nn.Sigmoid()) - - def forward(self, audio_sequences, face_sequences): - # audio_sequences = (B, T, 1, 80, 16) - B = audio_sequences.size(0) - - input_dim_size = len(face_sequences.size()) - if input_dim_size > 4: - audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0) - face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0) - - audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1 - - feats = [] - x = face_sequences - for f in self.face_encoder_blocks: - x = f(x) - feats.append(x) - - x = audio_embedding - for f in self.face_decoder_blocks: - x = f(x) - try: - x = torch.cat((x, feats[-1]), dim=1) - except Exception as e: - print(x.size()) - print(feats[-1].size()) - raise e - - feats.pop() - - x = self.output_block(x) - - if input_dim_size > 4: - x = torch.split(x, B, dim=0) # [(B, C, H, W)] - outputs = torch.stack(x, dim=2) # (B, C, T, H, W) - - else: - outputs = x - - return outputs - -class Wav2Lip_disc_qual(nn.Module): - def __init__(self): - super(Wav2Lip_disc_qual, self).__init__() - - self.face_encoder_blocks = nn.ModuleList([ - nn.Sequential(nonorm_Conv2d(3, 32, kernel_size=7, stride=1, padding=3)), # 48,96 - - nn.Sequential(nonorm_Conv2d(32, 64, kernel_size=5, stride=(1, 2), padding=2), # 48,48 - nonorm_Conv2d(64, 64, kernel_size=5, stride=1, padding=2)), - - nn.Sequential(nonorm_Conv2d(64, 128, kernel_size=5, stride=2, padding=2), # 24,24 - nonorm_Conv2d(128, 128, kernel_size=5, stride=1, padding=2)), - - nn.Sequential(nonorm_Conv2d(128, 256, kernel_size=5, stride=2, padding=2), # 12,12 - nonorm_Conv2d(256, 256, kernel_size=5, stride=1, padding=2)), - - nn.Sequential(nonorm_Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 6,6 - nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1)), - - nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=2, padding=1), # 3,3 - nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1),), - - nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1 - nonorm_Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),]) - - self.binary_pred = nn.Sequential(nn.Conv2d(512, 1, kernel_size=1, stride=1, padding=0), nn.Sigmoid()) - self.label_noise = .0 - - def get_lower_half(self, face_sequences): - return face_sequences[:, :, 
face_sequences.size(2)//2:] - - def to_2d(self, face_sequences): - B = face_sequences.size(0) - face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0) - return face_sequences - - def perceptual_forward(self, false_face_sequences): - false_face_sequences = self.to_2d(false_face_sequences) - false_face_sequences = self.get_lower_half(false_face_sequences) - - false_feats = false_face_sequences - for f in self.face_encoder_blocks: - false_feats = f(false_feats) - - false_pred_loss = F.binary_cross_entropy(self.binary_pred(false_feats).view(len(false_feats), -1), - torch.ones((len(false_feats), 1)).cuda()) - - return false_pred_loss - - def forward(self, face_sequences): - face_sequences = self.to_2d(face_sequences) - face_sequences = self.get_lower_half(face_sequences) - - x = face_sequences - for f in self.face_encoder_blocks: - x = f(x) - - return self.binary_pred(x).view(len(x), -1) diff --git a/spaces/matthoffner/chatbot/pages/api/home/home.tsx b/spaces/matthoffner/chatbot/pages/api/home/home.tsx deleted file mode 100644 index 884d6637c4521f2fd512da948a03ecb9a90b4122..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/pages/api/home/home.tsx +++ /dev/null @@ -1,430 +0,0 @@ -import { useEffect, useRef, useState } from 'react'; -import { useQuery } from 'react-query'; - -import { GetServerSideProps } from 'next'; -import { useTranslation } from 'next-i18next'; -import { serverSideTranslations } from 'next-i18next/serverSideTranslations'; -import Head from 'next/head'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import useErrorService from '@/services/errorService'; -import useApiService from '@/services/useApiService'; - -import { - cleanConversationHistory, - cleanSelectedConversation, -} from '@/utils/app/clean'; -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { - saveConversation, - saveConversations, - updateConversation, -} from '@/utils/app/conversation'; -import { saveFolders } from '@/utils/app/folders'; -import { savePrompts } from '@/utils/app/prompts'; -import { getSettings } from '@/utils/app/settings'; - -import { Conversation } from '@/types/chat'; -import { KeyValuePair } from '@/types/data'; -import { FolderInterface, FolderType } from '@/types/folder'; -import { OpenAIModelID, OpenAIModels, fallbackModelID } from '@/types/openai'; -import { Prompt } from '@/types/prompt'; - -import { Chat } from '@/components/Chat/Chat'; -import { Chatbar } from '@/components/Chatbar/Chatbar'; -import { Navbar } from '@/components/Mobile/Navbar'; -import Promptbar from '@/components/Promptbar'; - -import HomeContext from './home.context'; -import { HomeInitialState, initialState } from './home.state'; - -import { v4 as uuidv4 } from 'uuid'; - -interface Props { - serverSideApiKeyIsSet: boolean; - serverSidePluginKeysSet: boolean; - defaultModelId: OpenAIModelID; -} - -const Home = ({ - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - defaultModelId, -}: Props) => { - const { t } = useTranslation('chat'); - const { getModels } = useApiService(); - const { getModelsError } = useErrorService(); - const [initialRender, setInitialRender] = useState(true); - - const contextValue = useCreateReducer({ - initialState, - }); - - const { - state: { - apiKey, - lightMode, - folders, - conversations, - selectedConversation, - prompts, - temperature, - }, - dispatch, - } = contextValue; - - const stopConversationRef = useRef(false); - - const { data, error, refetch } = 
useQuery( - ['GetModels', apiKey, serverSideApiKeyIsSet], - ({ signal }) => { - - return getModels( - { - key: 'apiKey', - }, - signal, - ); - }, - { enabled: true, refetchOnMount: false }, - ); - - useEffect(() => { - if (data) dispatch({ field: 'models', value: data }); - }, [data, dispatch]); - - useEffect(() => { - dispatch({ field: 'modelError', value: getModelsError(error) }); - }, [dispatch, error, getModelsError]); - - // FETCH MODELS ---------------------------------------------- - - const handleSelectConversation = (conversation: Conversation) => { - dispatch({ - field: 'selectedConversation', - value: conversation, - }); - - saveConversation(conversation); - }; - - // FOLDER OPERATIONS -------------------------------------------- - - const handleCreateFolder = (name: string, type: FolderType) => { - const newFolder: FolderInterface = { - id: uuidv4(), - name, - type, - }; - - const updatedFolders = [...folders, newFolder]; - - dispatch({ field: 'folders', value: updatedFolders }); - saveFolders(updatedFolders); - }; - - const handleDeleteFolder = (folderId: string) => { - const updatedFolders = folders.filter((f) => f.id !== folderId); - dispatch({ field: 'folders', value: updatedFolders }); - saveFolders(updatedFolders); - - const updatedConversations: Conversation[] = conversations.map((c) => { - if (c.folderId === folderId) { - return { - ...c, - folderId: null, - }; - } - - return c; - }); - - dispatch({ field: 'conversations', value: updatedConversations }); - saveConversations(updatedConversations); - - const updatedPrompts: Prompt[] = prompts.map((p) => { - if (p.folderId === folderId) { - return { - ...p, - folderId: null, - }; - } - - return p; - }); - - dispatch({ field: 'prompts', value: updatedPrompts }); - savePrompts(updatedPrompts); - }; - - const handleUpdateFolder = (folderId: string, name: string) => { - const updatedFolders = folders.map((f) => { - if (f.id === folderId) { - return { - ...f, - name, - }; - } - - return f; - }); - - dispatch({ field: 'folders', value: updatedFolders }); - - saveFolders(updatedFolders); - }; - - // CONVERSATION OPERATIONS -------------------------------------------- - - const handleNewConversation = () => { - const lastConversation = conversations[conversations.length - 1]; - - const newConversation: Conversation = { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: lastConversation?.model || { - id: OpenAIModels[defaultModelId].id, - name: OpenAIModels[defaultModelId].name, - maxLength: OpenAIModels[defaultModelId].maxLength, - tokenLimit: OpenAIModels[defaultModelId].tokenLimit, - }, - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? 
DEFAULT_TEMPERATURE, - folderId: null, - }; - - const updatedConversations = [...conversations, newConversation]; - - dispatch({ field: 'selectedConversation', value: newConversation }); - dispatch({ field: 'conversations', value: updatedConversations }); - - saveConversation(newConversation); - saveConversations(updatedConversations); - - dispatch({ field: 'loading', value: false }); - }; - - const handleUpdateConversation = ( - conversation: Conversation, - data: KeyValuePair, - ) => { - const updatedConversation = { - ...conversation, - [data.key]: data.value, - }; - - const { single, all } = updateConversation( - updatedConversation, - conversations, - ); - - dispatch({ field: 'selectedConversation', value: single }); - dispatch({ field: 'conversations', value: all }); - }; - - // EFFECTS -------------------------------------------- - - useEffect(() => { - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - } - }, [selectedConversation]); - - useEffect(() => { - defaultModelId && - dispatch({ field: 'defaultModelId', value: defaultModelId }); - serverSideApiKeyIsSet && - dispatch({ - field: 'serverSideApiKeyIsSet', - value: serverSideApiKeyIsSet, - }); - serverSidePluginKeysSet && - dispatch({ - field: 'serverSidePluginKeysSet', - value: serverSidePluginKeysSet, - }); - }, [defaultModelId, serverSideApiKeyIsSet, serverSidePluginKeysSet]); - - // ON LOAD -------------------------------------------- - - useEffect(() => { - const settings = getSettings(); - if (settings.theme) { - dispatch({ - field: 'lightMode', - value: settings.theme, - }); - } - - const apiKey = "test"; - - if (serverSideApiKeyIsSet) { - dispatch({ field: 'apiKey', value: '' }); - - localStorage.removeItem('apiKey'); - } else if (apiKey) { - dispatch({ field: 'apiKey', value: apiKey }); - } - - const pluginKeys = localStorage.getItem('pluginKeys'); - if (serverSidePluginKeysSet) { - dispatch({ field: 'pluginKeys', value: [] }); - localStorage.removeItem('pluginKeys'); - } else if (pluginKeys) { - dispatch({ field: 'pluginKeys', value: pluginKeys }); - } - - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - dispatch({ field: 'showPromptbar', value: false }); - } - - const showChatbar = localStorage.getItem('showChatbar'); - if (showChatbar) { - dispatch({ field: 'showChatbar', value: showChatbar === 'true' }); - } - - const showPromptbar = localStorage.getItem('showPromptbar'); - if (showPromptbar) { - dispatch({ field: 'showPromptbar', value: showPromptbar === 'true' }); - } - - const folders = localStorage.getItem('folders'); - if (folders) { - dispatch({ field: 'folders', value: JSON.parse(folders) }); - } - - const prompts = localStorage.getItem('prompts'); - if (prompts) { - dispatch({ field: 'prompts', value: JSON.parse(prompts) }); - } - - const conversationHistory = localStorage.getItem('conversationHistory'); - if (conversationHistory) { - const parsedConversationHistory: Conversation[] = - JSON.parse(conversationHistory); - const cleanedConversationHistory = cleanConversationHistory( - parsedConversationHistory, - ); - - dispatch({ field: 'conversations', value: cleanedConversationHistory }); - } - - const selectedConversation = localStorage.getItem('selectedConversation'); - if (selectedConversation) { - const parsedSelectedConversation: Conversation = - JSON.parse(selectedConversation); - const cleanedSelectedConversation = cleanSelectedConversation( - parsedSelectedConversation, - ); - - dispatch({ - field: 'selectedConversation', - value: 
cleanedSelectedConversation, - }); - } else { - const lastConversation = conversations[conversations.length - 1]; - dispatch({ - field: 'selectedConversation', - value: { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: OpenAIModels[defaultModelId], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE, - folderId: null, - }, - }); - } - }, [ - defaultModelId, - dispatch, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - ]); - - return ( - - - Chatbot UI - - - - - {selectedConversation && ( -
-        {/* original JSX layout markup (a <main> wrapper rendering the mobile Navbar, Chatbar, Chat, and Promptbar components imported above) was stripped during extraction */}
      - )} -
      - ); -}; -export default Home; - -export const getServerSideProps: GetServerSideProps = async ({ locale }) => { - const defaultModelId = - (process.env.DEFAULT_MODEL && - Object.values(OpenAIModelID).includes( - process.env.DEFAULT_MODEL as OpenAIModelID, - ) && - process.env.DEFAULT_MODEL) || - fallbackModelID; - - let serverSidePluginKeysSet = false; - - const googleApiKey = process.env.GOOGLE_API_KEY; - const googleCSEId = process.env.GOOGLE_CSE_ID; - - if (googleApiKey && googleCSEId) { - serverSidePluginKeysSet = true; - } - - return { - props: { - serverSideApiKeyIsSet: !!process.env.OPENAI_API_KEY, - defaultModelId, - serverSidePluginKeysSet, - ...(await serverSideTranslations(locale ?? 'en', [ - 'common', - 'chat', - 'sidebar', - 'markdown', - 'promptbar', - 'settings', - ])), - }, - }; -}; diff --git a/spaces/matthoffner/chatbot/utils/server/index.ts b/spaces/matthoffner/chatbot/utils/server/index.ts deleted file mode 100644 index af243dc3af7eb37f0c4078b92ace624be42f2787..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/utils/server/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { Message } from '@/types/chat'; -import { OpenAIModel } from '@/types/openai'; - -import { AZURE_DEPLOYMENT_ID, OPENAI_API_HOST, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_ORGANIZATION } from '../app/const'; - -import { - ParsedEvent, - ReconnectInterval, - createParser, -} from 'eventsource-parser'; - -export class OpenAIError extends Error { - type: string; - param: string; - code: string; - - constructor(message: string, type: string, param: string, code: string) { - super(message); - this.name = 'OpenAIError'; - this.type = type; - this.param = param; - this.code = code; - } -} - -export const OpenAIStream = async ( - model: OpenAIModel, - systemPrompt: string, - temperature : number, - key: string, - messages: Message[], -) => { - let url = `${OPENAI_API_HOST}/v1/chat/completions`; - if (OPENAI_API_TYPE === 'azure') { - url = `${OPENAI_API_HOST}/openai/deployments/${AZURE_DEPLOYMENT_ID}/chat/completions?api-version=${OPENAI_API_VERSION}`; - } - const res = await fetch(url, { - headers: { - 'Content-Type': 'application/json', - ...(OPENAI_API_TYPE === 'openai' && { - Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}` - }), - ...(OPENAI_API_TYPE === 'azure' && { - 'api-key': `${key ? 
key : process.env.OPENAI_API_KEY}` - }), - ...((OPENAI_API_TYPE === 'openai' && OPENAI_ORGANIZATION) && { - 'OpenAI-Organization': OPENAI_ORGANIZATION, - }), - }, - method: 'POST', - body: JSON.stringify({ - ...(OPENAI_API_TYPE === 'openai' && {model: model.id}), - messages: [ - { - role: 'system', - content: systemPrompt, - }, - ...messages, - ], - max_tokens: 1000, - temperature: temperature, - stream: true, - stop: ["###Human:"] - }), - }); - - const encoder = new TextEncoder(); - const decoder = new TextDecoder(); - - if (res.status !== 200) { - const result = await res.json(); - if (result.error) { - throw new OpenAIError( - result.error.message, - result.error.type, - result.error.param, - result.error.code, - ); - } else { - throw new Error( - `OpenAI API returned an error: ${ - decoder.decode(result?.value) || result.statusText - }`, - ); - } - } - - const stream = new ReadableStream({ - async start(controller) { - const onParse = (event: ParsedEvent | ReconnectInterval) => { - if (event.type === 'event') { - const data = event.data; - - try { - const json = JSON.parse(data); - if (json.choices[0].finish_reason != null) { - controller.close(); - return; - } - const text = json.choices[0].delta.content; - const queue = encoder.encode(text); - controller.enqueue(queue); - } catch (e) { - controller.error(e); - } - } - }; - - const parser = createParser(onParse); - - for await (const chunk of res.body as any) { - parser.feed(decoder.decode(chunk)); - } - }, - }); - - return stream; -}; diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/discriminator.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/discriminator.py deleted file mode 100644 index 764c0ca806b707e4f36ca2abb64ce79971358dd9..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits_decoder/discriminator.py +++ /dev/null @@ -1,39 +0,0 @@ -import torch -import torch.nn as nn - -from omegaconf import OmegaConf -from .msd import ScaleDiscriminator -from .mpd import MultiPeriodDiscriminator -from .mrd import MultiResolutionDiscriminator - - -class Discriminator(nn.Module): - def __init__(self, hp): - super(Discriminator, self).__init__() - self.MRD = MultiResolutionDiscriminator(hp) - self.MPD = MultiPeriodDiscriminator(hp) - self.MSD = ScaleDiscriminator() - - def forward(self, x): - r = self.MRD(x) - p = self.MPD(x) - s = self.MSD(x) - return r + p + s - - -if __name__ == '__main__': - hp = OmegaConf.load('../config/base.yaml') - model = Discriminator(hp) - - x = torch.randn(3, 1, 16384) - print(x.shape) - - output = model(x) - for features, score in output: - for feat in features: - print(feat.shape) - print(score.shape) - - pytorch_total_params = sum(p.numel() - for p in model.parameters() if p.requires_grad) - print(pytorch_total_params) diff --git a/spaces/menghanxia/ReversibleHalftoning/model/__init__.py b/spaces/menghanxia/ReversibleHalftoning/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py b/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/meraih/English-Japanese-Anime-TTS/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized 
version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js b/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js deleted file mode 100644 index 305b037acebf14e083ead577ce566ad39b81c531..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/dataset-worldviews/person-photos.js +++ /dev/null @@ -1,119 +0,0 @@ - -function createPhotoScroller(){ - - var base_path = 'img/woman_washing_clothes.jpeg' - var data = [ - { - 'path': 'img/labels_1.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'person\', and \'bucket\'', - 'x': 198, - 'y': 30, - 'width': 305, - 'height': 400, - }, - - { - 'path': 'img/labels_4.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'parent\', and \'laundry\'', - 'x': 110, - 'y': 60, - 'width': 450, - 'height': 470, - }, - - - { - 'path': 'img/labels_2.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'hair_boho\', and \'decor_outdoor_rustic\'', - 'x': 198, - 'y': -35, - 'width': 395, - 'height': 500 - }, - - { - 'path': 'img/labels_3.svg', - 'alt': 'Image of a woman washing clothes with one bounding box around her, labeled \'pedestrian\'', - 'x': 190, - 'y': 65, - 'width': 190, - 'height': 315 - }, - ]; - - - var photoIndex = 0; - - var c = d3.conventions({ - sel: d3.select('.person-photos').html(''), - height: 550 - }) - - var photoSel = c.svg.append('svg:image') - .attr('x', 50) - .attr('y', 50) - .attr('width', 700) - .attr('height', 500) - .attr('xlink:href', base_path) - - var photoSel = c.svg.appendMany('svg:image', data) - .attr('x', d => d.x) - .attr('y', d => d.y) - .attr('width', d => d.width) - .attr('height', d => d.height) - .attr('xlink:href', d => d.path) - .attr('alt', d => d.alt) - - - var buttonHeight = 35 - var buttonWidth = 130 - - var buttonSel = c.svg.appendMany('g.photo-button', data) - .translate((d,i) => [(i * 170) + 100, 0]) - .at({ - // class: "dropdown" - }) - .on('click', function(d, i){ - photoIndex = i - setActiveImage() - timer.stop(); - }) - - buttonSel.append('rect') - .at({ - height: buttonHeight, - width: buttonWidth, - // fill: '#fff' - }) - - buttonSel.append('text') - .at({ - textAnchor: 'middle', - // dominantBaseline: 'central', - dy: '.33em', - x: buttonWidth/2, - y: buttonHeight/2, - class: "monospace" - }) - .text((d,i) => 'ground truth ' + (i + 1)) - - // buttonSel.classed('dropdown', true); - - if (window.__photoPersonTimer) window.__photoPersonTimer.stop() - var timer = window.__photoPersonTimer = d3.interval(() => { - photoIndex = (photoIndex + 1) % data.length; - setActiveImage() - }, 2000) - - function setActiveImage(i){ - photoSel.st({opacity: (d, i) => i == photoIndex ? 
1 : 0 }) - buttonSel.classed('is-active-button', (d, i) => i == photoIndex) - } - setActiveImage() -} - -createPhotoScroller(); - - - - diff --git a/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js b/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js deleted file mode 100644 index 7ab68f297f98c655427a84de22388906182b240c..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/measuring-fairness/annotations.js +++ /dev/null @@ -1,52 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - -var annotations = -[ -] - - -function addSwoop(c){ - var swoopy = d3.swoopyDrag() - .x(d => c.x(d.x)) - .y(d => c.y(d.y)) - .draggable(0) - .annotations(annotations) - - var swoopySel = c.svg.append('g.annotations').call(swoopy) - - c.svg.append('marker#arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path').at({d: 'M-6.75,-6.75 L 0,0 L -6.75,6.75'}) - - - swoopySel.selectAll('path').attr('marker-end', 'url(#arrow)') - window.annotationSel = swoopySel.selectAll('g') - .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0}) - - swoopySel.selectAll('text') - .each(function(d){ - d3.select(this) - .text('') //clear existing text - .tspans(d3.wordwrap(d.text, d.width || 20), 12) //wrap after 20 char - }) -} - - diff --git a/spaces/mikeion/research_guru/README.md b/spaces/mikeion/research_guru/README.md deleted file mode 100644 index 15b8c756d439437f5e40f2718ee9e3f084ce4d5e..0000000000000000000000000000000000000000 --- a/spaces/mikeion/research_guru/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Research Guru -emoji: 🐠 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py b/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py deleted file mode 100644 index e40f87c945da20e78c0a3ea230bc9f36d1800071..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/data/datasets/register_pascal_context.py +++ /dev/null @@ -1,588 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg - -PASCALCONTEX59_NAMES = ( - "aeroplane", - "bicycle", - "bird", - "boat", - "bottle", - "bus", - "car", - "cat", - "chair", - "cow", - "table", - "dog", - "horse", - "motorbike", - "person", - "pottedplant", - "sheep", - "sofa", - "train", - "tvmonitor", - "bag", - "bed", - "bench", - "book", - "building", - "cabinet", - "ceiling", - "cloth", - "computer", - "cup", - "door", - "fence", - "floor", - "flower", - "food", - "grass", - "ground", - "keyboard", - "light", - "mountain", - "mouse", - "curtain", - "platform", - "sign", - "plate", - "road", - "rock", - "shelves", - "sidewalk", - "sky", - "snow", - "bedclothes", - "track", - "tree", - "truck", - "wall", - "water", - "window", - "wood", -) - -PASCALCONTEX459_NAMES = ( - "accordion", - "aeroplane", - "air conditioner", - "antenna", - "artillery", - "ashtray", - "atrium", - "baby carriage", - "bag", - "ball", - "balloon", - "bamboo weaving", - "barrel", - "baseball bat", - "basket", - "basketball backboard", - "bathtub", - "bed", - "bedclothes", - "beer", - "bell", - "bench", - "bicycle", - "binoculars", - "bird", - "bird cage", - "bird feeder", - "bird nest", - "blackboard", - "board", - "boat", - "bone", - "book", - "bottle", - "bottle opener", - "bowl", - "box", - "bracelet", - "brick", - "bridge", - "broom", - "brush", - "bucket", - "building", - "bus", - "cabinet", - "cabinet door", - "cage", - "cake", - "calculator", - "calendar", - "camel", - "camera", - "camera lens", - "can", - "candle", - "candle holder", - "cap", - "car", - "card", - "cart", - "case", - "casette recorder", - "cash register", - "cat", - "cd", - "cd player", - "ceiling", - "cell phone", - "cello", - "chain", - "chair", - "chessboard", - "chicken", - "chopstick", - "clip", - "clippers", - "clock", - "closet", - "cloth", - "clothes tree", - "coffee", - "coffee machine", - "comb", - "computer", - "concrete", - "cone", - "container", - "control booth", - "controller", - "cooker", - "copying machine", - "coral", - "cork", - "corkscrew", - "counter", - "court", - "cow", - "crabstick", - "crane", - "crate", - "cross", - "crutch", - "cup", - "curtain", - "cushion", - "cutting board", - "dais", - "disc", - "disc case", - "dishwasher", - "dock", - "dog", - "dolphin", - "door", - "drainer", - "dray", - "drink dispenser", - "drinking machine", - "drop", - "drug", - "drum", - "drum kit", - "duck", - "dumbbell", - "earphone", - "earrings", - "egg", - "electric fan", - "electric iron", - "electric pot", - "electric saw", - "electronic keyboard", - "engine", - "envelope", - "equipment", - "escalator", - "exhibition booth", - "extinguisher", - "eyeglass", - "fan", - "faucet", - "fax machine", - "fence", - "ferris wheel", - "fire extinguisher", - "fire hydrant", - "fire place", - "fish", - "fish tank", - "fishbowl", - "fishing net", - "fishing pole", - "flag", - "flagstaff", - "flame", - "flashlight", - "floor", - "flower", - "fly", - "foam", - "food", - "footbridge", - "forceps", - "fork", - "forklift", - "fountain", - "fox", - "frame", - "fridge", - "frog", - "fruit", - "funnel", - "furnace", - "game controller", - "game machine", - "gas cylinder", - "gas hood", - "gas stove", - "gift box", - "glass", - "glass marble", - "globe", - "glove", - "goal", - "grandstand", - "grass", - "gravestone", - "ground", - "guardrail", - "guitar", - "gun", - "hammer", - "hand cart", - "handle", - "handrail", - "hanger", - "hard disk drive", - "hat", - "hay", - 
"headphone", - "heater", - "helicopter", - "helmet", - "holder", - "hook", - "horse", - "horse-drawn carriage", - "hot-air balloon", - "hydrovalve", - "ice", - "inflator pump", - "ipod", - "iron", - "ironing board", - "jar", - "kart", - "kettle", - "key", - "keyboard", - "kitchen range", - "kite", - "knife", - "knife block", - "ladder", - "ladder truck", - "ladle", - "laptop", - "leaves", - "lid", - "life buoy", - "light", - "light bulb", - "lighter", - "line", - "lion", - "lobster", - "lock", - "machine", - "mailbox", - "mannequin", - "map", - "mask", - "mat", - "match book", - "mattress", - "menu", - "metal", - "meter box", - "microphone", - "microwave", - "mirror", - "missile", - "model", - "money", - "monkey", - "mop", - "motorbike", - "mountain", - "mouse", - "mouse pad", - "musical instrument", - "napkin", - "net", - "newspaper", - "oar", - "ornament", - "outlet", - "oven", - "oxygen bottle", - "pack", - "pan", - "paper", - "paper box", - "paper cutter", - "parachute", - "parasol", - "parterre", - "patio", - "pelage", - "pen", - "pen container", - "pencil", - "person", - "photo", - "piano", - "picture", - "pig", - "pillar", - "pillow", - "pipe", - "pitcher", - "plant", - "plastic", - "plate", - "platform", - "player", - "playground", - "pliers", - "plume", - "poker", - "poker chip", - "pole", - "pool table", - "postcard", - "poster", - "pot", - "pottedplant", - "printer", - "projector", - "pumpkin", - "rabbit", - "racket", - "radiator", - "radio", - "rail", - "rake", - "ramp", - "range hood", - "receiver", - "recorder", - "recreational machines", - "remote control", - "road", - "robot", - "rock", - "rocket", - "rocking horse", - "rope", - "rug", - "ruler", - "runway", - "saddle", - "sand", - "saw", - "scale", - "scanner", - "scissors", - "scoop", - "screen", - "screwdriver", - "sculpture", - "scythe", - "sewer", - "sewing machine", - "shed", - "sheep", - "shell", - "shelves", - "shoe", - "shopping cart", - "shovel", - "sidecar", - "sidewalk", - "sign", - "signal light", - "sink", - "skateboard", - "ski", - "sky", - "sled", - "slippers", - "smoke", - "snail", - "snake", - "snow", - "snowmobiles", - "sofa", - "spanner", - "spatula", - "speaker", - "speed bump", - "spice container", - "spoon", - "sprayer", - "squirrel", - "stage", - "stair", - "stapler", - "stick", - "sticky note", - "stone", - "stool", - "stove", - "straw", - "stretcher", - "sun", - "sunglass", - "sunshade", - "surveillance camera", - "swan", - "sweeper", - "swim ring", - "swimming pool", - "swing", - "switch", - "table", - "tableware", - "tank", - "tap", - "tape", - "tarp", - "telephone", - "telephone booth", - "tent", - "tire", - "toaster", - "toilet", - "tong", - "tool", - "toothbrush", - "towel", - "toy", - "toy car", - "track", - "train", - "trampoline", - "trash bin", - "tray", - "tree", - "tricycle", - "tripod", - "trophy", - "truck", - "tube", - "turtle", - "tvmonitor", - "tweezers", - "typewriter", - "umbrella", - "unknown", - "vacuum cleaner", - "vending machine", - "video camera", - "video game console", - "video player", - "video tape", - "violin", - "wakeboard", - "wall", - "wallet", - "wardrobe", - "washing machine", - "watch", - "water", - "water dispenser", - "water pipe", - "water skate board", - "watermelon", - "whale", - "wharf", - "wheel", - "wheelchair", - "window", - "window blinds", - "wineglass", - "wire", - "wood", - "wool", - -) - - -def _get_voc_meta(cat_list): - ret = { - "stuff_classes": cat_list, - } - return ret - - -def register_pascal_context_59(root): - root = os.path.join(root, 
"VOCdevkit/VOC2010") - meta = _get_voc_meta(PASCALCONTEX59_NAMES) - for name, image_dirname, sem_seg_dirname in [ - ("val", "JPEGImages", "annotations_detectron2/pc59_val"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = os.path.join(root, sem_seg_dirname) - all_name = f"pascal_context_59_sem_seg_{name}" - DatasetCatalog.register( - all_name, - lambda x=image_dir, y=gt_dir: load_sem_seg( - y, x, gt_ext="png", image_ext="jpg" - ), - ) - MetadataCatalog.get(all_name).set( - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - **meta, - ) - -def register_pascal_context_459(root): - root = os.path.join(root, "VOCdevkit/VOC2010") - meta = _get_voc_meta(PASCALCONTEX459_NAMES) - for name, image_dirname, sem_seg_dirname in [ - ("val", "JPEGImages", "annotations_detectron2/pc459_val"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = os.path.join(root, sem_seg_dirname) - all_name = f"pascal_context_459_sem_seg_{name}" - DatasetCatalog.register( - all_name, - lambda x=image_dir, y=gt_dir: load_sem_seg( - y, x, gt_ext="tif", image_ext="jpg" - ), - ) - MetadataCatalog.get(all_name).set( - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=65535, # NOTE: gt is saved in 16-bit TIFF images - **meta, - ) - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_pascal_context_59(_root) -register_pascal_context_459(_root) diff --git a/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py b/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py deleted file mode 100644 index 07abfca24e96eced7f13bdefd3212ce1b77b8999..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/relate-anything-model/segment_anything/build_sam.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - - -build_sam = build_sam_vit_h - - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam, - "vit_h": build_sam, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git a/spaces/mms-meta/MMS/lid.py b/spaces/mms-meta/MMS/lid.py deleted file mode 100644 index 7d0c96248ef2c85788348874618bf8cc1b088d69..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/lid.py +++ /dev/null @@ -1,73 +0,0 @@ -from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor -import torch -import librosa - -model_id = "facebook/mms-lid-1024" - -processor = AutoFeatureExtractor.from_pretrained(model_id) -model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id) - - -LID_SAMPLING_RATE = 16_000 -LID_TOPK = 10 -LID_THRESHOLD = 0.33 - -LID_LANGUAGES = {} -with open(f"data/lid/all_langs.tsv") as f: - for line in f: - iso, name = line.split(" ", 1) - LID_LANGUAGES[iso] = name - - -def identify(audio_source=None, microphone=None, file_upload=None): - if audio_source is None and microphone is None and file_upload is None: - # HACK: need to handle this case for some reason - return {} - - if type(microphone) is dict: - # HACK: microphone variable is a dict when running on examples - microphone = microphone["name"] - audio_fp = ( - file_upload if "upload" in str(audio_source or "").lower() else microphone - ) - if audio_fp is None: - return "ERROR: You have to either use the microphone or upload an audio 
file" - - audio_samples = librosa.load(audio_fp, sr=LID_SAMPLING_RATE, mono=True)[0] - - inputs = processor( - audio_samples, sampling_rate=LID_SAMPLING_RATE, return_tensors="pt" - ) - - # set device - if torch.cuda.is_available(): - device = torch.device("cuda") - elif ( - hasattr(torch.backends, "mps") - and torch.backends.mps.is_available() - and torch.backends.mps.is_built() - ): - device = torch.device("mps") - else: - device = torch.device("cpu") - - model.to(device) - inputs = inputs.to(device) - - with torch.no_grad(): - logit = model(**inputs).logits - - logit_lsm = torch.log_softmax(logit.squeeze(), dim=-1) - scores, indices = torch.topk(logit_lsm, 5, dim=-1) - scores, indices = torch.exp(scores).to("cpu").tolist(), indices.to("cpu").tolist() - iso2score = {model.config.id2label[int(i)]: s for s, i in zip(scores, indices)} - if max(iso2score.values()) < LID_THRESHOLD: - return "Low confidence in the language identification predictions. Output is not shown!" - return {LID_LANGUAGES[iso]: score for iso, score in iso2score.items()} - - -LID_EXAMPLES = [ - [None, "./assets/english.mp3", None], - [None, "./assets/tamil.mp3", None], - [None, "./assets/burmese.mp3", None], -] diff --git a/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md b/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md deleted file mode 100644 index 5421bfe3e67b7b6cbd7baf96b741b539d65bb0fd..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/encoder/readme.md +++ /dev/null @@ -1,9 +0,0 @@ -# Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation - -## Description -Official Implementation of pSp paper for both training and evaluation. The pSp method extends the StyleGAN model to -allow solving different image-to-image translation problems using its encoder. - -Fork from [https://github.com/eladrich/pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel). - -In VToonify, we modify pSp to accept z+ latent code. diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py b/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py deleted file mode 100644 index 6a5029a48faea2426d7a0277655a2c7c08c1d16c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/test_collaters.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest - -import numpy as np -import torch -from examples.speech_recognition.data.collaters import Seq2SeqCollater - - -class TestSeq2SeqCollator(unittest.TestCase): - def test_collate(self): - - eos_idx = 1 - pad_idx = 0 - collater = Seq2SeqCollater( - feature_index=0, label_index=1, pad_index=pad_idx, eos_index=eos_idx - ) - - # 2 frames in the first sample and 3 frames in the second one - frames1 = np.array([[7, 8], [9, 10]]) - frames2 = np.array([[1, 2], [3, 4], [5, 6]]) - target1 = np.array([4, 2, 3, eos_idx]) - target2 = np.array([3, 2, eos_idx]) - sample1 = {"id": 0, "data": [frames1, target1]} - sample2 = {"id": 1, "data": [frames2, target2]} - batch = collater.collate([sample1, sample2]) - - # collate sort inputs by frame's length before creating the batch - self.assertTensorEqual(batch["id"], torch.tensor([1, 0])) - self.assertEqual(batch["ntokens"], 7) - self.assertTensorEqual( - batch["net_input"]["src_tokens"], - torch.tensor( - [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [pad_idx, pad_idx]]] - ), - ) - self.assertTensorEqual( - batch["net_input"]["prev_output_tokens"], - torch.tensor([[eos_idx, 3, 2, pad_idx], [eos_idx, 4, 2, 3]]), - ) - self.assertTensorEqual(batch["net_input"]["src_lengths"], torch.tensor([3, 2])) - self.assertTensorEqual( - batch["target"], - torch.tensor([[3, 2, eos_idx, pad_idx], [4, 2, 3, eos_idx]]), - ) - self.assertEqual(batch["nsentences"], 2) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py b/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py deleted file mode 100644 index d705504e5bc7a8938e1b5fcfb207f4cb731c866b..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app_inference.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import enum - -import gradio as gr -from huggingface_hub import HfApi - -from constants import MODEL_LIBRARY_ORG_NAME, UploadTarget -from inference import InferencePipeline -from utils import find_exp_dirs - - -class ModelSource(enum.Enum): - HUB_LIB = UploadTarget.MODEL_LIBRARY.value - LOCAL = 'Local' - - -class InferenceUtil: - def __init__(self, hf_token: str | None): - self.hf_token = hf_token - - def load_hub_model_list(self) -> dict: - api = HfApi(token=self.hf_token) - choices = [ - info.modelId - for info in api.list_models(author=MODEL_LIBRARY_ORG_NAME) - ] - return gr.update(choices=choices, - value=choices[0] if choices else None) - - @staticmethod - def load_local_model_list() -> dict: - choices = find_exp_dirs() - return gr.update(choices=choices, - value=choices[0] if choices else None) - - def reload_model_list(self, model_source: str) -> dict: - if model_source == ModelSource.HUB_LIB.value: - return self.load_hub_model_list() - elif model_source == ModelSource.LOCAL.value: - return self.load_local_model_list() - else: - raise ValueError - - def load_model_info(self, model_id: str) -> tuple[str, str]: - try: - card = InferencePipeline.get_model_card(model_id, self.hf_token) - except Exception: - return '', '' - base_model = getattr(card.data, 'base_model', '') - training_prompt = getattr(card.data, 'training_prompt', '') - return base_model, training_prompt - - def reload_model_list_and_update_model_info( - self, model_source: str) -> tuple[dict, str, str]: - 
model_list_update = self.reload_model_list(model_source) - model_list = model_list_update['choices'] - model_info = self.load_model_info(model_list[0] if model_list else '') - return model_list_update, *model_info - - -def create_inference_demo(pipe: InferencePipeline, - hf_token: str | None = None) -> gr.Blocks: - app = InferenceUtil(hf_token) - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - model_source = gr.Radio( - label='Model Source', - choices=[_.value for _ in ModelSource], - value=ModelSource.HUB_LIB.value) - reload_button = gr.Button('Reload Model List') - model_id = gr.Dropdown(label='Model ID', - choices=None, - value=None) - with gr.Accordion( - label= - 'Model info (Base model and prompt used for training)', - open=False): - with gr.Row(): - base_model_used_for_training = gr.Text( - label='Base model', interactive=False) - prompt_used_for_training = gr.Text( - label='Training prompt', interactive=False) - prompt = gr.Textbox( - label='Prompt', - max_lines=1, - placeholder='Example: "A panda is surfing"') - video_length = gr.Slider(label='Video length', - minimum=4, - maximum=12, - step=1, - value=8) - fps = gr.Slider(label='FPS', - minimum=1, - maximum=12, - step=1, - value=1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) - with gr.Accordion('Other Parameters', open=False): - num_steps = gr.Slider(label='Number of Steps', - minimum=0, - maximum=100, - step=1, - value=50) - guidance_scale = gr.Slider(label='CFG Scale', - minimum=0, - maximum=50, - step=0.1, - value=7.5) - - run_button = gr.Button('Generate') - - gr.Markdown(''' - - After training, you can press "Reload Model List" button to load your trained model names. - - It takes a few minutes to download model first. 
- - Expected time to generate an 8-frame video: 70 seconds with T4, 24 seconds with A10G, (10 seconds with A100) - ''') - with gr.Column(): - result = gr.Video(label='Result') - - model_source.change(fn=app.reload_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - model_id, - base_model_used_for_training, - prompt_used_for_training, - ]) - reload_button.click(fn=app.reload_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - model_id, - base_model_used_for_training, - prompt_used_for_training, - ]) - model_id.change(fn=app.load_model_info, - inputs=model_id, - outputs=[ - base_model_used_for_training, - prompt_used_for_training, - ]) - inputs = [ - model_id, - prompt, - video_length, - fps, - seed, - num_steps, - guidance_scale, - ] - prompt.submit(fn=pipe.run, inputs=inputs, outputs=result) - run_button.click(fn=pipe.run, inputs=inputs, outputs=result) - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - pipe = InferencePipeline(hf_token) - demo = create_inference_demo(pipe, hf_token) - demo.queue(max_size=10).launch(share=False) diff --git a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py b/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py deleted file mode 100644 index 8db8915b109953931fa2a330a7731db4a51b44f8..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/score.py +++ /dev/null @@ -1,453 +0,0 @@ -from torch.functional import Tensor - -import torch -import inspect -import json -import yaml -import time -import sys - -from general_utils import log - -import numpy as np -from os.path import expanduser, join, isfile, realpath - -from torch.utils.data import DataLoader - -from metrics import FixedIntervalMetrics - -from general_utils import load_model, log, score_config_from_cli_args, AttributeDict, get_attribute, filter_args - - -DATASET_CACHE = dict() - -def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False, ignore_weights=False): - - config = json.load(open(join('logs', checkpoint_id, 'config.json'))) - - if model_args != 'from_config' and type(model_args) != dict: - raise ValueError('model_args must either be "from_config" or a dictionary of values') - - model_cls = get_attribute(config['model']) - - # load model - if model_args == 'from_config': - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - - model = model_cls(**model_args) - - if weights_file is None: - weights_file = realpath(join('logs', checkpoint_id, 'weights.pth')) - else: - weights_file = realpath(join('logs', checkpoint_id, weights_file)) - - if isfile(weights_file) and not ignore_weights: - weights = torch.load(weights_file) - for _, w in weights.items(): - assert not torch.any(torch.isnan(w)), 'weights contain NaNs' - model.load_state_dict(weights, strict=strict) - else: - if not ignore_weights: - raise FileNotFoundError(f'model checkpoint {weights_file} was not found') - - if with_config: - return model, config - - return model - - -def compute_shift2(model, datasets, seed=123, repetitions=1): - """ computes shift """ - - model.eval() - model.cuda() - - import random - random.seed(seed) - - preds, gts = [], [] - for i_dataset, dataset in enumerate(datasets): - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - max_iterations = int(repetitions * len(dataset.dataset.data_list)) - - with torch.no_grad(): - - i, losses = 0, [] 
- for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if v is not None else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if v is not None else v for v in data_y] - - pred, = model(data_x[0], data_x[1], data_x[2]) - preds += [pred.detach()] - gts += [data_y] - - i += 1 - if max_iterations and i >= max_iterations: - break - - from metrics import FixedIntervalMetrics - n_values = 51 - thresholds = np.linspace(0, 1, n_values)[1:-1] - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, n_values=n_values) - - for p, y in zip(preds, gts): - metric.add(p.unsqueeze(1), y) - - best_idx = np.argmax(metric.value()['fgiou_scores']) - best_thresh = thresholds[best_idx] - - return best_thresh - - -def get_cached_pascal_pfe(split, config): - from datasets.pfe_dataset import PFEPascalWrapper - try: - dataset = DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] - except KeyError: - dataset = PFEPascalWrapper(mode='val', split=split, mask=config.mask, image_size=config.image_size, label_support=config.label_support) - DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] = dataset - return dataset - - - - -def main(): - config, train_checkpoint_id = score_config_from_cli_args() - - metrics = score(config, train_checkpoint_id, None) - - for dataset in metrics.keys(): - for k in metrics[dataset]: - if type(metrics[dataset][k]) in {float, int}: - print(dataset, f'{k:<16} {metrics[dataset][k]:.3f}') - - -def score(config, train_checkpoint_id, train_config): - - config = AttributeDict(config) - - print(config) - - # use training dataset and loss - train_config = AttributeDict(json.load(open(f'logs/{train_checkpoint_id}/config.json'))) - - cp_str = f'_{config.iteration_cp}' if config.iteration_cp is not None else '' - - - model_cls = get_attribute(train_config['model']) - - _, model_args, _ = filter_args(train_config, inspect.signature(model_cls).parameters) - - model_args = {**model_args, **{k: config[k] for k in ['process_cond', 'fix_shift'] if k in config}} - - strict_models = {'ConditionBase4', 'PFENetWrapper'} - model = load_model(train_checkpoint_id, strict=model_cls.__name__ in strict_models, model_args=model_args, - weights_file=f'weights{cp_str}.pth', ) - - - model.eval() - model.cuda() - - metric_args = dict() - - if 'threshold' in config: - if config.metric.split('.')[-1] == 'SkLearnMetrics': - metric_args['threshold'] = config.threshold - - if 'resize_to' in config: - metric_args['resize_to'] = config.resize_to - - if 'sigmoid' in config: - metric_args['sigmoid'] = config.sigmoid - - if 'custom_threshold' in config: - metric_args['custom_threshold'] = config.custom_threshold - - if config.test_dataset == 'pascal': - - loss_fn = get_attribute(train_config.loss) - # assume that if no split is specified in train_config, test on all splits, - - if 'splits' in config: - splits = config.splits - else: - if 'split' in train_config and type(train_config.split) == int: - # unless train_config has a split set, in that case assume train mode in training - splits = [train_config.split] - assert train_config.mode == 'train' - else: - splits = [0,1,2,3] - - log.info('Test on these splits', splits) - - scores = dict() - for split in splits: - - shift = config.shift if 'shift' in config else 0 - - # automatic shift - if shift == 'auto': - shift_compute_t = time.time() - shift = compute_shift2(model, [get_cached_pascal_pfe(s, config) for s in range(4) if s != split], repetitions=config.compute_shift_fac) - 
log.info(f'Best threshold is {shift}, computed on splits: {[s for s in range(4) if s != split]}, took {time.time() - shift_compute_t:.1f}s') - - dataset = get_cached_pascal_pfe(split, config) - - eval_start_t = time.time() - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - assert config.batch_size is None or config.batch_size == 1, 'When PFE Dataset is used, batch size must be 1' - - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, custom_threshold=shift, **metric_args) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - # loss = loss_fn(pred, data_y[0]) - metric.add(pred.unsqueeze(1) + shift, data_y) - - # losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - #scores[split] = {m: s for m, s in zip(metric.names(), metric.value())} - - log.info(f'Dataset length: {len(dataset)}, took {time.time() - eval_start_t:.1f}s to evaluate.') - - print(metric.value()['mean_iou_scores']) - - scores[split] = metric.scores() - - log.info(f'Completed split {split}') - - key_prefix = config['name'] if 'name' in config else 'pas' - - all_keys = set.intersection(*[set(v.keys()) for v in scores.values()]) - - valid_keys = [k for k in all_keys if all(v[k] is not None and isinstance(v[k], (int, float, np.float)) for v in scores.values())] - - return {key_prefix: {k: np.mean([s[k] for s in scores.values()]) for k in valid_keys}} - - - if config.test_dataset == 'coco': - from datasets.coco_wrapper import COCOWrapper - - coco_dataset = COCOWrapper('test', fold=train_config.fold, image_size=train_config.image_size, mask=config.mask, - with_class_label=True) - - log.info('Dataset length', len(coco_dataset)) - loader = DataLoader(coco_dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'coco' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - - if config.test_dataset == 'phrasecut': - from datasets.phrasecut import PhraseCut - - only_visual = config.only_visual is not None and config.only_visual - with_visual = config.with_visual is not None and config.with_visual - - dataset = PhraseCut('test', - image_size=train_config.image_size, - 
mask=config.mask, - with_visual=with_visual, only_visual=only_visual, aug_crop=False, - aug_color=False) - - loader = DataLoader(dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'phrasecut' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - if config.test_dataset == 'pascal_zs': - from third_party.JoEm.model.metric import Evaluator - from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC - from datasets.pascal_zeroshot import PascalZeroShot, PASCAL_VOC_CLASSES_ZS - - from models.clipseg import CLIPSegMultiLabel - - n_unseen = train_config.remove_classes[1] - - pz = PascalZeroShot('val', n_unseen, image_size=352) - m = CLIPSegMultiLabel(model=train_config.name).cuda() - m.eval(); - - print(len(pz), n_unseen) - print('training removed', [c for class_set in PASCAL_VOC_CLASSES_ZS[:n_unseen // 2] for c in class_set]) - - print('unseen', [VOC[i] for i in get_unseen_idx(n_unseen)]) - print('seen', [VOC[i] for i in get_seen_idx(n_unseen)]) - - loader = DataLoader(pz, batch_size=8) - evaluator = Evaluator(21, get_unseen_idx(n_unseen), get_seen_idx(n_unseen)) - - for i, (data_x, data_y) in enumerate(loader): - pred = m(data_x[0].cuda()) - evaluator.add_batch(data_y[0].numpy(), pred.argmax(1).cpu().detach().numpy()) - - if config.max_iter is not None and i > config.max_iter: - break - - scores = evaluator.Mean_Intersection_over_Union() - key_prefix = config['name'] if 'name' in config else 'pas_zs' - - return {key_prefix: {k: scores[k] for k in ['seen', 'unseen', 'harmonic', 'overall']}} - - elif config.test_dataset in {'same_as_training', 'affordance'}: - loss_fn = get_attribute(train_config.loss) - - metric_cls = get_attribute(config.metric) - metric = metric_cls(**metric_args) - - if config.test_dataset == 'same_as_training': - dataset_cls = get_attribute(train_config.dataset) - elif config.test_dataset == 'affordance': - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_Affordance') - dataset_name = 'aff' - else: - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_OneShot') - dataset_name = 'lvis' - - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset_args['image_size'] = train_config.image_size # explicitly use training image size for evaluation - - if model.__class__.__name__ == 'PFENetWrapper': - dataset_args['image_size'] = config.image_size - - log.info('init dataset', str(dataset_cls)) - dataset = dataset_cls(**dataset_args) - - log.info(f'Score on {model.__class__.__name__} on {dataset_cls.__name__}') - - data_loader = torch.utils.data.DataLoader(dataset, batch_size=config.batch_size, shuffle=config.shuffle) - - # explicitly set prompts - if config.prompt == 'plain': - model.prompt_list = ['{}'] - elif config.prompt == 'fixed': - model.prompt_list = ['a photo of 
a {}.'] - elif config.prompt == 'shuffle': - model.prompt_list = ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif config.prompt == 'shuffle_clip': - from models.clip_prompts import imagenet_templates - model.prompt_list = imagenet_templates - - config.assume_no_unused_keys(exceptions=['max_iterations']) - - t_start = time.time() - - with torch.no_grad(): # TODO: switch to inference_mode (torch 1.9) - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - if model.__class__.__name__ in {'ConditionBase4', 'PFENetWrapper'}: - pred, = model(data_x[0], data_x[1], data_x[2]) - visual_q = None - else: - pred, visual_q, _, _ = model(data_x[0], data_x[1], return_features=True) - - loss = loss_fn(pred, data_y[0]) - - metric.add([pred], data_y) - - losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - # scores = {m: s for m, s in zip(metric.names(), metric.value())} - scores = metric.scores() - - keys = set(scores.keys()) - if dataset.negative_prob > 0 and 'mIoU' in keys: - keys.remove('mIoU') - - name_mask = dataset.mask.replace('text_label', 'txt')[:3] - name_neg = '' if dataset.negative_prob == 0 else '_' + str(dataset.negative_prob) - - score_name = config.name if 'name' in config else f'{dataset_name}_{name_mask}{name_neg}' - - scores = {score_name: {k: v for k,v in scores.items() if k in keys}} - scores[score_name].update({'test_loss': np.mean(losses)}) - - log.info(f'Evaluation took {time.time() - t_start:.1f}s') - - return scores - else: - raise ValueError('invalid test dataset') - - - - - - - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js deleted file mode 100644 index 098f6686f063bf6c631df4f5f3b5921d48ed2d2a..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/common.js +++ /dev/null @@ -1,84 +0,0 @@ -// Copyright (c) Meta Platforms, Inc. and affiliates. -// All rights reserved. - -// This source code is licensed under the license found in the -// LICENSE file in the root directory of this source tree. 
- -const { resolve } = require("path"); -const HtmlWebpackPlugin = require("html-webpack-plugin"); -const FriendlyErrorsWebpackPlugin = require("friendly-errors-webpack-plugin"); -const CopyPlugin = require("copy-webpack-plugin"); -const webpack = require("webpack"); - -module.exports = { - entry: "./src/index.tsx", - resolve: { - extensions: [".js", ".jsx", ".ts", ".tsx"], - }, - output: { - path: resolve(__dirname, "dist"), - }, - module: { - rules: [ - { - test: /\.mjs$/, - include: /node_modules/, - type: "javascript/auto", - resolve: { - fullySpecified: false, - }, - }, - { - test: [/\.jsx?$/, /\.tsx?$/], - use: ["ts-loader"], - exclude: /node_modules/, - }, - { - test: /\.css$/, - use: ["style-loader", "css-loader"], - }, - { - test: /\.(scss|sass)$/, - use: ["style-loader", "css-loader", "postcss-loader"], - }, - { - test: /\.(jpe?g|png|gif|svg)$/i, - use: [ - "file-loader?hash=sha512&digest=hex&name=img/[contenthash].[ext]", - "image-webpack-loader?bypassOnDebug&optipng.optimizationLevel=7&gifsicle.interlaced=false", - ], - }, - { - test: /\.(woff|woff2|ttf)$/, - use: { - loader: "url-loader", - }, - }, - ], - }, - plugins: [ - new CopyPlugin({ - patterns: [ - { - from: "node_modules/onnxruntime-web/dist/*.wasm", - to: "[name][ext]", - }, - { - from: "model", - to: "model", - }, - { - from: "src/assets", - to: "assets", - }, - ], - }), - new HtmlWebpackPlugin({ - template: "./src/assets/index.html", - }), - new FriendlyErrorsWebpackPlugin(), - new webpack.ProvidePlugin({ - process: "process/browser", - }), - ], -}; diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html deleted file mode 100644 index cbcd53c19953b4421dc7b4a537eef327eafd4cf1..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/demo/src/assets/index.html +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - Segment Anything Demo - - - - - - -
      - - diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh deleted file mode 100644 index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/aws/userdata.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html -# This script will run only once on first instance start (for a re-start script see mime.sh) -# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir -# Use >300 GB SSD - -cd home/ubuntu -if [ ! -d yolov5 ]; then - echo "Running first-time script." # install dependencies, download COCO, pull Docker - git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5 - cd yolov5 - bash data/scripts/get_coco.sh && echo "COCO done." & - sudo docker pull ultralytics/yolov5:latest && echo "Docker done." & - python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." & - wait && echo "All tasks done." # finish background tasks -else - echo "Running re-start script." # resume interrupted runs - i=0 - list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour' - while IFS= read -r id; do - ((i++)) - echo "restarting container $i: $id" - sudo docker start $id - # sudo docker exec -it $id python train.py --resume # single-GPU - sudo docker exec -d $id python utils/aws/resume.py # multi-scenario - done <<<"$list" -fi diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md deleted file mode 100644 index 6a0b153de79d8689018bbe6c53870d69e6d2700e..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cogz Cmms Maintenance Software Crack 15 [TOP].md +++ /dev/null @@ -1,16 +0,0 @@ - -

      Why You Should Avoid Cogz Cmms Maintenance Software Crack 15

      -

      Cogz Cmms Maintenance Software is a powerful and easy-to-use solution that helps you manage your maintenance department. It automates preventive maintenance tasks, tracks work orders, manages spare parts inventory, and generates reports to optimize your maintenance efficiency. Cogz Cmms Maintenance Software has been used by thousands of customers in various industries, such as manufacturing, food processing, facilities management, transportation, government, education, health care, hospitality, and more.

      -

      Cogz Cmms Maintenance Software Crack 15


      DOWNLOAD >> https://urlcod.com/2uIbeP



      -

      However, some people may be tempted to use a cracked version of Cogz Cmms Maintenance Software, such as Cogz Cmms Maintenance Software Crack 15. This is a risky and unethical practice that can have serious consequences for your business. Here are some reasons why you should avoid using Cogz Cmms Maintenance Software Crack 15:

      -
        -
      • It is illegal. Using a cracked version of Cogz Cmms Maintenance Software is a violation of the software license agreement and a form of software piracy. You are stealing intellectual property from the software developer and depriving them of their rightful revenue. You may face legal action from the software developer or the authorities if you are caught using a cracked version of Cogz Cmms Maintenance Software.
              
      • It is unsafe. Using a cracked version of Cogz Cmms Maintenance Software exposes you to potential malware, viruses, spyware, ransomware, or other malicious software that may be embedded in the crack file. These can harm your computer system, compromise your data security, corrupt your files, or lock you out of your system. You may lose valuable information or incur additional costs to repair or replace your hardware or software.
              
      • It is unreliable. Using a cracked version of Cogz Cmms Maintenance Software may result in poor performance, errors, bugs, crashes, or compatibility issues. The crack file may not work properly with the latest updates or features of the software. You may experience frequent downtime or data loss that can affect your maintenance operations and productivity. You may also miss out on technical support, customer service, or warranty from the software developer.
              
      • It is unethical. Using a cracked version of Cogz Cmms Maintenance Software is unfair to the software developer who invested time, money, and effort to create a quality product that meets your maintenance needs. It is also unfair to other customers who paid for the legitimate version of the software and expect fair competition and quality service. You are undermining the trust and reputation of the software industry and harming its innovation and growth.
              
      -

      Therefore, you should avoid using Cogz Cmms Maintenance Software Crack 15 and instead purchase the legitimate version of Cogz Cmms Maintenance Software from their official website[^1^]. You will get a fully functional and secure software that will help you take control of your maintenance department and create efficiencies. You will also get access to free trial[^1^], free updates[^2^], cloud option[^3^], technical support[^2^], customer service[^2^], and warranty[^2^] from the software developer. You will also be supporting the software industry and its ethical standards.

      -

      Cogz Cmms Maintenance Software is a smart investment for your maintenance department. Don't risk your business by using a cracked version of Cogz Cmms Maintenance Software. Get the real deal today!

      -

              
      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md deleted file mode 100644 index 5ce5f120bd5a4c1985f613291008f4945bbd387b..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Program Development In Java Abstraction Specification And Object-Oriented Design Download Pdf.md +++ /dev/null @@ -1,38 +0,0 @@ -
      -

      Program Development In Java: Abstraction, Specification, And Object-Oriented Design Download Pdf - A Comprehensive Guide

      -

      If you are looking for a book that teaches you how to develop software using Java, you might be interested in Program Development In Java: Abstraction, Specification, And Object-Oriented Design by Barbara Liskov and John Guttag. This book covers the fundamental concepts and principles of software engineering, such as abstraction, specification, modularity, inheritance, polymorphism, and design patterns. It also shows you how to apply these concepts and principles to create high-quality Java programs that are easy to understand, maintain, and reuse.

      -

      Program Development In Java: Abstraction, Specification, And Object-Oriented Design Download Pdf


      Download Zip ››››› https://urlcod.com/2uIasl



      -

      In this article, we will give you a brief overview of the book and its contents, as well as provide you with a link to download the pdf version for free. We will also share some of the benefits and challenges of learning program development in Java using this book.

      -

      What is Program Development In Java: Abstraction, Specification, And Object-Oriented Design?

      -

      Program Development In Java: Abstraction, Specification, And Object-Oriented Design is a textbook written by Barbara Liskov and John Guttag, two renowned computer scientists and professors at MIT. The book was published in 2000 by Addison-Wesley Professional and has been widely used in undergraduate and graduate courses on software engineering and object-oriented programming.

      -

      The book aims to teach students how to design and implement software systems using Java as the programming language. It focuses on the use of abstraction and specification as tools for managing complexity and ensuring correctness. It also introduces object-oriented design as a way of organizing software components into classes and interfaces that support reuse and extensibility. The book covers topics such as:

      -
        -
      • The role of specifications in software development
              
      • The concept of abstract data types and their implementation in Java
              
      • The notion of subtyping and its relation to inheritance and polymorphism
              
      • The design of generic classes and methods using Java generics
              
      • The use of exceptions and assertions for error handling and verification
              
      • The application of design patterns to common software problems
              
      • The development of graphical user interfaces using Java Swing
              
      • The testing and debugging of Java programs using JUnit and other tools
              
      -
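              As a rough illustration of the abstraction and specification ideas listed above, here is a minimal sketch of an abstract data type in Java. It is not taken from the book; the class name, the EFFECTS-style comments and the hidden list representation are illustrative choices only.

              ```java
              import java.util.ArrayList;
              import java.util.List;

              // Illustrative sketch only (not taken from the book): a tiny abstract data type
              // written in the "specification first" style described above. The comments act
              // as the specification; the ArrayList is a hidden representation detail.
              public class IntSet {
                  // Representation invariant: 'elements' contains no duplicates.
                  private final List<Integer> elements = new ArrayList<>();

                  // EFFECTS: adds x to this set; has no effect if x is already present.
                  public void insert(int x) {
                      if (!elements.contains(x)) {
                          elements.add(x);
                      }
                  }

                  // EFFECTS: returns true if and only if x is in this set.
                  public boolean isIn(int x) {
                      return elements.contains(x);
                  }

                  // EFFECTS: returns the number of distinct elements in this set.
                  public int size() {
                      return elements.size();
                  }

                  public static void main(String[] args) {
                      IntSet s = new IntSet();
                      s.insert(3);
                      s.insert(3);
                      s.insert(7);
                      System.out.println(s.size()); // prints 2
                      System.out.println(s.isIn(3)); // prints true
                  }
              }
              ```

              The point of writing the specification as comments is that client code relies only on insert, isIn and size, never on the ArrayList underneath, which is the abstraction barrier the topics above are concerned with.
              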

              The book also includes several case studies that show how these concepts and techniques apply to real-world problems; a miniature sketch in the spirit of the calculator case study follows the list below. Some of the case studies are:
              

      -
        -
      • A text editor that supports multiple fonts and styles
              
      • A calculator that can evaluate arithmetic expressions
              
      • A bank account system that supports multiple currencies and transactions
              
      • A game of Tetris that uses graphics and sound effects
              
      • A web browser that can display HTML pages and images
              
      -
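              To give a flavour of what such a case study involves, here is a deliberately tiny, illustrative calculator sketch; it is not code from the book, and it only handles integers separated by '+' and spaces, which is just enough to show a specified eval operation with its parsing details kept private.

              ```java
              // Illustrative sketch only (not code from the book): a miniature version of the
              // "calculator" case study idea. It evaluates expressions made of integers and '+'
              // separated by spaces, e.g. "1 + 2 + 40".
              public class TinyCalculator {
                  // EFFECTS: returns the value of an "a + b + c" style expression.
                  //          Throws NumberFormatException or IllegalArgumentException on bad input.
                  public static int eval(String expression) {
                      String[] tokens = expression.trim().split("\\s+");
                      int result = Integer.parseInt(tokens[0]);
                      for (int i = 1; i + 1 < tokens.length; i += 2) {
                          if (!tokens[i].equals("+")) {
                              throw new IllegalArgumentException("unsupported operator: " + tokens[i]);
                          }
                          result += Integer.parseInt(tokens[i + 1]);
                      }
                      return result;
                  }

                  public static void main(String[] args) {
                      System.out.println(eval("1 + 2 + 40")); // prints 43
                  }
              }
              ```

              Even at this size, the split between the specified eval operation and its private parsing details mirrors the specification-versus-implementation separation the article emphasises.
              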

      How to Download Program Development In Java: Abstraction, Specification, And Object-Oriented Design Pdf?

      -

      If you want to download the pdf version of Program Development In Java: Abstraction, Specification, And Object-Oriented Design, you can do so by clicking on the link below. The pdf file is hosted on a third-party website that requires you to complete a short survey before downloading. The survey is free and should take only a few minutes to complete. Once you finish the survey, you will be able to access the pdf file immediately.

      -

      Download Program Development In Java: Abstraction, Specification, And Object-Oriented Design Pdf Here

      -

      -

      What are the Benefits of Learning Program Development In Java Using This Book?

      -

      There are many benefits of learning program development in Java using this book. Some of them are:

      -
        -
      • You will learn from two experts who have decades of experience in teaching and researching software engineering and object-oriented programming.
              
      • You will gain a solid foundation in the theory and practice of software engineering, which will help you in your future studies and careers.
              
              • You will master the core features and concepts of Java, which is one of the most popular and widely used programming languages in the world.
              
        -
        -
        \ No newline at end of file diff --git a/spaces/nomic-ai/BelleGroup_school_math_0.25M/style.css b/spaces/nomic-ai/BelleGroup_school_math_0.25M/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/BelleGroup_school_math_0.25M/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/npc0/BookSumBeta/app.py b/spaces/npc0/BookSumBeta/app.py deleted file mode 100644 index 62ee32736642e9aef18a71f6a1804a22d6d55b73..0000000000000000000000000000000000000000 --- a/spaces/npc0/BookSumBeta/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import subprocess -import gradio as gr -from epub2txt import epub2txt -from websocket import create_connection - -if not os.path.exists(os.getenv("checkpoint_path")): - os.system("git clone --recurse-submodules https://github.com/ztxz16/fastllm.git") - os.system("cd fastllm; mkdir build; cd build; cmake ..; make -j; cd tools; python setup.py install --user --prefix=") - os.system("wget https://huggingface.co/huangyuyang/chatglm2-6b-int4.flm/resolve/main/chatglm2-6b-int4.flm") - subprocess.Popen(["uvicorn", "api:app"]) - -class GUI: - def __init__(self, *args, **kwargs): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown(scale=2).attach_load_event(self.hello, None) - gr.LoginButton() - gr.LogoutButton() - out = gr.Markdown() - inp = gr.File(file_types=['.epub']) - inp.change(self.process, inp, out) - self.ws = None - self.out = [] - demo.queue(concurrency_count=2).launch() - - def process(self, file, profile: gr.OAuthProfile | None): - if profile is None: - return gr.update(value='Login to access the tool.') - - h = '' - chapter_titles = epub2txt.content_titles - title = epub2txt.title - if self.ws is None: - self.ws = create_connection(f"ws://localhost:8000/ws") - self.ws.send(file.name) - res = '' - while 'output: ' not in res: - res = self.ws.recv() - if 'chsum: ' in res: - self.out.append(res.replace("chsum: ", "")) - elif 'draft_sum: ' in res: - h = res[11:] - elif 'output: ' in res: - self.ws.close() - self.ws = None - self.out = [] - yield gr.update(value=res) - yield gr.update( - value=f"# {title}\n\n" +h+ "\n\n".join( - [f"## {ct}\n\n{c}" for ct, c in zip(chapter_titles, self.out)])) - - def hello(self, profile: gr.OAuthProfile | None): - if profile is None: - return '# ePub summarization tool\n\nLogin to access the tool.' - return f"# ePub summarization tool\n\nWelcome {profile.name}!!" 
- -GUI() \ No newline at end of file diff --git a/spaces/oliver2023/chatgpt-on-wechat/plugins/plugin_manager.py b/spaces/oliver2023/chatgpt-on-wechat/plugins/plugin_manager.py deleted file mode 100644 index c6a663152c6dc865140d9f9dafcf07a6b0d1a5f2..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/plugins/plugin_manager.py +++ /dev/null @@ -1,182 +0,0 @@ -# encoding:utf-8 - -import importlib -import json -import os -from common.singleton import singleton -from common.sorted_dict import SortedDict -from .event import * -from common.log import logger -from config import conf - - -@singleton -class PluginManager: - def __init__(self): - self.plugins = SortedDict(lambda k,v: v.priority,reverse=True) - self.listening_plugins = {} - self.instances = {} - self.pconf = {} - - def register(self, name: str, desire_priority: int = 0, **kwargs): - def wrapper(plugincls): - plugincls.name = name - plugincls.priority = desire_priority - plugincls.desc = kwargs.get('desc') - plugincls.author = kwargs.get('author') - plugincls.version = kwargs.get('version') if kwargs.get('version') != None else "1.0" - plugincls.namecn = kwargs.get('namecn') if kwargs.get('namecn') != None else name - plugincls.hidden = kwargs.get('hidden') if kwargs.get('hidden') != None else False - plugincls.enabled = True - self.plugins[name.upper()] = plugincls - logger.info("Plugin %s_v%s registered" % (name, plugincls.version)) - return plugincls - return wrapper - - def save_config(self): - with open("./plugins/plugins.json", "w", encoding="utf-8") as f: - json.dump(self.pconf, f, indent=4, ensure_ascii=False) - - def load_config(self): - logger.info("Loading plugins config...") - - modified = False - if os.path.exists("./plugins/plugins.json"): - with open("./plugins/plugins.json", "r", encoding="utf-8") as f: - pconf = json.load(f) - pconf['plugins'] = SortedDict(lambda k,v: v["priority"],pconf['plugins'],reverse=True) - else: - modified = True - pconf = {"plugins": SortedDict(lambda k,v: v["priority"],reverse=True)} - self.pconf = pconf - if modified: - self.save_config() - return pconf - - def scan_plugins(self): - logger.info("Scaning plugins ...") - plugins_dir = "./plugins" - for plugin_name in os.listdir(plugins_dir): - plugin_path = os.path.join(plugins_dir, plugin_name) - if os.path.isdir(plugin_path): - # 判断插件是否包含同名.py文件 - main_module_path = os.path.join(plugin_path, plugin_name+".py") - if os.path.isfile(main_module_path): - # 导入插件 - import_path = "plugins.{}.{}".format(plugin_name, plugin_name) - try: - main_module = importlib.import_module(import_path) - except Exception as e: - logger.warn("Failed to import plugin %s: %s" % (plugin_name, e)) - continue - pconf = self.pconf - new_plugins = [] - modified = False - for name, plugincls in self.plugins.items(): - rawname = plugincls.name - if rawname not in pconf["plugins"]: - new_plugins.append(plugincls) - modified = True - logger.info("Plugin %s not found in pconfig, adding to pconfig..." 
% name) - pconf["plugins"][rawname] = {"enabled": plugincls.enabled, "priority": plugincls.priority} - else: - self.plugins[name].enabled = pconf["plugins"][rawname]["enabled"] - self.plugins[name].priority = pconf["plugins"][rawname]["priority"] - self.plugins._update_heap(name) # 更新下plugins中的顺序 - if modified: - self.save_config() - return new_plugins - - def refresh_order(self): - for event in self.listening_plugins.keys(): - self.listening_plugins[event].sort(key=lambda name: self.plugins[name].priority, reverse=True) - - def activate_plugins(self): # 生成新开启的插件实例 - for name, plugincls in self.plugins.items(): - if plugincls.enabled: - if name not in self.instances: - try: - instance = plugincls() - except Exception as e: - logger.warn("Failed to create init %s, diabled. %s" % (name, e)) - self.disable_plugin(name) - continue - self.instances[name] = instance - for event in instance.handlers: - if event not in self.listening_plugins: - self.listening_plugins[event] = [] - self.listening_plugins[event].append(name) - self.refresh_order() - - def reload_plugin(self, name:str): - name = name.upper() - if name in self.instances: - for event in self.listening_plugins: - if name in self.listening_plugins[event]: - self.listening_plugins[event].remove(name) - del self.instances[name] - self.activate_plugins() - return True - return False - - def load_plugins(self): - self.load_config() - self.scan_plugins() - pconf = self.pconf - logger.debug("plugins.json config={}".format(pconf)) - for name,plugin in pconf["plugins"].items(): - if name.upper() not in self.plugins: - logger.error("Plugin %s not found, but found in plugins.json" % name) - self.activate_plugins() - - def emit_event(self, e_context: EventContext, *args, **kwargs): - if e_context.event in self.listening_plugins: - for name in self.listening_plugins[e_context.event]: - if self.plugins[name].enabled and e_context.action == EventAction.CONTINUE: - logger.debug("Plugin %s triggered by event %s" % (name,e_context.event)) - instance = self.instances[name] - instance.handlers[e_context.event](e_context, *args, **kwargs) - return e_context - - def set_plugin_priority(self, name:str, priority:int): - name = name.upper() - if name not in self.plugins: - return False - if self.plugins[name].priority == priority: - return True - self.plugins[name].priority = priority - self.plugins._update_heap(name) - rawname = self.plugins[name].name - self.pconf["plugins"][rawname]["priority"] = priority - self.pconf["plugins"]._update_heap(rawname) - self.save_config() - self.refresh_order() - return True - - def enable_plugin(self, name:str): - name = name.upper() - if name not in self.plugins: - return False - if not self.plugins[name].enabled : - self.plugins[name].enabled = True - rawname = self.plugins[name].name - self.pconf["plugins"][rawname]["enabled"] = True - self.save_config() - self.activate_plugins() - return True - return True - - def disable_plugin(self, name:str): - name = name.upper() - if name not in self.plugins: - return False - if self.plugins[name].enabled : - self.plugins[name].enabled = False - rawname = self.plugins[name].name - self.pconf["plugins"][rawname]["enabled"] = False - self.save_config() - return True - return True - - def list_plugins(self): - return self.plugins \ No newline at end of file diff --git a/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/Dockerfile b/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/Dockerfile deleted file mode 100644 index 
a67b4805b05dfe1e21841f7fa7dd82b5a005001a..0000000000000000000000000000000000000000 --- a/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/Dockerfile +++ /dev/null @@ -1,45 +0,0 @@ -# syntax=docker/dockerfile:1 - -# Comments are provided throughout this file to help you get started. -# If you need more help, visit the Dockerfile reference guide at -# https://docs.docker.com/engine/reference/builder/ - -ARG PYTHON_VERSION=3.9 -FROM python:${PYTHON_VERSION}-slim as base - -RUN apt-get update && apt-get install -y \ - build-essential \ - curl \ - software-properties-common \ - git \ - && rm -rf /var/lib/apt/lists/* - -# Copy the requirements file into the container. -COPY requirements.txt . - -# Install the dependencies from the requirements file. -RUN pip3 install --no-cache-dir -r requirements.txt - -# Prevents Python from writing pyc files. -ENV PYTHONDONTWRITEBYTECODE=1 - -# Keeps Python from buffering stdout and stderr to avoid situations where -# the application crashes without emitting any logs due to buffering. -ENV PYTHONUNBUFFERED=1 - -WORKDIR /app -# Copy the source code into the container. -COPY . . - -# Create a non-privileged user that the app will run under. -# See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user -# Switch to the non-privileged user to run the application. -USER 1001 - -# Expose the port that the application listens on. -EXPOSE 7860 - -HEALTHCHECK CMD curl --fail http://localhost:7860/_stcore/health - -# set entrypoint for interactive shells -ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=7860", "--server.address=0.0.0.0"] diff --git a/spaces/orpatashnik/local-prompt-mixing/src/prompt_utils.py b/spaces/orpatashnik/local-prompt-mixing/src/prompt_utils.py deleted file mode 100644 index 5611f071ad11e99692bcfb60bb869ba05ce4fbc7..0000000000000000000000000000000000000000 --- a/spaces/orpatashnik/local-prompt-mixing/src/prompt_utils.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -import torch -import numpy as np -from tqdm import tqdm - - -def get_topk_similar_words(model, prompt, base_word, vocab, k=30): - text_input = model.tokenizer( - [prompt.format(word=base_word)], - padding="max_length", - max_length=model.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - with torch.no_grad(): - encoder_output = model.text_encoder(text_input.input_ids.to(model.device)) - full_prompt_embedding = encoder_output.pooler_output - full_prompt_embedding = full_prompt_embedding / full_prompt_embedding.norm(p=2, dim=-1, keepdim=True) - - prompts = [prompt.format(word=word) for word in vocab] - batch_size = 1000 - all_prompts_embeddings = [] - for i in tqdm(range(0, len(prompts), batch_size)): - curr_prompts = prompts[i:i + batch_size] - with torch.no_grad(): - text_input = model.tokenizer( - curr_prompts, - padding="max_length", - max_length=model.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - curr_embeddings = model.text_encoder(text_input.input_ids.to(model.device)).pooler_output - all_prompts_embeddings.append(curr_embeddings) - - all_prompts_embeddings = torch.cat(all_prompts_embeddings) - all_prompts_embeddings = all_prompts_embeddings / all_prompts_embeddings.norm(p=2, dim=-1, keepdim=True) - prompts_similarities = all_prompts_embeddings.matmul(full_prompt_embedding.view(-1, 1)) - sorted_prompts_similarities = np.flip(prompts_similarities.cpu().numpy().reshape(-1).argsort()) - - print(f"prompt: {prompt}") - print(f"initial word: {base_word}") - print(f"TOP {k} SIMILAR WORDS:") - 
similar_words = [vocab[index] for index in sorted_prompts_similarities[:k]] - print(similar_words) - return similar_words - -def get_proxy_words(args, ldm_stable): - if len(args.proxy_words) > 0: - return [args.object_of_interest] + args.proxy_words - vocab = list(json.load(open("vocab.json")).keys()) - vocab = [word for word in vocab if word.isalpha() and len(word) > 1] - filtered_vocab = get_topk_similar_words(ldm_stable, "a photo of a {word}", args.object_of_interest, vocab, k=50) - proxy_words = get_topk_similar_words(ldm_stable, args.prompt, args.object_of_interest, filtered_vocab, k=args.number_of_variations) - if proxy_words[0] != args.object_of_interest: - proxy_words = [args.object_of_interest] + proxy_words - - return proxy_words - -def get_proxy_prompts(args, ldm_stable): - proxy_words = get_proxy_words(args, ldm_stable) - prompts = [args.prompt.format(word=args.object_of_interest)] - proxy_prompts = [{"word": word, "prompt": args.prompt.format(word=word)} for word in proxy_words] - return proxy_words, prompts, proxy_prompts \ No newline at end of file diff --git a/spaces/osbm/streamlit-helloworld/app.py b/spaces/osbm/streamlit-helloworld/app.py deleted file mode 100644 index e68eedaad9102b885b772984c39fa8d97df8439a..0000000000000000000000000000000000000000 --- a/spaces/osbm/streamlit-helloworld/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import streamlit as st -import subprocess -import pandas as pd -# import StringIO -from io import StringIO - - -st.title("Hello World") -st.write("This is a test") -output = subprocess.check_output("df -h", shell=True) - - -# make it a dataframe -df = pd.read_csv(StringIO(output.decode("utf-8")), sep="\s+") -df = df.drop(columns=["Mounted", "on"]) - -# show the dataframe -st.write(df) \ No newline at end of file diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_p\303\251k.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_p\303\251k.html" deleted file mode 100644 index b00dcc8a7f6a759569f8dd94d06b2f83fb722311..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_p\303\251k.html" +++ /dev/null @@ -1,23 +0,0 @@ -
0th instance:
- Source Saliency Heatmap
- x: Generated tokens, y: Attributed tokens
-
-                 ▁He's → ▁She's    ▁a        ▁baker.    </s>
- ▁Ő              0.166             0.021     0.019      0.026
- ▁pék.           -0.146            -0.003    0.001      0.004
- </s>            0.0               0.0       0.0        0.0
- probability     -0.299            0.0       0.003      0.002
-
        - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md deleted file mode 100644 index d83dc928c7a1164b3e8896bcfa1ef5d417ea6b80..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/README.md +++ /dev/null @@ -1,163 +0,0 @@ -## Training an unconditional diffusion model - -Creating a training image set is [described in a different document](https://huggingface.co/docs/datasets/image_process#image-datasets). - -### Installing the dependencies - -Before running the scripts, make sure to install the library's training dependencies: - -**Important** - -To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install . -``` - -Then cd in the example folder and run -```bash -pip install -r requirements.txt -``` - - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -### Unconditional Flowers - -The command to train a DDPM UNet model on the Oxford Flowers dataset: - -```bash -accelerate launch train_unconditional.py \ - --dataset_name="huggan/flowers-102-categories" \ - --resolution=64 --center_crop --random_flip \ - --output_dir="ddpm-ema-flowers-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --use_ema \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision=no \ - --push_to_hub -``` -An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64 - -A full training run takes 2 hours on 4xV100 GPUs. - - - - -### Unconditional Pokemon - -The command to train a DDPM UNet model on the Pokemon dataset: - -```bash -accelerate launch train_unconditional.py \ - --dataset_name="huggan/pokemon" \ - --resolution=64 --center_crop --random_flip \ - --output_dir="ddpm-ema-pokemon-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --use_ema \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision=no \ - --push_to_hub -``` -An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64 - -A full training run takes 2 hours on 4xV100 GPUs. - - - -### Training with multiple GPUs - -`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) -for running distributed training with `accelerate`. Here is an example command: - -```bash -accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \ - --dataset_name="huggan/pokemon" \ - --resolution=64 --center_crop --random_flip \ - --output_dir="ddpm-ema-pokemon-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --use_ema \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision="fp16" \ - --logger="wandb" -``` - -To be able to use Weights and Biases (`wandb`) as a logger you need to install the library: `pip install wandb`. 
- -### Using your own data - -To use your own dataset, there are 2 ways: -- you can either provide your own folder as `--train_data_dir` -- or you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument. - -Below, we explain both in more detail. - -#### Provide the dataset as a folder - -If you provide your own folders with images, the script expects the following directory structure: - -```bash -data_dir/xxx.png -data_dir/xxy.png -data_dir/[...]/xxz.png -``` - -In other words, the script will take care of gathering all images inside the folder. You can then run the script like this: - -```bash -accelerate launch train_unconditional.py \ - --train_data_dir \ - -``` - -Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects. - -#### Upload your data to the hub, as a (possibly private) repo - -It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following: - -```python -from datasets import load_dataset - -# example 1: local folder -dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") - -# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) -dataset = load_dataset("imagefolder", data_files="path_to_zip_file") - -# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) -dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip") - -# example 4: providing several splits -dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]}) -``` - -`ImageFolder` will create an `image` column containing the PIL-encoded images. - -Next, push it to the hub! - -```python -# assuming you have ran the huggingface-cli login command in a terminal -dataset.push_to_hub("name_of_your_dataset") - -# if you want to push to a private repo, simply pass private=True: -dataset.push_to_hub("name_of_your_dataset", private=True) -``` - -and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub. - -More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets). 
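As a rough illustration of that last step (not part of the original README), launching training against a hub-hosted dataset is just a matter of pointing `--dataset_name` at the repo you pushed. This is a minimal sketch based on the commands shown earlier in this README; the dataset name and output directory are placeholders:

```bash
# "name_of_your_dataset" is a placeholder for the repo created with push_to_hub
accelerate launch train_unconditional.py \
  --dataset_name="name_of_your_dataset" \
  --resolution=64 --center_crop --random_flip \
  --output_dir="ddpm-ema-custom-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no
```

If the dataset repo was pushed as private, you would also need to be authenticated (for example via `huggingface-cli login`) before launching.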
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py deleted file mode 100644 index bc6ac82217a37030740b3861242932f0e9bd8dd4..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/latent_diffusion/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - get_objects_from_module, - is_torch_available, - is_transformers_available, -) - - -_dummy_objects = {} -_import_structure = {} - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils import dummy_torch_and_transformers_objects # noqa F403 - - _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) -else: - _import_structure["pipeline_latent_diffusion"] = ["LDMBertModel", "LDMTextToImagePipeline"] - _import_structure["pipeline_latent_diffusion_superresolution"] = ["LDMSuperResolutionPipeline"] - - -if TYPE_CHECKING: - try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() - - except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * - else: - from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline - from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - ) - - for name, value in _dummy_objects.items(): - setattr(sys.modules[__name__], name, value) diff --git a/spaces/paulbricman/velma/scripts/run_tweets.py b/spaces/paulbricman/velma/scripts/run_tweets.py deleted file mode 100644 index 52e20a5cc7bacc3045cff74d980a2c4137139ed1..0000000000000000000000000000000000000000 --- a/spaces/paulbricman/velma/scripts/run_tweets.py +++ /dev/null @@ -1,57 +0,0 @@ -from pathlib import Path -import pickle -from src.util import filter -from src.abduction import infer -from src.baselines import infer_embs, infer_nli -from transformers import AutoTokenizer, AutoModelForCausalLM -from sentence_transformers import CrossEncoder, SentenceTransformer -import pandas as pd -from tqdm import tqdm - - -df = pd.read_csv(Path('..') / 'data' / 'tweets' / 'tweets.csv') -users = ['nabla_theta', 'slatestarcodex', 'stuhlmueller', 'ESYudkowsky', 'ben_j_todd', - 'ch402', 'willmacaskill', 'hardmaru', 'kenneth0stanley', 'RichardMCNgo'] - -emb_model = SentenceTransformer('all-MiniLM-L6-v2') -# nli_model = CrossEncoder('cross-encoder/nli-deberta-v3-base') -lm_model = AutoModelForCausalLM.from_pretrained( - 'gustavecortal/gpt-neo-2.7B-8bit') -lm_tok = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-2.7B') -print('(*) Loaded models') - -for user in tqdm(users): - claim_tweets = df[df['username'] == user][pd.notna( - df['extracted_claim'])][pd.notna(df['negated_claim'])] - - for approach in ['embs', 'nli_relative', 'nli_absolute', 'lm']: - print(user, approach) - aggregate = [] - artifact_path = Path( - '..') / 'data' / 'tweets_artifacts' / approach / (user + '.pkl') - - for idx, row in claim_tweets.iterrows(): - other_tweets = df[df['username'] == - user][df['extracted_claim'] != row['extracted_claim']]['tweet'].values - - 
selection = filter( - row['extracted_claim'], other_tweets, emb_model, top_k=5) - print('(*) Filtered paragraphs') - probs = [] - - for tweet in selection: - if approach == 'embs': - probs += [infer_embs(tweet, [row['extracted_claim'], - row['negated_claim']], encoder=emb_model)[0]] - elif approach == 'nli_absolute': - probs += [infer_nli(tweet, - [row['extracted_claim']], mode='absolute')[0]] - elif approach == 'nli_relative': - probs += [infer_nli(tweet, [row['extracted_claim'], - row['negated_claim']], mode='relative')[0]] - elif approach == 'lm': - probs += [infer(tweet, [row['extracted_claim'], row['negated_claim']], - model=lm_model, tokenizer=lm_tok, return_components=True)] - - aggregate += [probs] - pickle.dump(aggregate, open(artifact_path, 'wb')) diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py deleted file mode 100644 index e7250cd3c4ce887aa336be303998099e19c7644a..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/confusion_viz.py +++ /dev/null @@ -1,99 +0,0 @@ -from threading import local -import torch -import wandb -import numpy as np -import PIL.Image -from typing import Iterable - -from utils.val_loop_hook import ValidationLoopHook - -def _strip_image_from_grid_row(row, gap=5, bg=255): - strip = torch.full( - (row.shape[0] * (row.shape[3] + gap) - gap, - row.shape[1] * (row.shape[3] + gap) - gap), bg, dtype=row.dtype) - for i in range(0, row.shape[0] * row.shape[1]): - strip[(i // row.shape[1]) * (row.shape[2] + gap) : ((i // row.shape[1])+1) * (row.shape[2] + gap) - gap, - (i % row.shape[1]) * (row.shape[3] + gap) : ((i % row.shape[1])+1) * (row.shape[3] + gap) - gap] = row[i // row.shape[1]][i % row.shape[1]] - return PIL.Image.fromarray(strip.numpy()) - -class ConfusionVisualizer(ValidationLoopHook): - def __init__(self, image_shape: Iterable[int], num_classes: int, num_images: int = 5, num_slices: int = 8): - self.image_shape = image_shape - self.num_images = num_images - self.num_classes = num_classes - self.num_slices = num_slices - - self.activations = -99 * torch.ones(self.num_classes, self.num_images) - self.images = torch.zeros(torch.Size([self.num_classes, self.num_images]) + torch.Size(self.image_shape)) - - def process(self, batch, target_batch, logits_batch, prediction_batch): - image_batch = batch["image"] - - with torch.no_grad(): - local_activations = torch.amax(logits_batch, dim=-1) - - # filter samples where the prediction does not line up with the target - confused_samples = (prediction_batch != target_batch) - - # filter public dataset samples - public = torch.tensor(["verse" in id for id in batch["verse_id"]]).type_as(confused_samples) - - mask = confused_samples & public - - for current_idx in torch.nonzero(mask).squeeze(1): - target_class = target_batch[current_idx] - # next item in local batch has a higher activation than the previous confusions for this class, replace it - if local_activations[current_idx] > torch.min(self.activations[target_class]): - idx_to_replace = torch.argsort(self.activations[target_class])[0] - self.activations[target_class, idx_to_replace] = local_activations[current_idx] - self.images[target_class, idx_to_replace] = image_batch[current_idx].cpu() - - def trigger(self, module): - for class_idx in range(self.num_classes): - # determine final order such that the highest activations are placed on top - sorted_idx = 
torch.argsort(self.activations[class_idx], descending=True) - - self.images[class_idx] = self.images[class_idx, sorted_idx] - - normalize = lambda x: (x - np.min(x))/np.ptp(x) - - if len(self.images.shape) == 6: - # 3D, visualize slices - img_res = self.images[class_idx].shape[-1] - img_slices = torch.linspace(0, img_res-1, self.num_slices+2, dtype=torch.long)[1:-1] - - # Show all images slices in a larger combined image - top_confusing_samples = _strip_image_from_grid_row( - torch.stack([ - torch.stack([ - torch.tensor( - np.uint8(255 * normalize((self.images[class_idx, i, 0, ..., img_slices[s]]).numpy())) - ) - for s in range(self.num_slices)]) - for i in range(self.num_images if self.num_images < self.images[class_idx].shape[0] else self.images[class_idx].shape[0])]) - ) - - elif len(self.images.shape) == 5: - # 2D - top_confusing_samples = _strip_image_from_grid_row( - torch.stack([ - torch.stack([ - torch.tensor( - np.uint8(255 * normalize((self.images[class_idx, i, 0, ...]).numpy())) - ) - ]) - for i in range(self.num_images if self.num_images < self.images[class_idx].shape[0] else self.images[class_idx].shape[0])]) - ) - - else: - raise RuntimeError("Unknown image shape found for confusion visualization") - - module.logger.experiment.log({ - # class_idx represents the ground truth, i.e. these were samples to be classified as class_idx - # but they were predicted to belong to a different class - f"val/top_confusing_of_class_{class_idx}": wandb.Image(top_confusing_samples) - }) - - def reset(self): - self.activations = -99 * torch.ones(self.num_classes, self.num_images) - self.images = torch.zeros(torch.Size([self.num_classes, self.num_images]) + torch.Size(self.image_shape)) \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py deleted file mode 100644 index b8fb2154b6d0618b62281578e5e947bca487cee4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py +++ /dev/null @@ -1,51 +0,0 @@ -# -*- coding: utf-8 -*- -""" -backports.makefile -~~~~~~~~~~~~~~~~~~ - -Backports the Python 3 ``socket.makefile`` method for use with anything that -wants to create a "fake" socket object. -""" -import io -from socket import SocketIO - - -def backport_makefile( - self, mode="r", buffering=None, encoding=None, errors=None, newline=None -): - """ - Backport of ``socket.makefile`` from Python 3.5. 
- """ - if not set(mode) <= {"r", "w", "b"}: - raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,)) - writing = "w" in mode - reading = "r" in mode or not writing - assert reading or writing - binary = "b" in mode - rawmode = "" - if reading: - rawmode += "r" - if writing: - rawmode += "w" - raw = SocketIO(self, rawmode) - self._makefile_refs += 1 - if buffering is None: - buffering = -1 - if buffering < 0: - buffering = io.DEFAULT_BUFFER_SIZE - if buffering == 0: - if not binary: - raise ValueError("unbuffered streams must be binary") - return raw - if reading and writing: - buffer = io.BufferedRWPair(raw, raw, buffering) - elif reading: - buffer = io.BufferedReader(raw, buffering) - else: - assert writing - buffer = io.BufferedWriter(raw, buffering) - if binary: - return buffer - text = io.TextIOWrapper(buffer, encoding, errors, newline) - text.mode = mode - return text diff --git a/spaces/plzdontcry/dakubettergpt/src/types/theme.ts b/spaces/plzdontcry/dakubettergpt/src/types/theme.ts deleted file mode 100644 index 937ef1525dd2cca331e1dcdd964cae000049f982..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/types/theme.ts +++ /dev/null @@ -1 +0,0 @@ -export type Theme = 'light' | 'dark'; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py deleted file mode 100644 index 5fa7adfac842bfa5689fd1a41ae4017be1ebff6f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/parser.py +++ /dev/null @@ -1,529 +0,0 @@ -""" -This module started out as largely a copy paste from the stdlib's -optparse module with the features removed that we do not need from -optparse because we implement them in Click on a higher level (for -instance type handling, help formatting and a lot more). - -The plan is to remove more and more from here over time. - -The reason this is a different module and not optparse from the stdlib -is that there are differences in 2.x and 3.x about the error messages -generated and optparse in the stdlib uses gettext for no good reason -and might cause us issues. - -Click uses parts of optparse written by Gregory P. Ward and maintained -by the Python Software Foundation. This is limited to code in parser.py. - -Copyright 2001-2006 Gregory P. Ward. All rights reserved. -Copyright 2002-2006 Python Software Foundation. All rights reserved. -""" -# This code uses parts of optparse written by Gregory P. Ward and -# maintained by the Python Software Foundation. -# Copyright 2001-2006 Gregory P. Ward -# Copyright 2002-2006 Python Software Foundation -import typing as t -from collections import deque -from gettext import gettext as _ -from gettext import ngettext - -from .exceptions import BadArgumentUsage -from .exceptions import BadOptionUsage -from .exceptions import NoSuchOption -from .exceptions import UsageError - -if t.TYPE_CHECKING: - import typing_extensions as te - from .core import Argument as CoreArgument - from .core import Context - from .core import Option as CoreOption - from .core import Parameter as CoreParameter - -V = t.TypeVar("V") - -# Sentinel value that indicates an option was passed as a flag without a -# value but is not a flag option. Option.consume_value uses this to -# prompt or use the flag_value. 
-_flag_needs_value = object() - - -def _unpack_args( - args: t.Sequence[str], nargs_spec: t.Sequence[int] -) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]: - """Given an iterable of arguments and an iterable of nargs specifications, - it returns a tuple with all the unpacked arguments at the first index - and all remaining arguments as the second. - - The nargs specification is the number of arguments that should be consumed - or `-1` to indicate that this position should eat up all the remainders. - - Missing items are filled with `None`. - """ - args = deque(args) - nargs_spec = deque(nargs_spec) - rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = [] - spos: t.Optional[int] = None - - def _fetch(c: "te.Deque[V]") -> t.Optional[V]: - try: - if spos is None: - return c.popleft() - else: - return c.pop() - except IndexError: - return None - - while nargs_spec: - nargs = _fetch(nargs_spec) - - if nargs is None: - continue - - if nargs == 1: - rv.append(_fetch(args)) - elif nargs > 1: - x = [_fetch(args) for _ in range(nargs)] - - # If we're reversed, we're pulling in the arguments in reverse, - # so we need to turn them around. - if spos is not None: - x.reverse() - - rv.append(tuple(x)) - elif nargs < 0: - if spos is not None: - raise TypeError("Cannot have two nargs < 0") - - spos = len(rv) - rv.append(None) - - # spos is the position of the wildcard (star). If it's not `None`, - # we fill it with the remainder. - if spos is not None: - rv[spos] = tuple(args) - args = [] - rv[spos + 1 :] = reversed(rv[spos + 1 :]) - - return tuple(rv), list(args) - - -def split_opt(opt: str) -> t.Tuple[str, str]: - first = opt[:1] - if first.isalnum(): - return "", opt - if opt[1:2] == first: - return opt[:2], opt[2:] - return first, opt[1:] - - -def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str: - if ctx is None or ctx.token_normalize_func is None: - return opt - prefix, opt = split_opt(opt) - return f"{prefix}{ctx.token_normalize_func(opt)}" - - -def split_arg_string(string: str) -> t.List[str]: - """Split an argument string as with :func:`shlex.split`, but don't - fail if the string is incomplete. Ignores a missing closing quote or - incomplete escape sequence and uses the partial token as-is. - - .. code-block:: python - - split_arg_string("example 'my file") - ["example", "my file"] - - split_arg_string("example my\\") - ["example", "my"] - - :param string: String to split. - """ - import shlex - - lex = shlex.shlex(string, posix=True) - lex.whitespace_split = True - lex.commenters = "" - out = [] - - try: - for token in lex: - out.append(token) - except ValueError: - # Raised when end-of-string is reached in an invalid state. Use - # the partial token as-is. The quote or escape character is in - # lex.state, not lex.token. 
- out.append(lex.token) - - return out - - -class Option: - def __init__( - self, - obj: "CoreOption", - opts: t.Sequence[str], - dest: t.Optional[str], - action: t.Optional[str] = None, - nargs: int = 1, - const: t.Optional[t.Any] = None, - ): - self._short_opts = [] - self._long_opts = [] - self.prefixes: t.Set[str] = set() - - for opt in opts: - prefix, value = split_opt(opt) - if not prefix: - raise ValueError(f"Invalid start character for option ({opt})") - self.prefixes.add(prefix[0]) - if len(prefix) == 1 and len(value) == 1: - self._short_opts.append(opt) - else: - self._long_opts.append(opt) - self.prefixes.add(prefix) - - if action is None: - action = "store" - - self.dest = dest - self.action = action - self.nargs = nargs - self.const = const - self.obj = obj - - @property - def takes_value(self) -> bool: - return self.action in ("store", "append") - - def process(self, value: t.Any, state: "ParsingState") -> None: - if self.action == "store": - state.opts[self.dest] = value # type: ignore - elif self.action == "store_const": - state.opts[self.dest] = self.const # type: ignore - elif self.action == "append": - state.opts.setdefault(self.dest, []).append(value) # type: ignore - elif self.action == "append_const": - state.opts.setdefault(self.dest, []).append(self.const) # type: ignore - elif self.action == "count": - state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore - else: - raise ValueError(f"unknown action '{self.action}'") - state.order.append(self.obj) - - -class Argument: - def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1): - self.dest = dest - self.nargs = nargs - self.obj = obj - - def process( - self, - value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]], - state: "ParsingState", - ) -> None: - if self.nargs > 1: - assert value is not None - holes = sum(1 for x in value if x is None) - if holes == len(value): - value = None - elif holes != 0: - raise BadArgumentUsage( - _("Argument {name!r} takes {nargs} values.").format( - name=self.dest, nargs=self.nargs - ) - ) - - if self.nargs == -1 and self.obj.envvar is not None and value == (): - # Replace empty tuple with None so that a value from the - # environment may be tried. - value = None - - state.opts[self.dest] = value # type: ignore - state.order.append(self.obj) - - -class ParsingState: - def __init__(self, rargs: t.List[str]) -> None: - self.opts: t.Dict[str, t.Any] = {} - self.largs: t.List[str] = [] - self.rargs = rargs - self.order: t.List["CoreParameter"] = [] - - -class OptionParser: - """The option parser is an internal class that is ultimately used to - parse options and arguments. It's modelled after optparse and brings - a similar but vastly simplified API. It should generally not be used - directly as the high level Click classes wrap it for you. - - It's not nearly as extensible as optparse or argparse as it does not - implement features that are implemented on a higher level (such as - types or defaults). - - :param ctx: optionally the :class:`~click.Context` where this parser - should go with. - """ - - def __init__(self, ctx: t.Optional["Context"] = None) -> None: - #: The :class:`~click.Context` for this parser. This might be - #: `None` for some advanced use cases. - self.ctx = ctx - #: This controls how the parser deals with interspersed arguments. - #: If this is set to `False`, the parser will stop on the first - #: non-option. Click uses this to implement nested subcommands - #: safely. 
- self.allow_interspersed_args: bool = True - #: This tells the parser how to deal with unknown options. By - #: default it will error out (which is sensible), but there is a - #: second mode where it will ignore it and continue processing - #: after shifting all the unknown options into the resulting args. - self.ignore_unknown_options: bool = False - - if ctx is not None: - self.allow_interspersed_args = ctx.allow_interspersed_args - self.ignore_unknown_options = ctx.ignore_unknown_options - - self._short_opt: t.Dict[str, Option] = {} - self._long_opt: t.Dict[str, Option] = {} - self._opt_prefixes = {"-", "--"} - self._args: t.List[Argument] = [] - - def add_option( - self, - obj: "CoreOption", - opts: t.Sequence[str], - dest: t.Optional[str], - action: t.Optional[str] = None, - nargs: int = 1, - const: t.Optional[t.Any] = None, - ) -> None: - """Adds a new option named `dest` to the parser. The destination - is not inferred (unlike with optparse) and needs to be explicitly - provided. Action can be any of ``store``, ``store_const``, - ``append``, ``append_const`` or ``count``. - - The `obj` can be used to identify the option in the order list - that is returned from the parser. - """ - opts = [normalize_opt(opt, self.ctx) for opt in opts] - option = Option(obj, opts, dest, action=action, nargs=nargs, const=const) - self._opt_prefixes.update(option.prefixes) - for opt in option._short_opts: - self._short_opt[opt] = option - for opt in option._long_opts: - self._long_opt[opt] = option - - def add_argument( - self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1 - ) -> None: - """Adds a positional argument named `dest` to the parser. - - The `obj` can be used to identify the option in the order list - that is returned from the parser. - """ - self._args.append(Argument(obj, dest=dest, nargs=nargs)) - - def parse_args( - self, args: t.List[str] - ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]: - """Parses positional arguments and returns ``(values, args, order)`` - for the parsed options and arguments as well as the leftover - arguments if there are any. The order is a list of objects as they - appear on the command line. If arguments appear multiple times they - will be memorized multiple times as well. - """ - state = ParsingState(args) - try: - self._process_args_for_options(state) - self._process_args_for_args(state) - except UsageError: - if self.ctx is None or not self.ctx.resilient_parsing: - raise - return state.opts, state.largs, state.order - - def _process_args_for_args(self, state: ParsingState) -> None: - pargs, args = _unpack_args( - state.largs + state.rargs, [x.nargs for x in self._args] - ) - - for idx, arg in enumerate(self._args): - arg.process(pargs[idx], state) - - state.largs = args - state.rargs = [] - - def _process_args_for_options(self, state: ParsingState) -> None: - while state.rargs: - arg = state.rargs.pop(0) - arglen = len(arg) - # Double dashes always handled explicitly regardless of what - # prefixes are valid. - if arg == "--": - return - elif arg[:1] in self._opt_prefixes and arglen > 1: - self._process_opts(arg, state) - elif self.allow_interspersed_args: - state.largs.append(arg) - else: - state.rargs.insert(0, arg) - return - - # Say this is the original argument list: - # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] - # ^ - # (we are about to process arg(i)). 
- # - # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of - # [arg0, ..., arg(i-1)] (any options and their arguments will have - # been removed from largs). - # - # The while loop will usually consume 1 or more arguments per pass. - # If it consumes 1 (eg. arg is an option that takes no arguments), - # then after _process_arg() is done the situation is: - # - # largs = subset of [arg0, ..., arg(i)] - # rargs = [arg(i+1), ..., arg(N-1)] - # - # If allow_interspersed_args is false, largs will always be - # *empty* -- still a subset of [arg0, ..., arg(i-1)], but - # not a very interesting subset! - - def _match_long_opt( - self, opt: str, explicit_value: t.Optional[str], state: ParsingState - ) -> None: - if opt not in self._long_opt: - from difflib import get_close_matches - - possibilities = get_close_matches(opt, self._long_opt) - raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) - - option = self._long_opt[opt] - if option.takes_value: - # At this point it's safe to modify rargs by injecting the - # explicit value, because no exception is raised in this - # branch. This means that the inserted value will be fully - # consumed. - if explicit_value is not None: - state.rargs.insert(0, explicit_value) - - value = self._get_value_from_state(opt, option, state) - - elif explicit_value is not None: - raise BadOptionUsage( - opt, _("Option {name!r} does not take a value.").format(name=opt) - ) - - else: - value = None - - option.process(value, state) - - def _match_short_opt(self, arg: str, state: ParsingState) -> None: - stop = False - i = 1 - prefix = arg[0] - unknown_options = [] - - for ch in arg[1:]: - opt = normalize_opt(f"{prefix}{ch}", self.ctx) - option = self._short_opt.get(opt) - i += 1 - - if not option: - if self.ignore_unknown_options: - unknown_options.append(ch) - continue - raise NoSuchOption(opt, ctx=self.ctx) - if option.takes_value: - # Any characters left in arg? Pretend they're the - # next arg, and stop consuming characters of arg. - if i < len(arg): - state.rargs.insert(0, arg[i:]) - stop = True - - value = self._get_value_from_state(opt, option, state) - - else: - value = None - - option.process(value, state) - - if stop: - break - - # If we got any unknown options we recombine the string of the - # remaining options and re-attach the prefix, then report that - # to the state as new larg. This way there is basic combinatorics - # that can be achieved while still ignoring unknown arguments. - if self.ignore_unknown_options and unknown_options: - state.largs.append(f"{prefix}{''.join(unknown_options)}") - - def _get_value_from_state( - self, option_name: str, option: Option, state: ParsingState - ) -> t.Any: - nargs = option.nargs - - if len(state.rargs) < nargs: - if option.obj._flag_needs_value: - # Option allows omitting the value. - value = _flag_needs_value - else: - raise BadOptionUsage( - option_name, - ngettext( - "Option {name!r} requires an argument.", - "Option {name!r} requires {nargs} arguments.", - nargs, - ).format(name=option_name, nargs=nargs), - ) - elif nargs == 1: - next_rarg = state.rargs[0] - - if ( - option.obj._flag_needs_value - and isinstance(next_rarg, str) - and next_rarg[:1] in self._opt_prefixes - and len(next_rarg) > 1 - ): - # The next arg looks like the start of an option, don't - # use it as the value if omitting the value is allowed. 
- value = _flag_needs_value - else: - value = state.rargs.pop(0) - else: - value = tuple(state.rargs[:nargs]) - del state.rargs[:nargs] - - return value - - def _process_opts(self, arg: str, state: ParsingState) -> None: - explicit_value = None - # Long option handling happens in two parts. The first part is - # supporting explicitly attached values. In any case, we will try - # to long match the option first. - if "=" in arg: - long_opt, explicit_value = arg.split("=", 1) - else: - long_opt = arg - norm_long_opt = normalize_opt(long_opt, self.ctx) - - # At this point we will match the (assumed) long option through - # the long option matching code. Note that this allows options - # like "-foo" to be matched as long options. - try: - self._match_long_opt(norm_long_opt, explicit_value, state) - except NoSuchOption: - # At this point the long option matching failed, and we need - # to try with short options. However there is a special rule - # which says, that if we have a two character options prefix - # (applies to "--foo" for instance), we do not dispatch to the - # short option code and will instead raise the no option - # error. - if arg[:2] not in self._opt_prefixes: - self._match_short_opt(arg, state) - return - - if not self.ignore_unknown_options: - raise - - state.largs.append(arg) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py deleted file mode 100644 index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -import os -import tempfile -import shutil -import json -from subprocess import check_call, check_output -from tarfile import TarFile - -from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME - - -def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): - """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* - - filename is the timezone tarball from ``ftp.iana.org/tz``. - - """ - tmpdir = tempfile.mkdtemp() - zonedir = os.path.join(tmpdir, "zoneinfo") - moduledir = os.path.dirname(__file__) - try: - with TarFile.open(filename) as tf: - for name in zonegroups: - tf.extract(name, tmpdir) - filepaths = [os.path.join(tmpdir, n) for n in zonegroups] - - _run_zic(zonedir, filepaths) - - # write metadata file - with open(os.path.join(zonedir, METADATA_FN), 'w') as f: - json.dump(metadata, f, indent=4, sort_keys=True) - target = os.path.join(moduledir, ZONEFILENAME) - with TarFile.open(target, "w:%s" % format) as tf: - for entry in os.listdir(zonedir): - entrypath = os.path.join(zonedir, entry) - tf.add(entrypath, entry) - finally: - shutil.rmtree(tmpdir) - - -def _run_zic(zonedir, filepaths): - """Calls the ``zic`` compiler in a compatible way to get a "fat" binary. - - Recent versions of ``zic`` default to ``-b slim``, while older versions - don't even have the ``-b`` option (but default to "fat" binaries). The - current version of dateutil does not support Version 2+ TZif files, which - causes problems when used in conjunction with "slim" binaries, so this - function is used to ensure that we always get a "fat" binary. 
- """ - - try: - help_text = check_output(["zic", "--help"]) - except OSError as e: - _print_on_nosuchfile(e) - raise - - if b"-b " in help_text: - bloat_args = ["-b", "fat"] - else: - bloat_args = [] - - check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths) - - -def _print_on_nosuchfile(e): - """Print helpful troubleshooting message - - e is an exception raised by subprocess.check_call() - - """ - if e.errno == 2: - logging.error( - "Could not find zic. Perhaps you need to install " - "libc-bin or some other package that provides it, " - "or it's not in your PATH?") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py deleted file mode 100644 index 233f18c00f1adc946caa8affd970307a21490ea1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/tabs.py +++ /dev/null @@ -1,95 +0,0 @@ -from __future__ import annotations - -from gradio_client.documentation import document, set_documentation_group - -from gradio.blocks import BlockContext -from gradio.component_meta import ComponentMeta -from gradio.events import Events - -set_documentation_group("layout") - - -class Tabs(BlockContext, metaclass=ComponentMeta): - """ - Tabs is a layout element within Blocks that can contain multiple "Tab" Components. - """ - - EVENTS = [Events.change, Events.select] - - def __init__( - self, - *, - selected: int | str | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - ): - """ - Parameters: - selected: The currently selected tab. Must correspond to an id passed to the one of the child TabItems. Defaults to the first TabItem. - visible: If False, Tabs will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional string or list of strings that are assigned as the class of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, this layout will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - """ - BlockContext.__init__( - self, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - ) - self.selected = selected - - -@document() -class Tab(BlockContext, metaclass=ComponentMeta): - """ - Tab (or its alias TabItem) is a layout element. Components defined within the Tab will be visible when this tab is selected tab. - Example: - with gr.Blocks() as demo: - with gr.Tab("Lion"): - gr.Image("lion.jpg") - gr.Button("New Lion") - with gr.Tab("Tiger"): - gr.Image("tiger.jpg") - gr.Button("New Tiger") - Guides: controlling-layout - """ - - EVENTS = [Events.select] - - def __init__( - self, - label: str | None = None, - *, - id: int | str | None = None, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - ): - """ - Parameters: - label: The visual label for the tab - id: An optional identifier for the tab, required if you wish to control the selected tab from a predict function. - elem_id: An optional string that is assigned as the id of the
        containing the contents of the Tab layout. The same string followed by "-button" is attached to the Tab button. Can be used for targeting CSS styles. - elem_classes: An optional string or list of strings that are assigned as the class of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - BlockContext.__init__( - self, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - ) - self.label = label - self.id = id - - def get_expected_parent(self) -> type[Tabs]: - return Tabs - - def get_block_name(self): - return "tabitem" - - -TabItem = Tab diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-350b76bc.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-350b76bc.js deleted file mode 100644 index a1b8519e651a6198bd28f69a2a42ef456408d64d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-350b76bc.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as $}from"./Index-c74a8b7c.js";import{B as x}from"./Button-8eeccca1.js";import{B as ee}from"./BlockLabel-e3970ebb.js";import{E as le}from"./Empty-eeaba2d1.js";import"./index-50ad4c77.js";import"./svelte/svelte.js";const{SvelteComponent:te,append:ne,attr:v,detach:ae,init:ie,insert:se,noop:D,safe_not_equal:oe,svg_element:F}=window.__gradio__svelte__internal;function ce(a){let e,t;return{c(){e=F("svg"),t=F("path"),v(t,"fill","currentColor"),v(t,"d","M4 2H2v26a2 2 0 0 0 2 2h26v-2H4v-3h22v-8H4v-4h14V5H4Zm20 17v4H4v-4ZM16 7v4H4V7Z"),v(e,"xmlns","http://www.w3.org/2000/svg"),v(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),v(e,"aria-hidden","true"),v(e,"role","img"),v(e,"class","iconify iconify--carbon"),v(e,"width","100%"),v(e,"height","100%"),v(e,"preserveAspectRatio","xMidYMid meet"),v(e,"viewBox","0 0 32 32")},m(l,n){se(l,e,n),ne(e,t)},p:D,i:D,o:D,d(l){l&&ae(e)}}}class y extends te{constructor(e){super(),ie(this,e,null,ce,oe,{})}}const{SvelteComponent:fe,append:h,attr:m,destroy_each:re,detach:E,element:q,empty:_e,ensure_array_like:G,init:ue,insert:N,listen:de,noop:J,safe_not_equal:me,set_data:z,set_style:j,space:Z,text:I,toggle_class:V}=window.__gradio__svelte__internal,{createEventDispatcher:be}=window.__gradio__svelte__internal;function K(a,e,t){const l=a.slice();return l[5]=e[t],l[7]=t,l}function O(a){let e,t=G(a[0].confidences),l=[];for(let n=0;n{n("select",{index:r,value:g.label})};return a.$$set=r=>{"value"in r&&t(0,l=r.value),"color"in r&&t(1,i=r.color),"selectable"in r&&t(2,s=r.selectable)},[l,i,s,n,c]}class ve extends fe{constructor(e){super(),ue(this,e,he,ge,me,{value:0,color:1,selectable:2})}}const ke=ve,{SvelteComponent:we,assign:pe,check_outros:T,create_component:B,destroy_component:H,detach:R,empty:qe,get_spread_object:Me,get_spread_update:Ce,group_outros:U,init:Se,insert:Y,mount_component:L,safe_not_equal:Be,space:W,transition_in:k,transition_out:p}=window.__gradio__svelte__internal;function X(a){let e,t;return e=new ee({props:{Icon:y,label:a[6],disable:a[7]===!1}}),{c(){B(e.$$.fragment)},m(l,n){L(e,l,n),t=!0},p(l,n){const i={};n&64&&(i.label=l[6]),n&128&&(i.disable=l[7]===!1),e.$set(i)},i(l){t||(k(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){H(e,l)}}}function He(a){let e,t;return e=new le({props:{unpadded_box:!0,$$slots:{default:[je]},$$scope:{ctx:a}}}),{c(){B(e.$$.fragment)},m(l,n){L(e,l,n),t=!0},p(l,n){const 
i={};n&65536&&(i.$$scope={dirty:n,ctx:l}),e.$set(i)},i(l){t||(k(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){H(e,l)}}}function Le(a){let e,t;return e=new ke({props:{selectable:a[12],value:a[5],color:a[4]}}),e.$on("select",a[15]),{c(){B(e.$$.fragment)},m(l,n){L(e,l,n),t=!0},p(l,n){const i={};n&4096&&(i.selectable=l[12]),n&32&&(i.value=l[5]),n&16&&(i.color=l[4]),e.$set(i)},i(l){t||(k(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){H(e,l)}}}function je(a){let e,t;return e=new y({}),{c(){B(e.$$.fragment)},m(l,n){L(e,l,n),t=!0},i(l){t||(k(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){H(e,l)}}}function Ee(a){let e,t,l,n,i,s,c;const r=[{autoscroll:a[0].autoscroll},{i18n:a[0].i18n},a[10]];let g={};for(let o=0;o{_=null}),T());let d=n;n=C(o),n===d?b[n].p(o,u):(U(),p(b[d],1,1,()=>{b[d]=null}),T(),i=b[n],i?i.p(o,u):(i=b[n]=M[n](o),i.c()),k(i,1),i.m(s.parentNode,s))},i(o){c||(k(e.$$.fragment,o),k(_),k(i),c=!0)},o(o){p(e.$$.fragment,o),p(_),p(i),c=!1},d(o){o&&(R(t),R(l),R(s)),H(e,o),_&&_.d(o),b[n].d(o)}}}function Ne(a){let e,t;return e=new x({props:{test_id:"label",visible:a[3],elem_id:a[1],elem_classes:a[2],container:a[7],scale:a[8],min_width:a[9],padding:!1,$$slots:{default:[Ee]},$$scope:{ctx:a}}}),{c(){B(e.$$.fragment)},m(l,n){L(e,l,n),t=!0},p(l,[n]){const i={};n&8&&(i.visible=l[3]),n&2&&(i.elem_id=l[1]),n&4&&(i.elem_classes=l[2]),n&128&&(i.container=l[7]),n&256&&(i.scale=l[8]),n&512&&(i.min_width=l[9]),n&81137&&(i.$$scope={dirty:n,ctx:l}),e.$set(i)},i(l){t||(k(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){H(e,l)}}}function Ze(a,e,t){let l,n,{gradio:i}=e,{elem_id:s=""}=e,{elem_classes:c=[]}=e,{visible:r=!0}=e,{color:g=void 0}=e,{value:_={}}=e,{label:M=i.i18n("label.label")}=e,{container:b=!0}=e,{scale:C=null}=e,{min_width:o=void 0}=e,{loading_status:u}=e,{show_label:S=!0}=e,{_selectable:d=!1}=e;const A=({detail:f})=>i.dispatch("select",f);return a.$$set=f=>{"gradio"in f&&t(0,i=f.gradio),"elem_id"in f&&t(1,s=f.elem_id),"elem_classes"in f&&t(2,c=f.elem_classes),"visible"in f&&t(3,r=f.visible),"color"in f&&t(4,g=f.color),"value"in f&&t(5,_=f.value),"label"in f&&t(6,M=f.label),"container"in f&&t(7,b=f.container),"scale"in f&&t(8,C=f.scale),"min_width"in f&&t(9,o=f.min_width),"loading_status"in f&&t(10,u=f.loading_status),"show_label"in f&&t(11,S=f.show_label),"_selectable"in f&&t(12,d=f._selectable)},a.$$.update=()=>{a.$$.dirty&32&&t(14,{confidences:l,label:n}=_,l,(t(13,n),t(5,_))),a.$$.dirty&24577&&i.dispatch("change")},[i,s,c,r,g,_,M,b,C,o,u,S,d,n,l,A]}class ze extends we{constructor(e){super(),Se(this,e,Ze,Ne,Be,{gradio:0,elem_id:1,elem_classes:2,visible:3,color:4,value:5,label:6,container:7,scale:8,min_width:9,loading_status:10,show_label:11,_selectable:12})}}export{ke as BaseLabel,ze as default}; -//# sourceMappingURL=Index-350b76bc.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/commands/download.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/commands/download.py deleted file mode 100644 index 8ac5205e842fcc1e1711333983a40157bb863a7b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/commands/download.py +++ /dev/null @@ -1,214 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains command to download files from the Hub with the CLI. - -Usage: - huggingface-cli download --help - - # Download file - huggingface-cli download gpt2 config.json - - # Download entire repo - huggingface-cli download fffiloni/zeroscope --repo-type=space --revision=refs/pr/78 - - # Download repo with filters - huggingface-cli download gpt2 --include="*.safetensors" - - # Download with token - huggingface-cli download Wauplin/private-model --token=hf_*** - - # Download quietly (no progress bar, no warnings, only the returned path) - huggingface-cli download gpt2 config.json --quiet - - # Download to local dir - huggingface-cli download gpt2 --local-dir=./models/gpt2 -""" -import warnings -from argparse import Namespace, _SubParsersAction -from typing import List, Literal, Optional, Union - -from huggingface_hub import logging -from huggingface_hub._snapshot_download import snapshot_download -from huggingface_hub.commands import BaseHuggingfaceCLICommand -from huggingface_hub.constants import HF_HUB_ENABLE_HF_TRANSFER -from huggingface_hub.file_download import hf_hub_download -from huggingface_hub.utils import disable_progress_bars, enable_progress_bars - - -logger = logging.get_logger(__name__) - - -class DownloadCommand(BaseHuggingfaceCLICommand): - @staticmethod - def register_subcommand(parser: _SubParsersAction): - download_parser = parser.add_parser("download", help="Download files from the Hub") - download_parser.add_argument( - "repo_id", type=str, help="ID of the repo to download from (e.g. `username/repo-name`)." - ) - download_parser.add_argument( - "filenames", type=str, nargs="*", help="Files to download (e.g. `config.json`, `data/metadata.jsonl`)." - ) - download_parser.add_argument( - "--repo-type", - choices=["model", "dataset", "space"], - default="model", - help="Type of repo to download from (e.g. `dataset`).", - ) - download_parser.add_argument( - "--revision", - type=str, - help="An optional Git revision id which can be a branch name, a tag, or a commit hash.", - ) - download_parser.add_argument( - "--include", nargs="*", type=str, help="Glob patterns to match files to download." - ) - download_parser.add_argument( - "--exclude", nargs="*", type=str, help="Glob patterns to exclude from files to download." - ) - download_parser.add_argument( - "--cache-dir", type=str, help="Path to the directory where to save the downloaded files." - ) - download_parser.add_argument( - "--local-dir", - type=str, - help=( - "If set, the downloaded file will be placed under this directory either as a symlink (default) or a" - " regular file. Check out" - " https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder for more" - " details." - ), - ) - download_parser.add_argument( - "--local-dir-use-symlinks", - choices=["auto", "True", "False"], - default="auto", - help=( - "To be used with `local_dir`. If set to 'auto', the cache directory will be used and the file will be" - " either duplicated or symlinked to the local directory depending on its size. It set to `True`, a" - " symlink will be created, no matter the file size. 
If set to `False`, the file will either be" - " duplicated from cache (if already exists) or downloaded from the Hub and not cached." - ), - ) - download_parser.add_argument( - "--force-download", - action="store_true", - help="If True, the files will be downloaded even if they are already cached.", - ) - download_parser.add_argument( - "--resume-download", action="store_true", help="If True, resume a previously interrupted download." - ) - download_parser.add_argument( - "--token", type=str, help="A User Access Token generated from https://huggingface.co/settings/tokens" - ) - download_parser.add_argument( - "--quiet", - action="store_true", - help="If True, progress bars are disabled and only the path to the download files is printed.", - ) - download_parser.set_defaults(func=DownloadCommand) - - def __init__(self, args: Namespace) -> None: - self.token = args.token - self.repo_id: str = args.repo_id - self.filenames: List[str] = args.filenames - self.repo_type: str = args.repo_type - self.revision: Optional[str] = args.revision - self.include: Optional[List[str]] = args.include - self.exclude: Optional[List[str]] = args.exclude - self.cache_dir: Optional[str] = args.cache_dir - self.local_dir: Optional[str] = args.local_dir - self.force_download: bool = args.force_download - self.resume_download: bool = args.resume_download - self.quiet: bool = args.quiet - - # Raise if local_dir_use_symlinks is invalid - self.local_dir_use_symlinks: Union[Literal["auto"], bool] - use_symlinks_lowercase = args.local_dir_use_symlinks.lower() - if use_symlinks_lowercase == "true": - self.local_dir_use_symlinks = True - elif use_symlinks_lowercase == "false": - self.local_dir_use_symlinks = False - elif use_symlinks_lowercase == "auto": - self.local_dir_use_symlinks = "auto" - else: - raise ValueError( - f"'{args.local_dir_use_symlinks}' is not a valid value for `local_dir_use_symlinks`. It must be either" - " 'auto', 'True' or 'False'." - ) - - def run(self) -> None: - if self.quiet: - disable_progress_bars() - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - print(self._download()) # Print path to downloaded files - enable_progress_bars() - else: - logging.set_verbosity_info() - print(self._download()) # Print path to downloaded files - logging.set_verbosity_warning() - - def _download(self) -> str: - # Warn user if patterns are ignored - if len(self.filenames) > 0: - if self.include is not None and len(self.include) > 0: - warnings.warn("Ignoring `--include` since filenames have being explicitly set.") - if self.exclude is not None and len(self.exclude) > 0: - warnings.warn("Ignoring `--exclude` since filenames have being explicitly set.") - - if not HF_HUB_ENABLE_HF_TRANSFER: - logger.info( - "Consider using `hf_transfer` for faster downloads. This solution comes with some limitations. See" - " https://huggingface.co/docs/huggingface_hub/hf_transfer for more details." 
- ) - - # Single file to download: use `hf_hub_download` - if len(self.filenames) == 1: - return hf_hub_download( - repo_id=self.repo_id, - repo_type=self.repo_type, - revision=self.revision, - filename=self.filenames[0], - cache_dir=self.cache_dir, - resume_download=self.resume_download, - force_download=self.force_download, - token=self.token, - local_dir=self.local_dir, - local_dir_use_symlinks=self.local_dir_use_symlinks, - library_name="huggingface-cli", - ) - - # Otherwise: use `snapshot_download` to ensure all files come from the same revision - elif len(self.filenames) == 0: - allow_patterns = self.include - ignore_patterns = self.exclude - else: - allow_patterns = self.filenames - ignore_patterns = None - - return snapshot_download( - repo_id=self.repo_id, - repo_type=self.repo_type, - revision=self.revision, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - resume_download=self.resume_download, - force_download=self.force_download, - cache_dir=self.cache_dir, - token=self.token, - local_dir=self.local_dir, - local_dir_use_symlinks=self.local_dir_use_symlinks, - library_name="huggingface-cli", - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/__init__.py deleted file mode 100644 index 388dd9174f356c74d6cdd6ad9a8b1ad603234420..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -""" -This is a module for defining private helpers which do not depend on the -rest of NumPy. - -Everything in here must be self-contained so that it can be -imported anywhere else without creating circular imports. -If a utility requires the import of NumPy, it probably belongs -in ``numpy.core``. -""" - -from ._convertions import asunicode, asbytes - - -def set_module(module): - """Private decorator for overriding __module__ on a function or class. - - Example usage:: - - @set_module('numpy') - def example(): - pass - - assert example.__module__ == 'numpy' - """ - def decorator(func): - if module is not None: - func.__module__ = module - return func - return decorator diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_repr.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_repr.py deleted file mode 100644 index ea2cdd4fab86ada36d6d5804204c4a479a3e1603..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_repr.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas.core.arrays.floating import ( - Float32Dtype, - Float64Dtype, -) - - -def test_dtypes(dtype): - # smoke tests on auto dtype construction - - np.dtype(dtype.type).kind == "f" - assert dtype.name is not None - - -@pytest.mark.parametrize( - "dtype, expected", - [(Float32Dtype(), "Float32Dtype()"), (Float64Dtype(), "Float64Dtype()")], -) -def test_repr_dtype(dtype, expected): - assert repr(dtype) == expected - - -def test_repr_array(): - result = repr(pd.array([1.0, None, 3.0])) - expected = "<FloatingArray>\n[1.0, <NA>, 3.0]\nLength: 3, dtype: Float64" - assert result == expected - - -def test_repr_array_long(): - data = pd.array([1.0, 2.0, None] * 1000) - expected = """<FloatingArray> -[ 1.0, 2.0, <NA>, 1.0, 2.0, <NA>, 1.0, 2.0, <NA>, 1.0, - ... 
- <NA>, 1.0, 2.0, <NA>, 1.0, 2.0, <NA>, 1.0, 2.0, <NA>] -Length: 3000, dtype: Float64""" - result = repr(data) - assert result == expected - - -def test_frame_repr(data_missing): - df = pd.DataFrame({"A": data_missing}) - result = repr(df) - expected = " A\n0 <NA>\n1 0.1" - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_style.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_style.py deleted file mode 100644 index 665bda15724fd67dc9917509d2b95957b03107e3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_style.py +++ /dev/null @@ -1,157 +0,0 @@ -import pytest - -from pandas import Series - -pytest.importorskip("matplotlib") -from pandas.plotting._matplotlib.style import get_standard_colors - - -class TestGetStandardColors: - @pytest.mark.parametrize( - "num_colors, expected", - [ - (3, ["red", "green", "blue"]), - (5, ["red", "green", "blue", "red", "green"]), - (7, ["red", "green", "blue", "red", "green", "blue", "red"]), - (2, ["red", "green"]), - (1, ["red"]), - ], - ) - def test_default_colors_named_from_prop_cycle(self, num_colors, expected): - import matplotlib as mpl - from matplotlib.pyplot import cycler - - mpl_params = { - "axes.prop_cycle": cycler(color=["red", "green", "blue"]), - } - with mpl.rc_context(rc=mpl_params): - result = get_standard_colors(num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( - "num_colors, expected", - [ - (1, ["b"]), - (3, ["b", "g", "r"]), - (4, ["b", "g", "r", "y"]), - (5, ["b", "g", "r", "y", "b"]), - (7, ["b", "g", "r", "y", "b", "g", "r"]), - ], - ) - def test_default_colors_named_from_prop_cycle_string(self, num_colors, expected): - import matplotlib as mpl - from matplotlib.pyplot import cycler - - mpl_params = { - "axes.prop_cycle": cycler(color="bgry"), - } - with mpl.rc_context(rc=mpl_params): - result = get_standard_colors(num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( - "num_colors, expected_name", - [ - (1, ["C0"]), - (3, ["C0", "C1", "C2"]), - ( - 12, - [ - "C0", - "C1", - "C2", - "C3", - "C4", - "C5", - "C6", - "C7", - "C8", - "C9", - "C0", - "C1", - ], - ), - ], - ) - def test_default_colors_named_undefined_prop_cycle(self, num_colors, expected_name): - import matplotlib as mpl - import matplotlib.colors as mcolors - - with mpl.rc_context(rc={}): - expected = [mcolors.to_hex(x) for x in expected_name] - result = get_standard_colors(num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( - "num_colors, expected", - [ - (1, ["red", "green", (0.1, 0.2, 0.3)]), - (2, ["red", "green", (0.1, 0.2, 0.3)]), - (3, ["red", "green", (0.1, 0.2, 0.3)]), - (4, ["red", "green", (0.1, 0.2, 0.3), "red"]), - ], - ) - def test_user_input_color_sequence(self, num_colors, expected): - color = ["red", "green", (0.1, 0.2, 0.3)] - result = get_standard_colors(color=color, num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( - "num_colors, expected", - [ - (1, ["r", "g", "b", "k"]), - (2, ["r", "g", "b", "k"]), - (3, ["r", "g", "b", "k"]), - (4, ["r", "g", "b", "k"]), - (5, ["r", "g", "b", "k", "r"]), - (6, ["r", "g", "b", "k", "r", "g"]), - ], - ) - def test_user_input_color_string(self, num_colors, expected): - color = "rgbk" - result = get_standard_colors(color=color, num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( 
"num_colors, expected", - [ - (1, [(0.1, 0.2, 0.3)]), - (2, [(0.1, 0.2, 0.3), (0.1, 0.2, 0.3)]), - (3, [(0.1, 0.2, 0.3), (0.1, 0.2, 0.3), (0.1, 0.2, 0.3)]), - ], - ) - def test_user_input_color_floats(self, num_colors, expected): - color = (0.1, 0.2, 0.3) - result = get_standard_colors(color=color, num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize( - "color, num_colors, expected", - [ - ("Crimson", 1, ["Crimson"]), - ("DodgerBlue", 2, ["DodgerBlue", "DodgerBlue"]), - ("firebrick", 3, ["firebrick", "firebrick", "firebrick"]), - ], - ) - def test_user_input_named_color_string(self, color, num_colors, expected): - result = get_standard_colors(color=color, num_colors=num_colors) - assert result == expected - - @pytest.mark.parametrize("color", ["", [], (), Series([], dtype="object")]) - def test_empty_color_raises(self, color): - with pytest.raises(ValueError, match="Invalid color argument"): - get_standard_colors(color=color, num_colors=1) - - @pytest.mark.parametrize( - "color", - [ - "bad_color", - ("red", "green", "bad_color"), - (0.1,), - (0.1, 0.2), - (0.1, 0.2, 0.3, 0.4, 0.5), # must be either 3 or 4 floats - ], - ) - def test_bad_color_raises(self, color): - with pytest.raises(ValueError, match="Invalid color"): - get_standard_colors(color=color, num_colors=5) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py deleted file mode 100644 index aefb5c842c2f55075546065cfba3c66d137e8184..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py +++ /dev/null @@ -1,73 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from . 
import base - - -class Filter(base.Filter): - """Injects ``<meta charset=ENCODING>`` tag into head of document""" - def __init__(self, source, encoding): - """Creates a Filter - - :arg source: the source token stream - - :arg encoding: the encoding to set - - """ - base.Filter.__init__(self, source) - self.encoding = encoding - - def __iter__(self): - state = "pre_head" - meta_found = (self.encoding is None) - pending = [] - - for token in base.Filter.__iter__(self): - type = token["type"] - if type == "StartTag": - if token["name"].lower() == "head": - state = "in_head" - - elif type == "EmptyTag": - if token["name"].lower() == "meta": - # replace charset with actual encoding - has_http_equiv_content_type = False - for (namespace, name), value in token["data"].items(): - if namespace is not None: - continue - elif name.lower() == 'charset': - token["data"][(namespace, name)] = self.encoding - meta_found = True - break - elif name == 'http-equiv' and value.lower() == 'content-type': - has_http_equiv_content_type = True - else: - if has_http_equiv_content_type and (None, "content") in token["data"]: - token["data"][(None, "content")] = 'text/html; charset=%s' % self.encoding - meta_found = True - - elif token["name"].lower() == "head" and not meta_found: - # insert meta into empty head - yield {"type": "StartTag", "name": "head", - "data": token["data"]} - yield {"type": "EmptyTag", "name": "meta", - "data": {(None, "charset"): self.encoding}} - yield {"type": "EndTag", "name": "head"} - meta_found = True - continue - - elif type == "EndTag": - if token["name"].lower() == "head" and pending: - # insert meta into head (if necessary) and flush pending queue - yield pending.pop(0) - if not meta_found: - yield {"type": "EmptyTag", "name": "meta", - "data": {(None, "charset"): self.encoding}} - while pending: - yield pending.pop(0) - meta_found = True - state = "post_head" - - if state == "in_head": - pending.append(token) - else: - yield token diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_mapping.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_mapping.py deleted file mode 100644 index 800fff193ed8e1254aff946f24763fecf0139858..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_mapping.py +++ /dev/null @@ -1,572 +0,0 @@ -# Automatically generated by scripts/gen_mapfiles.py. -# DO NOT EDIT BY HAND; run `tox -e mapfiles` instead. 
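The generated mapping deleted below is normally not imported directly; callers reach it through the public `pygments.lexers` helpers, which resolve an alias or a filename pattern against these `(module, name, aliases, filenames, mimetypes)` tuples. As a rough illustrative sketch (not part of the deleted `_mapping.py`, and the sample filename and snippet are assumptions), the registry is typically exercised like this:

```python
# Illustrative sketch only: how the generated LEXERS registry is reached
# through the public pygments API (not part of the deleted _mapping.py).
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import get_lexer_by_name, get_lexer_for_filename

lexer = get_lexer_by_name("python")              # resolved via an alias entry
same_lexer = get_lexer_for_filename("example.py")  # resolved via a filename glob
print(highlight("print('hi')", lexer, TerminalFormatter()))
```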
- -LEXERS = { - 'ABAPLexer': ('pygments.lexers.business', 'ABAP', ('abap',), ('*.abap', '*.ABAP'), ('text/x-abap',)), - 'AMDGPULexer': ('pygments.lexers.amdgpu', 'AMDGPU', ('amdgpu',), ('*.isa',), ()), - 'APLLexer': ('pygments.lexers.apl', 'APL', ('apl',), ('*.apl', '*.aplf', '*.aplo', '*.apln', '*.aplc', '*.apli', '*.dyalog'), ()), - 'AbnfLexer': ('pygments.lexers.grammar_notation', 'ABNF', ('abnf',), ('*.abnf',), ('text/x-abnf',)), - 'ActionScript3Lexer': ('pygments.lexers.actionscript', 'ActionScript 3', ('actionscript3', 'as3'), ('*.as',), ('application/x-actionscript3', 'text/x-actionscript3', 'text/actionscript3')), - 'ActionScriptLexer': ('pygments.lexers.actionscript', 'ActionScript', ('actionscript', 'as'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')), - 'AdaLexer': ('pygments.lexers.ada', 'Ada', ('ada', 'ada95', 'ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)), - 'AdlLexer': ('pygments.lexers.archetype', 'ADL', ('adl',), ('*.adl', '*.adls', '*.adlf', '*.adlx'), ()), - 'AgdaLexer': ('pygments.lexers.haskell', 'Agda', ('agda',), ('*.agda',), ('text/x-agda',)), - 'AheuiLexer': ('pygments.lexers.esoteric', 'Aheui', ('aheui',), ('*.aheui',), ()), - 'AlloyLexer': ('pygments.lexers.dsls', 'Alloy', ('alloy',), ('*.als',), ('text/x-alloy',)), - 'AmbientTalkLexer': ('pygments.lexers.ambient', 'AmbientTalk', ('ambienttalk', 'ambienttalk/2', 'at'), ('*.at',), ('text/x-ambienttalk',)), - 'AmplLexer': ('pygments.lexers.ampl', 'Ampl', ('ampl',), ('*.run',), ()), - 'Angular2HtmlLexer': ('pygments.lexers.templates', 'HTML + Angular2', ('html+ng2',), ('*.ng2',), ()), - 'Angular2Lexer': ('pygments.lexers.templates', 'Angular2', ('ng2',), (), ()), - 'AntlrActionScriptLexer': ('pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-actionscript', 'antlr-as'), ('*.G', '*.g'), ()), - 'AntlrCSharpLexer': ('pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()), - 'AntlrCppLexer': ('pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()), - 'AntlrJavaLexer': ('pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()), - 'AntlrLexer': ('pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()), - 'AntlrObjectiveCLexer': ('pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()), - 'AntlrPerlLexer': ('pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()), - 'AntlrPythonLexer': ('pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()), - 'AntlrRubyLexer': ('pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()), - 'ApacheConfLexer': ('pygments.lexers.configs', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)), - 'AppleScriptLexer': ('pygments.lexers.scripting', 'AppleScript', ('applescript',), ('*.applescript',), ()), - 'ArduinoLexer': ('pygments.lexers.c_like', 'Arduino', ('arduino',), ('*.ino',), ('text/x-arduino',)), - 'ArrowLexer': ('pygments.lexers.arrow', 'Arrow', ('arrow',), ('*.arw',), ()), - 'ArturoLexer': ('pygments.lexers.arturo', 'Arturo', ('arturo', 'art'), ('*.art',), ()), - 'AscLexer': ('pygments.lexers.asc', 'ASCII armored', ('asc', 'pem'), ('*.asc', '*.pem', 'id_dsa', 'id_ecdsa', 'id_ecdsa_sk', 'id_ed25519', 'id_ed25519_sk', 'id_rsa'), ('application/pgp-keys', 'application/pgp-encrypted', 
'application/pgp-signature', 'application/pem-certificate-chain')), - 'Asn1Lexer': ('pygments.lexers.asn1', 'ASN.1', ('asn1',), ('*.asn1',), ()), - 'AspectJLexer': ('pygments.lexers.jvm', 'AspectJ', ('aspectj',), ('*.aj',), ('text/x-aspectj',)), - 'AsymptoteLexer': ('pygments.lexers.graphics', 'Asymptote', ('asymptote', 'asy'), ('*.asy',), ('text/x-asymptote',)), - 'AugeasLexer': ('pygments.lexers.configs', 'Augeas', ('augeas',), ('*.aug',), ()), - 'AutoItLexer': ('pygments.lexers.automation', 'AutoIt', ('autoit',), ('*.au3',), ('text/x-autoit',)), - 'AutohotkeyLexer': ('pygments.lexers.automation', 'autohotkey', ('autohotkey', 'ahk'), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)), - 'AwkLexer': ('pygments.lexers.textedit', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk',)), - 'BBCBasicLexer': ('pygments.lexers.basic', 'BBC Basic', ('bbcbasic',), ('*.bbc',), ()), - 'BBCodeLexer': ('pygments.lexers.markup', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)), - 'BCLexer': ('pygments.lexers.algebra', 'BC', ('bc',), ('*.bc',), ()), - 'BQNLexer': ('pygments.lexers.bqn', 'BQN', ('bqn',), ('*.bqn',), ()), - 'BSTLexer': ('pygments.lexers.bibtex', 'BST', ('bst', 'bst-pybtex'), ('*.bst',), ()), - 'BareLexer': ('pygments.lexers.bare', 'BARE', ('bare',), ('*.bare',), ()), - 'BaseMakefileLexer': ('pygments.lexers.make', 'Base Makefile', ('basemake',), (), ()), - 'BashLexer': ('pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'zsh', 'shell'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', '.kshrc', 'kshrc', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript', 'text/x-shellscript')), - 'BashSessionLexer': ('pygments.lexers.shell', 'Bash Session', ('console', 'shell-session'), ('*.sh-session', '*.shell-session'), ('application/x-shell-session', 'application/x-sh-session')), - 'BatchLexer': ('pygments.lexers.shell', 'Batchfile', ('batch', 'bat', 'dosbatch', 'winbatch'), ('*.bat', '*.cmd'), ('application/x-dos-batch',)), - 'BddLexer': ('pygments.lexers.bdd', 'Bdd', ('bdd',), ('*.feature',), ('text/x-bdd',)), - 'BefungeLexer': ('pygments.lexers.esoteric', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)), - 'BerryLexer': ('pygments.lexers.berry', 'Berry', ('berry', 'be'), ('*.be',), ('text/x-berry', 'application/x-berry')), - 'BibTeXLexer': ('pygments.lexers.bibtex', 'BibTeX', ('bibtex', 'bib'), ('*.bib',), ('text/x-bibtex',)), - 'BlitzBasicLexer': ('pygments.lexers.basic', 'BlitzBasic', ('blitzbasic', 'b3d', 'bplus'), ('*.bb', '*.decls'), ('text/x-bb',)), - 'BlitzMaxLexer': ('pygments.lexers.basic', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)), - 'BlueprintLexer': ('pygments.lexers.blueprint', 'Blueprint', ('blueprint',), ('*.blp',), ('text/x-blueprint',)), - 'BnfLexer': ('pygments.lexers.grammar_notation', 'BNF', ('bnf',), ('*.bnf',), ('text/x-bnf',)), - 'BoaLexer': ('pygments.lexers.boa', 'Boa', ('boa',), ('*.boa',), ()), - 'BooLexer': ('pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)), - 'BoogieLexer': ('pygments.lexers.verification', 'Boogie', ('boogie',), ('*.bpl',), ()), - 'BrainfuckLexer': ('pygments.lexers.esoteric', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)), - 'BugsLexer': ('pygments.lexers.modeling', 'BUGS', ('bugs', 'winbugs', 'openbugs'), ('*.bug',), ()), - 'CAmkESLexer': ('pygments.lexers.esoteric', 'CAmkES', ('camkes', 'idl4'), ('*.camkes', '*.idl4'), ()), - 
'CLexer': ('pygments.lexers.c_cpp', 'C', ('c',), ('*.c', '*.h', '*.idc', '*.x[bp]m'), ('text/x-chdr', 'text/x-csrc', 'image/x-xbitmap', 'image/x-xpixmap')), - 'CMakeLexer': ('pygments.lexers.make', 'CMake', ('cmake',), ('*.cmake', 'CMakeLists.txt'), ('text/x-cmake',)), - 'CObjdumpLexer': ('pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)), - 'CPSALexer': ('pygments.lexers.lisp', 'CPSA', ('cpsa',), ('*.cpsa',), ()), - 'CSSUL4Lexer': ('pygments.lexers.ul4', 'CSS+UL4', ('css+ul4',), ('*.cssul4',), ()), - 'CSharpAspxLexer': ('pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()), - 'CSharpLexer': ('pygments.lexers.dotnet', 'C#', ('csharp', 'c#', 'cs'), ('*.cs',), ('text/x-csharp',)), - 'Ca65Lexer': ('pygments.lexers.asm', 'ca65 assembler', ('ca65',), ('*.s',), ()), - 'CadlLexer': ('pygments.lexers.archetype', 'cADL', ('cadl',), ('*.cadl',), ()), - 'CapDLLexer': ('pygments.lexers.esoteric', 'CapDL', ('capdl',), ('*.cdl',), ()), - 'CapnProtoLexer': ('pygments.lexers.capnproto', "Cap'n Proto", ('capnp',), ('*.capnp',), ()), - 'CarbonLexer': ('pygments.lexers.carbon', 'Carbon', ('carbon',), ('*.carbon',), ('text/x-carbon',)), - 'CbmBasicV2Lexer': ('pygments.lexers.basic', 'CBM BASIC V2', ('cbmbas',), ('*.bas',), ()), - 'CddlLexer': ('pygments.lexers.cddl', 'CDDL', ('cddl',), ('*.cddl',), ('text/x-cddl',)), - 'CeylonLexer': ('pygments.lexers.jvm', 'Ceylon', ('ceylon',), ('*.ceylon',), ('text/x-ceylon',)), - 'Cfengine3Lexer': ('pygments.lexers.configs', 'CFEngine3', ('cfengine3', 'cf3'), ('*.cf',), ()), - 'ChaiscriptLexer': ('pygments.lexers.scripting', 'ChaiScript', ('chaiscript', 'chai'), ('*.chai',), ('text/x-chaiscript', 'application/x-chaiscript')), - 'ChapelLexer': ('pygments.lexers.chapel', 'Chapel', ('chapel', 'chpl'), ('*.chpl',), ()), - 'CharmciLexer': ('pygments.lexers.c_like', 'Charmci', ('charmci',), ('*.ci',), ()), - 'CheetahHtmlLexer': ('pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire', 'htmlcheetah'), (), ('text/html+cheetah', 'text/html+spitfire')), - 'CheetahJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Cheetah', ('javascript+cheetah', 'js+cheetah', 'javascript+spitfire', 'js+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')), - 'CheetahLexer': ('pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')), - 'CheetahXmlLexer': ('pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')), - 'CirruLexer': ('pygments.lexers.webmisc', 'Cirru', ('cirru',), ('*.cirru',), ('text/x-cirru',)), - 'ClayLexer': ('pygments.lexers.c_like', 'Clay', ('clay',), ('*.clay',), ('text/x-clay',)), - 'CleanLexer': ('pygments.lexers.clean', 'Clean', ('clean',), ('*.icl', '*.dcl'), ()), - 'ClojureLexer': ('pygments.lexers.jvm', 'Clojure', ('clojure', 'clj'), ('*.clj', '*.cljc'), ('text/x-clojure', 'application/x-clojure')), - 'ClojureScriptLexer': ('pygments.lexers.jvm', 'ClojureScript', ('clojurescript', 'cljs'), ('*.cljs',), ('text/x-clojurescript', 'application/x-clojurescript')), - 'CobolFreeformatLexer': ('pygments.lexers.business', 'COBOLFree', ('cobolfree',), ('*.cbl', '*.CBL'), ()), - 'CobolLexer': ('pygments.lexers.business', 'COBOL', ('cobol',), 
('*.cob', '*.COB', '*.cpy', '*.CPY'), ('text/x-cobol',)), - 'CoffeeScriptLexer': ('pygments.lexers.javascript', 'CoffeeScript', ('coffeescript', 'coffee-script', 'coffee'), ('*.coffee',), ('text/coffeescript',)), - 'ColdfusionCFCLexer': ('pygments.lexers.templates', 'Coldfusion CFC', ('cfc',), ('*.cfc',), ()), - 'ColdfusionHtmlLexer': ('pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml'), ('application/x-coldfusion',)), - 'ColdfusionLexer': ('pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()), - 'Comal80Lexer': ('pygments.lexers.comal', 'COMAL-80', ('comal', 'comal80'), ('*.cml', '*.comal'), ()), - 'CommonLispLexer': ('pygments.lexers.lisp', 'Common Lisp', ('common-lisp', 'cl', 'lisp'), ('*.cl', '*.lisp'), ('text/x-common-lisp',)), - 'ComponentPascalLexer': ('pygments.lexers.oberon', 'Component Pascal', ('componentpascal', 'cp'), ('*.cp', '*.cps'), ('text/x-component-pascal',)), - 'CoqLexer': ('pygments.lexers.theorem', 'Coq', ('coq',), ('*.v',), ('text/x-coq',)), - 'CplintLexer': ('pygments.lexers.cplint', 'cplint', ('cplint',), ('*.ecl', '*.prolog', '*.pro', '*.pl', '*.P', '*.lpad', '*.cpl'), ('text/x-cplint',)), - 'CppLexer': ('pygments.lexers.c_cpp', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx', '*.C', '*.H', '*.cp', '*.CPP', '*.tpp'), ('text/x-c++hdr', 'text/x-c++src')), - 'CppObjdumpLexer': ('pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)), - 'CrmshLexer': ('pygments.lexers.dsls', 'Crmsh', ('crmsh', 'pcmk'), ('*.crmsh', '*.pcmk'), ()), - 'CrocLexer': ('pygments.lexers.d', 'Croc', ('croc',), ('*.croc',), ('text/x-crocsrc',)), - 'CryptolLexer': ('pygments.lexers.haskell', 'Cryptol', ('cryptol', 'cry'), ('*.cry',), ('text/x-cryptol',)), - 'CrystalLexer': ('pygments.lexers.crystal', 'Crystal', ('cr', 'crystal'), ('*.cr',), ('text/x-crystal',)), - 'CsoundDocumentLexer': ('pygments.lexers.csound', 'Csound Document', ('csound-document', 'csound-csd'), ('*.csd',), ()), - 'CsoundOrchestraLexer': ('pygments.lexers.csound', 'Csound Orchestra', ('csound', 'csound-orc'), ('*.orc', '*.udo'), ()), - 'CsoundScoreLexer': ('pygments.lexers.csound', 'Csound Score', ('csound-score', 'csound-sco'), ('*.sco',), ()), - 'CssDjangoLexer': ('pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), ('*.css.j2', '*.css.jinja2'), ('text/css+django', 'text/css+jinja')), - 'CssErbLexer': ('pygments.lexers.templates', 'CSS+Ruby', ('css+ruby', 'css+erb'), (), ('text/css+ruby',)), - 'CssGenshiLexer': ('pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)), - 'CssLexer': ('pygments.lexers.css', 'CSS', ('css',), ('*.css',), ('text/css',)), - 'CssPhpLexer': ('pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)), - 'CssSmartyLexer': ('pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)), - 'CudaLexer': ('pygments.lexers.c_like', 'CUDA', ('cuda', 'cu'), ('*.cu', '*.cuh'), ('text/x-cuda',)), - 'CypherLexer': ('pygments.lexers.graph', 'Cypher', ('cypher',), ('*.cyp', '*.cypher'), ()), - 'CythonLexer': ('pygments.lexers.python', 'Cython', ('cython', 'pyx', 'pyrex'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')), - 'DLexer': ('pygments.lexers.d', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)), - 'DObjdumpLexer': ('pygments.lexers.asm', 'd-objdump', ('d-objdump',), 
('*.d-objdump',), ('text/x-d-objdump',)), - 'DarcsPatchLexer': ('pygments.lexers.diff', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()), - 'DartLexer': ('pygments.lexers.javascript', 'Dart', ('dart',), ('*.dart',), ('text/x-dart',)), - 'Dasm16Lexer': ('pygments.lexers.asm', 'DASM16', ('dasm16',), ('*.dasm16', '*.dasm'), ('text/x-dasm16',)), - 'DaxLexer': ('pygments.lexers.dax', 'Dax', ('dax',), ('*.dax',), ()), - 'DebianControlLexer': ('pygments.lexers.installers', 'Debian Control file', ('debcontrol', 'control'), ('control',), ()), - 'DelphiLexer': ('pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas', '*.dpr'), ('text/x-pascal',)), - 'DesktopLexer': ('pygments.lexers.configs', 'Desktop file', ('desktop',), ('*.desktop',), ()), - 'DevicetreeLexer': ('pygments.lexers.devicetree', 'Devicetree', ('devicetree', 'dts'), ('*.dts', '*.dtsi'), ('text/x-c',)), - 'DgLexer': ('pygments.lexers.python', 'dg', ('dg',), ('*.dg',), ('text/x-dg',)), - 'DiffLexer': ('pygments.lexers.diff', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')), - 'DjangoLexer': ('pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')), - 'DnsZoneLexer': ('pygments.lexers.dns', 'Zone', ('zone',), ('*.zone',), ('text/dns',)), - 'DockerLexer': ('pygments.lexers.configs', 'Docker', ('docker', 'dockerfile'), ('Dockerfile', '*.docker'), ('text/x-dockerfile-config',)), - 'DtdLexer': ('pygments.lexers.html', 'DTD', ('dtd',), ('*.dtd',), ('application/xml-dtd',)), - 'DuelLexer': ('pygments.lexers.webmisc', 'Duel', ('duel', 'jbst', 'jsonml+bst'), ('*.duel', '*.jbst'), ('text/x-duel', 'text/x-jbst')), - 'DylanConsoleLexer': ('pygments.lexers.dylan', 'Dylan session', ('dylan-console', 'dylan-repl'), ('*.dylan-console',), ('text/x-dylan-console',)), - 'DylanLexer': ('pygments.lexers.dylan', 'Dylan', ('dylan',), ('*.dylan', '*.dyl', '*.intr'), ('text/x-dylan',)), - 'DylanLidLexer': ('pygments.lexers.dylan', 'DylanLID', ('dylan-lid', 'lid'), ('*.lid', '*.hdp'), ('text/x-dylan-lid',)), - 'ECLLexer': ('pygments.lexers.ecl', 'ECL', ('ecl',), ('*.ecl',), ('application/x-ecl',)), - 'ECLexer': ('pygments.lexers.c_like', 'eC', ('ec',), ('*.ec', '*.eh'), ('text/x-echdr', 'text/x-ecsrc')), - 'EarlGreyLexer': ('pygments.lexers.javascript', 'Earl Grey', ('earl-grey', 'earlgrey', 'eg'), ('*.eg',), ('text/x-earl-grey',)), - 'EasytrieveLexer': ('pygments.lexers.scripting', 'Easytrieve', ('easytrieve',), ('*.ezt', '*.mac'), ('text/x-easytrieve',)), - 'EbnfLexer': ('pygments.lexers.parsers', 'EBNF', ('ebnf',), ('*.ebnf',), ('text/x-ebnf',)), - 'EiffelLexer': ('pygments.lexers.eiffel', 'Eiffel', ('eiffel',), ('*.e',), ('text/x-eiffel',)), - 'ElixirConsoleLexer': ('pygments.lexers.erlang', 'Elixir iex session', ('iex',), (), ('text/x-elixir-shellsession',)), - 'ElixirLexer': ('pygments.lexers.erlang', 'Elixir', ('elixir', 'ex', 'exs'), ('*.ex', '*.eex', '*.exs', '*.leex'), ('text/x-elixir',)), - 'ElmLexer': ('pygments.lexers.elm', 'Elm', ('elm',), ('*.elm',), ('text/x-elm',)), - 'ElpiLexer': ('pygments.lexers.elpi', 'Elpi', ('elpi',), ('*.elpi',), ('text/x-elpi',)), - 'EmacsLispLexer': ('pygments.lexers.lisp', 'EmacsLisp', ('emacs-lisp', 'elisp', 'emacs'), ('*.el',), ('text/x-elisp', 'application/x-elisp')), - 'EmailLexer': ('pygments.lexers.email', 'E-mail', ('email', 'eml'), ('*.eml',), ('message/rfc822',)), - 'ErbLexer': ('pygments.lexers.templates', 'ERB', ('erb',), (), 
('application/x-ruby-templating',)), - 'ErlangLexer': ('pygments.lexers.erlang', 'Erlang', ('erlang',), ('*.erl', '*.hrl', '*.es', '*.escript'), ('text/x-erlang',)), - 'ErlangShellLexer': ('pygments.lexers.erlang', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)), - 'EvoqueHtmlLexer': ('pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), ('*.html',), ('text/html+evoque',)), - 'EvoqueLexer': ('pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)), - 'EvoqueXmlLexer': ('pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), ('*.xml',), ('application/xml+evoque',)), - 'ExeclineLexer': ('pygments.lexers.shell', 'execline', ('execline',), ('*.exec',), ()), - 'EzhilLexer': ('pygments.lexers.ezhil', 'Ezhil', ('ezhil',), ('*.n',), ('text/x-ezhil',)), - 'FSharpLexer': ('pygments.lexers.dotnet', 'F#', ('fsharp', 'f#'), ('*.fs', '*.fsi', '*.fsx'), ('text/x-fsharp',)), - 'FStarLexer': ('pygments.lexers.ml', 'FStar', ('fstar',), ('*.fst', '*.fsti'), ('text/x-fstar',)), - 'FactorLexer': ('pygments.lexers.factor', 'Factor', ('factor',), ('*.factor',), ('text/x-factor',)), - 'FancyLexer': ('pygments.lexers.ruby', 'Fancy', ('fancy', 'fy'), ('*.fy', '*.fancypack'), ('text/x-fancysrc',)), - 'FantomLexer': ('pygments.lexers.fantom', 'Fantom', ('fan',), ('*.fan',), ('application/x-fantom',)), - 'FelixLexer': ('pygments.lexers.felix', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)), - 'FennelLexer': ('pygments.lexers.lisp', 'Fennel', ('fennel', 'fnl'), ('*.fnl',), ()), - 'FiftLexer': ('pygments.lexers.fift', 'Fift', ('fift', 'fif'), ('*.fif',), ()), - 'FishShellLexer': ('pygments.lexers.shell', 'Fish', ('fish', 'fishshell'), ('*.fish', '*.load'), ('application/x-fish',)), - 'FlatlineLexer': ('pygments.lexers.dsls', 'Flatline', ('flatline',), (), ('text/x-flatline',)), - 'FloScriptLexer': ('pygments.lexers.floscript', 'FloScript', ('floscript', 'flo'), ('*.flo',), ()), - 'ForthLexer': ('pygments.lexers.forth', 'Forth', ('forth',), ('*.frt', '*.fs'), ('application/x-forth',)), - 'FortranFixedLexer': ('pygments.lexers.fortran', 'FortranFixed', ('fortranfixed',), ('*.f', '*.F'), ()), - 'FortranLexer': ('pygments.lexers.fortran', 'Fortran', ('fortran', 'f90'), ('*.f03', '*.f90', '*.F03', '*.F90'), ('text/x-fortran',)), - 'FoxProLexer': ('pygments.lexers.foxpro', 'FoxPro', ('foxpro', 'vfp', 'clipper', 'xbase'), ('*.PRG', '*.prg'), ()), - 'FreeFemLexer': ('pygments.lexers.freefem', 'Freefem', ('freefem',), ('*.edp',), ('text/x-freefem',)), - 'FuncLexer': ('pygments.lexers.func', 'FunC', ('func', 'fc'), ('*.fc', '*.func'), ()), - 'FutharkLexer': ('pygments.lexers.futhark', 'Futhark', ('futhark',), ('*.fut',), ('text/x-futhark',)), - 'GAPConsoleLexer': ('pygments.lexers.algebra', 'GAP session', ('gap-console', 'gap-repl'), ('*.tst',), ()), - 'GAPLexer': ('pygments.lexers.algebra', 'GAP', ('gap',), ('*.g', '*.gd', '*.gi', '*.gap'), ()), - 'GDScriptLexer': ('pygments.lexers.gdscript', 'GDScript', ('gdscript', 'gd'), ('*.gd',), ('text/x-gdscript', 'application/x-gdscript')), - 'GLShaderLexer': ('pygments.lexers.graphics', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)), - 'GSQLLexer': ('pygments.lexers.gsql', 'GSQL', ('gsql',), ('*.gsql',), ()), - 'GasLexer': ('pygments.lexers.asm', 'GAS', ('gas', 'asm'), ('*.s', '*.S'), ('text/x-gas',)), - 'GcodeLexer': ('pygments.lexers.gcodelexer', 'g-code', ('gcode',), ('*.gcode',), ()), - 'GenshiLexer': ('pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 
'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')), - 'GenshiTextLexer': ('pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')), - 'GettextLexer': ('pygments.lexers.textfmts', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')), - 'GherkinLexer': ('pygments.lexers.testing', 'Gherkin', ('gherkin', 'cucumber'), ('*.feature',), ('text/x-gherkin',)), - 'GnuplotLexer': ('pygments.lexers.graphics', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)), - 'GoLexer': ('pygments.lexers.go', 'Go', ('go', 'golang'), ('*.go',), ('text/x-gosrc',)), - 'GoloLexer': ('pygments.lexers.jvm', 'Golo', ('golo',), ('*.golo',), ()), - 'GoodDataCLLexer': ('pygments.lexers.business', 'GoodData-CL', ('gooddata-cl',), ('*.gdc',), ('text/x-gooddata-cl',)), - 'GosuLexer': ('pygments.lexers.jvm', 'Gosu', ('gosu',), ('*.gs', '*.gsx', '*.gsp', '*.vark'), ('text/x-gosu',)), - 'GosuTemplateLexer': ('pygments.lexers.jvm', 'Gosu Template', ('gst',), ('*.gst',), ('text/x-gosu-template',)), - 'GraphQLLexer': ('pygments.lexers.graphql', 'GraphQL', ('graphql',), ('*.graphql',), ()), - 'GraphvizLexer': ('pygments.lexers.graphviz', 'Graphviz', ('graphviz', 'dot'), ('*.gv', '*.dot'), ('text/x-graphviz', 'text/vnd.graphviz')), - 'GroffLexer': ('pygments.lexers.markup', 'Groff', ('groff', 'nroff', 'man'), ('*.[1-9]', '*.man', '*.1p', '*.3pm'), ('application/x-troff', 'text/troff')), - 'GroovyLexer': ('pygments.lexers.jvm', 'Groovy', ('groovy',), ('*.groovy', '*.gradle'), ('text/x-groovy',)), - 'HLSLShaderLexer': ('pygments.lexers.graphics', 'HLSL', ('hlsl',), ('*.hlsl', '*.hlsli'), ('text/x-hlsl',)), - 'HTMLUL4Lexer': ('pygments.lexers.ul4', 'HTML+UL4', ('html+ul4',), ('*.htmlul4',), ()), - 'HamlLexer': ('pygments.lexers.html', 'Haml', ('haml',), ('*.haml',), ('text/x-haml',)), - 'HandlebarsHtmlLexer': ('pygments.lexers.templates', 'HTML+Handlebars', ('html+handlebars',), ('*.handlebars', '*.hbs'), ('text/html+handlebars', 'text/x-handlebars-template')), - 'HandlebarsLexer': ('pygments.lexers.templates', 'Handlebars', ('handlebars',), (), ()), - 'HaskellLexer': ('pygments.lexers.haskell', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)), - 'HaxeLexer': ('pygments.lexers.haxe', 'Haxe', ('haxe', 'hxsl', 'hx'), ('*.hx', '*.hxsl'), ('text/haxe', 'text/x-haxe', 'text/x-hx')), - 'HexdumpLexer': ('pygments.lexers.hexdump', 'Hexdump', ('hexdump',), (), ()), - 'HsailLexer': ('pygments.lexers.asm', 'HSAIL', ('hsail', 'hsa'), ('*.hsail',), ('text/x-hsail',)), - 'HspecLexer': ('pygments.lexers.haskell', 'Hspec', ('hspec',), ('*Spec.hs',), ()), - 'HtmlDjangoLexer': ('pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja', 'htmldjango'), ('*.html.j2', '*.htm.j2', '*.xhtml.j2', '*.html.jinja2', '*.htm.jinja2', '*.xhtml.jinja2'), ('text/html+django', 'text/html+jinja')), - 'HtmlGenshiLexer': ('pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)), - 'HtmlLexer': ('pygments.lexers.html', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')), - 'HtmlPhpLexer': ('pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')), - 'HtmlSmartyLexer': ('pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), 
('text/html+smarty',)), - 'HttpLexer': ('pygments.lexers.textfmts', 'HTTP', ('http',), (), ()), - 'HxmlLexer': ('pygments.lexers.haxe', 'Hxml', ('haxeml', 'hxml'), ('*.hxml',), ()), - 'HyLexer': ('pygments.lexers.lisp', 'Hy', ('hylang',), ('*.hy',), ('text/x-hy', 'application/x-hy')), - 'HybrisLexer': ('pygments.lexers.scripting', 'Hybris', ('hybris', 'hy'), ('*.hy', '*.hyb'), ('text/x-hybris', 'application/x-hybris')), - 'IDLLexer': ('pygments.lexers.idl', 'IDL', ('idl',), ('*.pro',), ('text/idl',)), - 'IconLexer': ('pygments.lexers.unicon', 'Icon', ('icon',), ('*.icon', '*.ICON'), ()), - 'IdrisLexer': ('pygments.lexers.haskell', 'Idris', ('idris', 'idr'), ('*.idr',), ('text/x-idris',)), - 'IgorLexer': ('pygments.lexers.igor', 'Igor', ('igor', 'igorpro'), ('*.ipf',), ('text/ipf',)), - 'Inform6Lexer': ('pygments.lexers.int_fiction', 'Inform 6', ('inform6', 'i6'), ('*.inf',), ()), - 'Inform6TemplateLexer': ('pygments.lexers.int_fiction', 'Inform 6 template', ('i6t',), ('*.i6t',), ()), - 'Inform7Lexer': ('pygments.lexers.int_fiction', 'Inform 7', ('inform7', 'i7'), ('*.ni', '*.i7x'), ()), - 'IniLexer': ('pygments.lexers.configs', 'INI', ('ini', 'cfg', 'dosini'), ('*.ini', '*.cfg', '*.inf', '.editorconfig'), ('text/x-ini', 'text/inf')), - 'IoLexer': ('pygments.lexers.iolang', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)), - 'IokeLexer': ('pygments.lexers.jvm', 'Ioke', ('ioke', 'ik'), ('*.ik',), ('text/x-iokesrc',)), - 'IrcLogsLexer': ('pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)), - 'IsabelleLexer': ('pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)), - 'JLexer': ('pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)), - 'JMESPathLexer': ('pygments.lexers.jmespath', 'JMESPath', ('jmespath', 'jp'), ('*.jp',), ()), - 'JSLTLexer': ('pygments.lexers.jslt', 'JSLT', ('jslt',), ('*.jslt',), ('text/x-jslt',)), - 'JagsLexer': ('pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()), - 'JasminLexer': ('pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()), - 'JavaLexer': ('pygments.lexers.jvm', 'Java', ('java',), ('*.java',), ('text/x-java',)), - 'JavascriptDjangoLexer': ('pygments.lexers.templates', 'JavaScript+Django/Jinja', ('javascript+django', 'js+django', 'javascript+jinja', 'js+jinja'), ('*.js.j2', '*.js.jinja2'), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')), - 'JavascriptErbLexer': ('pygments.lexers.templates', 'JavaScript+Ruby', ('javascript+ruby', 'js+ruby', 'javascript+erb', 'js+erb'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')), - 'JavascriptGenshiLexer': ('pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')), - 'JavascriptLexer': ('pygments.lexers.javascript', 'JavaScript', ('javascript', 'js'), ('*.js', '*.jsm', '*.mjs', '*.cjs'), ('application/javascript', 'application/x-javascript', 'text/x-javascript', 'text/javascript')), - 'JavascriptPhpLexer': ('pygments.lexers.templates', 'JavaScript+PHP', ('javascript+php', 'js+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')), - 'JavascriptSmartyLexer': ('pygments.lexers.templates', 'JavaScript+Smarty', ('javascript+smarty', 'js+smarty'), (), 
('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')), - 'JavascriptUL4Lexer': ('pygments.lexers.ul4', 'Javascript+UL4', ('js+ul4',), ('*.jsul4',), ()), - 'JclLexer': ('pygments.lexers.scripting', 'JCL', ('jcl',), ('*.jcl',), ('text/x-jcl',)), - 'JsgfLexer': ('pygments.lexers.grammar_notation', 'JSGF', ('jsgf',), ('*.jsgf',), ('application/jsgf', 'application/x-jsgf', 'text/jsgf')), - 'JsonBareObjectLexer': ('pygments.lexers.data', 'JSONBareObject', (), (), ()), - 'JsonLdLexer': ('pygments.lexers.data', 'JSON-LD', ('jsonld', 'json-ld'), ('*.jsonld',), ('application/ld+json',)), - 'JsonLexer': ('pygments.lexers.data', 'JSON', ('json', 'json-object'), ('*.json', 'Pipfile.lock'), ('application/json', 'application/json-object')), - 'JsonnetLexer': ('pygments.lexers.jsonnet', 'Jsonnet', ('jsonnet',), ('*.jsonnet', '*.libsonnet'), ()), - 'JspLexer': ('pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)), - 'JuliaConsoleLexer': ('pygments.lexers.julia', 'Julia console', ('jlcon', 'julia-repl'), (), ()), - 'JuliaLexer': ('pygments.lexers.julia', 'Julia', ('julia', 'jl'), ('*.jl',), ('text/x-julia', 'application/x-julia')), - 'JuttleLexer': ('pygments.lexers.javascript', 'Juttle', ('juttle',), ('*.juttle',), ('application/juttle', 'application/x-juttle', 'text/x-juttle', 'text/juttle')), - 'KLexer': ('pygments.lexers.q', 'K', ('k',), ('*.k',), ()), - 'KalLexer': ('pygments.lexers.javascript', 'Kal', ('kal',), ('*.kal',), ('text/kal', 'application/kal')), - 'KconfigLexer': ('pygments.lexers.configs', 'Kconfig', ('kconfig', 'menuconfig', 'linux-config', 'kernel-config'), ('Kconfig*', '*Config.in*', 'external.in*', 'standard-modules.in'), ('text/x-kconfig',)), - 'KernelLogLexer': ('pygments.lexers.textfmts', 'Kernel log', ('kmsg', 'dmesg'), ('*.kmsg', '*.dmesg'), ()), - 'KokaLexer': ('pygments.lexers.haskell', 'Koka', ('koka',), ('*.kk', '*.kki'), ('text/x-koka',)), - 'KotlinLexer': ('pygments.lexers.jvm', 'Kotlin', ('kotlin',), ('*.kt', '*.kts'), ('text/x-kotlin',)), - 'KuinLexer': ('pygments.lexers.kuin', 'Kuin', ('kuin',), ('*.kn',), ()), - 'LSLLexer': ('pygments.lexers.scripting', 'LSL', ('lsl',), ('*.lsl',), ('text/x-lsl',)), - 'LassoCssLexer': ('pygments.lexers.templates', 'CSS+Lasso', ('css+lasso',), (), ('text/css+lasso',)), - 'LassoHtmlLexer': ('pygments.lexers.templates', 'HTML+Lasso', ('html+lasso',), (), ('text/html+lasso', 'application/x-httpd-lasso', 'application/x-httpd-lasso[89]')), - 'LassoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Lasso', ('javascript+lasso', 'js+lasso'), (), ('application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso')), - 'LassoLexer': ('pygments.lexers.javascript', 'Lasso', ('lasso', 'lassoscript'), ('*.lasso', '*.lasso[89]'), ('text/x-lasso',)), - 'LassoXmlLexer': ('pygments.lexers.templates', 'XML+Lasso', ('xml+lasso',), (), ('application/xml+lasso',)), - 'LeanLexer': ('pygments.lexers.theorem', 'Lean', ('lean',), ('*.lean',), ('text/x-lean',)), - 'LessCssLexer': ('pygments.lexers.css', 'LessCss', ('less',), ('*.less',), ('text/x-less-css',)), - 'LighttpdConfLexer': ('pygments.lexers.configs', 'Lighttpd configuration file', ('lighttpd', 'lighty'), ('lighttpd.conf',), ('text/x-lighttpd-conf',)), - 'LilyPondLexer': ('pygments.lexers.lilypond', 'LilyPond', ('lilypond',), ('*.ly',), ()), - 'LimboLexer': ('pygments.lexers.inferno', 'Limbo', ('limbo',), ('*.b',), ('text/limbo',)), - 'LiquidLexer': ('pygments.lexers.templates', 'liquid', ('liquid',), 
('*.liquid',), ()), - 'LiterateAgdaLexer': ('pygments.lexers.haskell', 'Literate Agda', ('literate-agda', 'lagda'), ('*.lagda',), ('text/x-literate-agda',)), - 'LiterateCryptolLexer': ('pygments.lexers.haskell', 'Literate Cryptol', ('literate-cryptol', 'lcryptol', 'lcry'), ('*.lcry',), ('text/x-literate-cryptol',)), - 'LiterateHaskellLexer': ('pygments.lexers.haskell', 'Literate Haskell', ('literate-haskell', 'lhaskell', 'lhs'), ('*.lhs',), ('text/x-literate-haskell',)), - 'LiterateIdrisLexer': ('pygments.lexers.haskell', 'Literate Idris', ('literate-idris', 'lidris', 'lidr'), ('*.lidr',), ('text/x-literate-idris',)), - 'LiveScriptLexer': ('pygments.lexers.javascript', 'LiveScript', ('livescript', 'live-script'), ('*.ls',), ('text/livescript',)), - 'LlvmLexer': ('pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)), - 'LlvmMirBodyLexer': ('pygments.lexers.asm', 'LLVM-MIR Body', ('llvm-mir-body',), (), ()), - 'LlvmMirLexer': ('pygments.lexers.asm', 'LLVM-MIR', ('llvm-mir',), ('*.mir',), ()), - 'LogosLexer': ('pygments.lexers.objective', 'Logos', ('logos',), ('*.x', '*.xi', '*.xm', '*.xmi'), ('text/x-logos',)), - 'LogtalkLexer': ('pygments.lexers.prolog', 'Logtalk', ('logtalk',), ('*.lgt', '*.logtalk'), ('text/x-logtalk',)), - 'LuaLexer': ('pygments.lexers.scripting', 'Lua', ('lua',), ('*.lua', '*.wlua'), ('text/x-lua', 'application/x-lua')), - 'MCFunctionLexer': ('pygments.lexers.minecraft', 'MCFunction', ('mcfunction', 'mcf'), ('*.mcfunction',), ('text/mcfunction',)), - 'MCSchemaLexer': ('pygments.lexers.minecraft', 'MCSchema', ('mcschema',), ('*.mcschema',), ('text/mcschema',)), - 'MIMELexer': ('pygments.lexers.mime', 'MIME', ('mime',), (), ('multipart/mixed', 'multipart/related', 'multipart/alternative')), - 'MIPSLexer': ('pygments.lexers.mips', 'MIPS', ('mips',), ('*.mips', '*.MIPS'), ()), - 'MOOCodeLexer': ('pygments.lexers.scripting', 'MOOCode', ('moocode', 'moo'), ('*.moo',), ('text/x-moocode',)), - 'MSDOSSessionLexer': ('pygments.lexers.shell', 'MSDOS Session', ('doscon',), (), ()), - 'Macaulay2Lexer': ('pygments.lexers.macaulay2', 'Macaulay2', ('macaulay2',), ('*.m2',), ()), - 'MakefileLexer': ('pygments.lexers.make', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', '*.mk', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)), - 'MakoCssLexer': ('pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)), - 'MakoHtmlLexer': ('pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)), - 'MakoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Mako', ('javascript+mako', 'js+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')), - 'MakoLexer': ('pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)), - 'MakoXmlLexer': ('pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)), - 'MaqlLexer': ('pygments.lexers.business', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')), - 'MarkdownLexer': ('pygments.lexers.markup', 'Markdown', ('markdown', 'md'), ('*.md', '*.markdown'), ('text/x-markdown',)), - 'MaskLexer': ('pygments.lexers.javascript', 'Mask', ('mask',), ('*.mask',), ('text/x-mask',)), - 'MasonLexer': ('pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)), - 'MathematicaLexer': ('pygments.lexers.algebra', 'Mathematica', ('mathematica', 'mma', 'nb'), ('*.nb', 
'*.cdf', '*.nbp', '*.ma'), ('application/mathematica', 'application/vnd.wolfram.mathematica', 'application/vnd.wolfram.mathematica.package', 'application/vnd.wolfram.cdf')), - 'MatlabLexer': ('pygments.lexers.matlab', 'Matlab', ('matlab',), ('*.m',), ('text/matlab',)), - 'MatlabSessionLexer': ('pygments.lexers.matlab', 'Matlab session', ('matlabsession',), (), ()), - 'MaximaLexer': ('pygments.lexers.maxima', 'Maxima', ('maxima', 'macsyma'), ('*.mac', '*.max'), ()), - 'MesonLexer': ('pygments.lexers.meson', 'Meson', ('meson', 'meson.build'), ('meson.build', 'meson_options.txt'), ('text/x-meson',)), - 'MiniDLexer': ('pygments.lexers.d', 'MiniD', ('minid',), (), ('text/x-minidsrc',)), - 'MiniScriptLexer': ('pygments.lexers.scripting', 'MiniScript', ('miniscript', 'ms'), ('*.ms',), ('text/x-minicript', 'application/x-miniscript')), - 'ModelicaLexer': ('pygments.lexers.modeling', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)), - 'Modula2Lexer': ('pygments.lexers.modula2', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)), - 'MoinWikiLexer': ('pygments.lexers.markup', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)), - 'MonkeyLexer': ('pygments.lexers.basic', 'Monkey', ('monkey',), ('*.monkey',), ('text/x-monkey',)), - 'MonteLexer': ('pygments.lexers.monte', 'Monte', ('monte',), ('*.mt',), ()), - 'MoonScriptLexer': ('pygments.lexers.scripting', 'MoonScript', ('moonscript', 'moon'), ('*.moon',), ('text/x-moonscript', 'application/x-moonscript')), - 'MoselLexer': ('pygments.lexers.mosel', 'Mosel', ('mosel',), ('*.mos',), ()), - 'MozPreprocCssLexer': ('pygments.lexers.markup', 'CSS+mozpreproc', ('css+mozpreproc',), ('*.css.in',), ()), - 'MozPreprocHashLexer': ('pygments.lexers.markup', 'mozhashpreproc', ('mozhashpreproc',), (), ()), - 'MozPreprocJavascriptLexer': ('pygments.lexers.markup', 'Javascript+mozpreproc', ('javascript+mozpreproc',), ('*.js.in',), ()), - 'MozPreprocPercentLexer': ('pygments.lexers.markup', 'mozpercentpreproc', ('mozpercentpreproc',), (), ()), - 'MozPreprocXulLexer': ('pygments.lexers.markup', 'XUL+mozpreproc', ('xul+mozpreproc',), ('*.xul.in',), ()), - 'MqlLexer': ('pygments.lexers.c_like', 'MQL', ('mql', 'mq4', 'mq5', 'mql4', 'mql5'), ('*.mq4', '*.mq5', '*.mqh'), ('text/x-mql',)), - 'MscgenLexer': ('pygments.lexers.dsls', 'Mscgen', ('mscgen', 'msc'), ('*.msc',), ()), - 'MuPADLexer': ('pygments.lexers.algebra', 'MuPAD', ('mupad',), ('*.mu',), ()), - 'MxmlLexer': ('pygments.lexers.actionscript', 'MXML', ('mxml',), ('*.mxml',), ()), - 'MySqlLexer': ('pygments.lexers.sql', 'MySQL', ('mysql',), (), ('text/x-mysql',)), - 'MyghtyCssLexer': ('pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)), - 'MyghtyHtmlLexer': ('pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)), - 'MyghtyJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Myghty', ('javascript+myghty', 'js+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')), - 'MyghtyLexer': ('pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)), - 'MyghtyXmlLexer': ('pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)), - 'NCLLexer': ('pygments.lexers.ncl', 'NCL', ('ncl',), ('*.ncl',), ('text/ncl',)), - 'NSISLexer': ('pygments.lexers.installers', 'NSIS', ('nsis', 'nsi', 'nsh'), ('*.nsi', '*.nsh'), ('text/x-nsis',)), - 'NasmLexer': ('pygments.lexers.asm', 
'NASM', ('nasm',), ('*.asm', '*.ASM', '*.nasm'), ('text/x-nasm',)), - 'NasmObjdumpLexer': ('pygments.lexers.asm', 'objdump-nasm', ('objdump-nasm',), ('*.objdump-intel',), ('text/x-nasm-objdump',)), - 'NemerleLexer': ('pygments.lexers.dotnet', 'Nemerle', ('nemerle',), ('*.n',), ('text/x-nemerle',)), - 'NesCLexer': ('pygments.lexers.c_like', 'nesC', ('nesc',), ('*.nc',), ('text/x-nescsrc',)), - 'NestedTextLexer': ('pygments.lexers.configs', 'NestedText', ('nestedtext', 'nt'), ('*.nt',), ()), - 'NewLispLexer': ('pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl', '*.kif'), ('text/x-newlisp', 'application/x-newlisp')), - 'NewspeakLexer': ('pygments.lexers.smalltalk', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)), - 'NginxConfLexer': ('pygments.lexers.configs', 'Nginx configuration file', ('nginx',), ('nginx.conf',), ('text/x-nginx-conf',)), - 'NimrodLexer': ('pygments.lexers.nimrod', 'Nimrod', ('nimrod', 'nim'), ('*.nim', '*.nimrod'), ('text/x-nim',)), - 'NitLexer': ('pygments.lexers.nit', 'Nit', ('nit',), ('*.nit',), ()), - 'NixLexer': ('pygments.lexers.nix', 'Nix', ('nixos', 'nix'), ('*.nix',), ('text/x-nix',)), - 'NodeConsoleLexer': ('pygments.lexers.javascript', 'Node.js REPL console session', ('nodejsrepl',), (), ('text/x-nodejsrepl',)), - 'NotmuchLexer': ('pygments.lexers.textfmts', 'Notmuch', ('notmuch',), (), ()), - 'NuSMVLexer': ('pygments.lexers.smv', 'NuSMV', ('nusmv',), ('*.smv',), ()), - 'NumPyLexer': ('pygments.lexers.python', 'NumPy', ('numpy',), (), ()), - 'ObjdumpLexer': ('pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)), - 'ObjectiveCLexer': ('pygments.lexers.objective', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m', '*.h'), ('text/x-objective-c',)), - 'ObjectiveCppLexer': ('pygments.lexers.objective', 'Objective-C++', ('objective-c++', 'objectivec++', 'obj-c++', 'objc++'), ('*.mm', '*.hh'), ('text/x-objective-c++',)), - 'ObjectiveJLexer': ('pygments.lexers.javascript', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)), - 'OcamlLexer': ('pygments.lexers.ml', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)), - 'OctaveLexer': ('pygments.lexers.matlab', 'Octave', ('octave',), ('*.m',), ('text/octave',)), - 'OdinLexer': ('pygments.lexers.archetype', 'ODIN', ('odin',), ('*.odin',), ('text/odin',)), - 'OmgIdlLexer': ('pygments.lexers.c_like', 'OMG Interface Definition Language', ('omg-idl',), ('*.idl', '*.pidl'), ()), - 'OocLexer': ('pygments.lexers.ooc', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)), - 'OpaLexer': ('pygments.lexers.ml', 'Opa', ('opa',), ('*.opa',), ('text/x-opa',)), - 'OpenEdgeLexer': ('pygments.lexers.business', 'OpenEdge ABL', ('openedge', 'abl', 'progress'), ('*.p', '*.cls'), ('text/x-openedge', 'application/x-openedge')), - 'OpenScadLexer': ('pygments.lexers.openscad', 'OpenSCAD', ('openscad',), ('*.scad',), ('application/x-openscad',)), - 'OutputLexer': ('pygments.lexers.special', 'Text output', ('output',), (), ()), - 'PacmanConfLexer': ('pygments.lexers.configs', 'PacmanConf', ('pacmanconf',), ('pacman.conf',), ()), - 'PanLexer': ('pygments.lexers.dsls', 'Pan', ('pan',), ('*.pan',), ()), - 'ParaSailLexer': ('pygments.lexers.parasail', 'ParaSail', ('parasail',), ('*.psi', '*.psl'), ('text/x-parasail',)), - 'PawnLexer': ('pygments.lexers.pawn', 'Pawn', ('pawn',), ('*.p', '*.pwn', '*.inc'), ('text/x-pawn',)), - 'PegLexer': ('pygments.lexers.grammar_notation', 'PEG', ('peg',), ('*.peg',), 
('text/x-peg',)), - 'Perl6Lexer': ('pygments.lexers.perl', 'Perl6', ('perl6', 'pl6', 'raku'), ('*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6', '*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod', '*.rakutest', '*.rakudoc'), ('text/x-perl6', 'application/x-perl6')), - 'PerlLexer': ('pygments.lexers.perl', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm', '*.t', '*.perl'), ('text/x-perl', 'application/x-perl')), - 'PhixLexer': ('pygments.lexers.phix', 'Phix', ('phix',), ('*.exw',), ('text/x-phix',)), - 'PhpLexer': ('pygments.lexers.php', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]', '*.inc'), ('text/x-php',)), - 'PigLexer': ('pygments.lexers.jvm', 'Pig', ('pig',), ('*.pig',), ('text/x-pig',)), - 'PikeLexer': ('pygments.lexers.c_like', 'Pike', ('pike',), ('*.pike', '*.pmod'), ('text/x-pike',)), - 'PkgConfigLexer': ('pygments.lexers.configs', 'PkgConfig', ('pkgconfig',), ('*.pc',), ()), - 'PlPgsqlLexer': ('pygments.lexers.sql', 'PL/pgSQL', ('plpgsql',), (), ('text/x-plpgsql',)), - 'PointlessLexer': ('pygments.lexers.pointless', 'Pointless', ('pointless',), ('*.ptls',), ()), - 'PonyLexer': ('pygments.lexers.pony', 'Pony', ('pony',), ('*.pony',), ()), - 'PortugolLexer': ('pygments.lexers.pascal', 'Portugol', ('portugol',), ('*.alg', '*.portugol'), ()), - 'PostScriptLexer': ('pygments.lexers.graphics', 'PostScript', ('postscript', 'postscr'), ('*.ps', '*.eps'), ('application/postscript',)), - 'PostgresConsoleLexer': ('pygments.lexers.sql', 'PostgreSQL console (psql)', ('psql', 'postgresql-console', 'postgres-console'), (), ('text/x-postgresql-psql',)), - 'PostgresExplainLexer': ('pygments.lexers.sql', 'PostgreSQL EXPLAIN dialect', ('postgres-explain',), ('*.explain',), ('text/x-postgresql-explain',)), - 'PostgresLexer': ('pygments.lexers.sql', 'PostgreSQL SQL dialect', ('postgresql', 'postgres'), (), ('text/x-postgresql',)), - 'PovrayLexer': ('pygments.lexers.graphics', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)), - 'PowerShellLexer': ('pygments.lexers.shell', 'PowerShell', ('powershell', 'pwsh', 'posh', 'ps1', 'psm1'), ('*.ps1', '*.psm1'), ('text/x-powershell',)), - 'PowerShellSessionLexer': ('pygments.lexers.shell', 'PowerShell Session', ('pwsh-session', 'ps1con'), (), ()), - 'PraatLexer': ('pygments.lexers.praat', 'Praat', ('praat',), ('*.praat', '*.proc', '*.psc'), ()), - 'ProcfileLexer': ('pygments.lexers.procfile', 'Procfile', ('procfile',), ('Procfile',), ()), - 'PrologLexer': ('pygments.lexers.prolog', 'Prolog', ('prolog',), ('*.ecl', '*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)), - 'PromQLLexer': ('pygments.lexers.promql', 'PromQL', ('promql',), ('*.promql',), ()), - 'PropertiesLexer': ('pygments.lexers.configs', 'Properties', ('properties', 'jproperties'), ('*.properties',), ('text/x-java-properties',)), - 'ProtoBufLexer': ('pygments.lexers.dsls', 'Protocol Buffer', ('protobuf', 'proto'), ('*.proto',), ()), - 'PsyshConsoleLexer': ('pygments.lexers.php', 'PsySH console session for PHP', ('psysh',), (), ()), - 'PtxLexer': ('pygments.lexers.ptx', 'PTX', ('ptx',), ('*.ptx',), ('text/x-ptx',)), - 'PugLexer': ('pygments.lexers.html', 'Pug', ('pug', 'jade'), ('*.pug', '*.jade'), ('text/x-pug', 'text/x-jade')), - 'PuppetLexer': ('pygments.lexers.dsls', 'Puppet', ('puppet',), ('*.pp',), ()), - 'PyPyLogLexer': ('pygments.lexers.console', 'PyPy Log', ('pypylog', 'pypy'), ('*.pypylog',), ('application/x-pypylog',)), - 'Python2Lexer': ('pygments.lexers.python', 'Python 2.x', ('python2', 'py2'), (), ('text/x-python2', 'application/x-python2')), - 
'Python2TracebackLexer': ('pygments.lexers.python', 'Python 2.x Traceback', ('py2tb',), ('*.py2tb',), ('text/x-python2-traceback',)), - 'PythonConsoleLexer': ('pygments.lexers.python', 'Python console session', ('pycon',), (), ('text/x-python-doctest',)), - 'PythonLexer': ('pygments.lexers.python', 'Python', ('python', 'py', 'sage', 'python3', 'py3'), ('*.py', '*.pyw', '*.pyi', '*.jy', '*.sage', '*.sc', 'SConstruct', 'SConscript', '*.bzl', 'BUCK', 'BUILD', 'BUILD.bazel', 'WORKSPACE', '*.tac'), ('text/x-python', 'application/x-python', 'text/x-python3', 'application/x-python3')), - 'PythonTracebackLexer': ('pygments.lexers.python', 'Python Traceback', ('pytb', 'py3tb'), ('*.pytb', '*.py3tb'), ('text/x-python-traceback', 'text/x-python3-traceback')), - 'PythonUL4Lexer': ('pygments.lexers.ul4', 'Python+UL4', ('py+ul4',), ('*.pyul4',), ()), - 'QBasicLexer': ('pygments.lexers.basic', 'QBasic', ('qbasic', 'basic'), ('*.BAS', '*.bas'), ('text/basic',)), - 'QLexer': ('pygments.lexers.q', 'Q', ('q',), ('*.q',), ()), - 'QVToLexer': ('pygments.lexers.qvt', 'QVTO', ('qvto', 'qvt'), ('*.qvto',), ()), - 'QlikLexer': ('pygments.lexers.qlik', 'Qlik', ('qlik', 'qlikview', 'qliksense', 'qlikscript'), ('*.qvs', '*.qvw'), ()), - 'QmlLexer': ('pygments.lexers.webmisc', 'QML', ('qml', 'qbs'), ('*.qml', '*.qbs'), ('application/x-qml', 'application/x-qt.qbs+qml')), - 'RConsoleLexer': ('pygments.lexers.r', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()), - 'RNCCompactLexer': ('pygments.lexers.rnc', 'Relax-NG Compact', ('rng-compact', 'rnc'), ('*.rnc',), ()), - 'RPMSpecLexer': ('pygments.lexers.installers', 'RPMSpec', ('spec',), ('*.spec',), ('text/x-rpm-spec',)), - 'RacketLexer': ('pygments.lexers.lisp', 'Racket', ('racket', 'rkt'), ('*.rkt', '*.rktd', '*.rktl'), ('text/x-racket', 'application/x-racket')), - 'RagelCLexer': ('pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()), - 'RagelCppLexer': ('pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()), - 'RagelDLexer': ('pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()), - 'RagelEmbeddedLexer': ('pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()), - 'RagelJavaLexer': ('pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()), - 'RagelLexer': ('pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()), - 'RagelObjectiveCLexer': ('pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()), - 'RagelRubyLexer': ('pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()), - 'RawTokenLexer': ('pygments.lexers.special', 'Raw token data', (), (), ('application/x-pygments-tokens',)), - 'RdLexer': ('pygments.lexers.r', 'Rd', ('rd',), ('*.Rd',), ('text/x-r-doc',)), - 'ReasonLexer': ('pygments.lexers.ml', 'ReasonML', ('reasonml', 'reason'), ('*.re', '*.rei'), ('text/x-reasonml',)), - 'RebolLexer': ('pygments.lexers.rebol', 'REBOL', ('rebol',), ('*.r', '*.r3', '*.reb'), ('text/x-rebol',)), - 'RedLexer': ('pygments.lexers.rebol', 'Red', ('red', 'red/system'), ('*.red', '*.reds'), ('text/x-red', 'text/x-red-system')), - 'RedcodeLexer': ('pygments.lexers.esoteric', 'Redcode', ('redcode',), ('*.cw',), ()), - 'RegeditLexer': ('pygments.lexers.configs', 'reg', ('registry',), ('*.reg',), ('text/x-windows-registry',)), - 'ResourceLexer': ('pygments.lexers.resource', 'ResourceBundle', ('resourcebundle', 'resource'), (), ()), - 'RexxLexer': ('pygments.lexers.scripting', 'Rexx', ('rexx', 'arexx'), ('*.rexx', 
'*.rex', '*.rx', '*.arexx'), ('text/x-rexx',)), - 'RhtmlLexer': ('pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)), - 'RideLexer': ('pygments.lexers.ride', 'Ride', ('ride',), ('*.ride',), ('text/x-ride',)), - 'RitaLexer': ('pygments.lexers.rita', 'Rita', ('rita',), ('*.rita',), ('text/rita',)), - 'RoboconfGraphLexer': ('pygments.lexers.roboconf', 'Roboconf Graph', ('roboconf-graph',), ('*.graph',), ()), - 'RoboconfInstancesLexer': ('pygments.lexers.roboconf', 'Roboconf Instances', ('roboconf-instances',), ('*.instances',), ()), - 'RobotFrameworkLexer': ('pygments.lexers.robotframework', 'RobotFramework', ('robotframework',), ('*.robot', '*.resource'), ('text/x-robotframework',)), - 'RqlLexer': ('pygments.lexers.sql', 'RQL', ('rql',), ('*.rql',), ('text/x-rql',)), - 'RslLexer': ('pygments.lexers.dsls', 'RSL', ('rsl',), ('*.rsl',), ('text/rsl',)), - 'RstLexer': ('pygments.lexers.markup', 'reStructuredText', ('restructuredtext', 'rst', 'rest'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')), - 'RtsLexer': ('pygments.lexers.trafficscript', 'TrafficScript', ('trafficscript', 'rts'), ('*.rts',), ()), - 'RubyConsoleLexer': ('pygments.lexers.ruby', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)), - 'RubyLexer': ('pygments.lexers.ruby', 'Ruby', ('ruby', 'rb', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby', 'Gemfile', 'Vagrantfile'), ('text/x-ruby', 'application/x-ruby')), - 'RustLexer': ('pygments.lexers.rust', 'Rust', ('rust', 'rs'), ('*.rs', '*.rs.in'), ('text/rust', 'text/x-rust')), - 'SASLexer': ('pygments.lexers.sas', 'SAS', ('sas',), ('*.SAS', '*.sas'), ('text/x-sas', 'text/sas', 'application/x-sas')), - 'SLexer': ('pygments.lexers.r', 'S', ('splus', 's', 'r'), ('*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'), ('text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile')), - 'SMLLexer': ('pygments.lexers.ml', 'Standard ML', ('sml',), ('*.sml', '*.sig', '*.fun'), ('text/x-standardml', 'application/x-standardml')), - 'SNBTLexer': ('pygments.lexers.minecraft', 'SNBT', ('snbt',), ('*.snbt',), ('text/snbt',)), - 'SarlLexer': ('pygments.lexers.jvm', 'SARL', ('sarl',), ('*.sarl',), ('text/x-sarl',)), - 'SassLexer': ('pygments.lexers.css', 'Sass', ('sass',), ('*.sass',), ('text/x-sass',)), - 'SaviLexer': ('pygments.lexers.savi', 'Savi', ('savi',), ('*.savi',), ()), - 'ScalaLexer': ('pygments.lexers.jvm', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)), - 'ScamlLexer': ('pygments.lexers.html', 'Scaml', ('scaml',), ('*.scaml',), ('text/x-scaml',)), - 'ScdocLexer': ('pygments.lexers.scdoc', 'scdoc', ('scdoc', 'scd'), ('*.scd', '*.scdoc'), ()), - 'SchemeLexer': ('pygments.lexers.lisp', 'Scheme', ('scheme', 'scm'), ('*.scm', '*.ss'), ('text/x-scheme', 'application/x-scheme')), - 'ScilabLexer': ('pygments.lexers.matlab', 'Scilab', ('scilab',), ('*.sci', '*.sce', '*.tst'), ('text/scilab',)), - 'ScssLexer': ('pygments.lexers.css', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)), - 'SedLexer': ('pygments.lexers.textedit', 'Sed', ('sed', 'gsed', 'ssed'), ('*.sed', '*.[gs]sed'), ('text/x-sed',)), - 'ShExCLexer': ('pygments.lexers.rdf', 'ShExC', ('shexc', 'shex'), ('*.shex',), ('text/shex',)), - 'ShenLexer': ('pygments.lexers.lisp', 'Shen', ('shen',), ('*.shen',), ('text/x-shen', 'application/x-shen')), - 'SieveLexer': ('pygments.lexers.sieve', 'Sieve', ('sieve',), ('*.siv', '*.sieve'), ()), - 'SilverLexer': 
('pygments.lexers.verification', 'Silver', ('silver',), ('*.sil', '*.vpr'), ()), - 'SingularityLexer': ('pygments.lexers.configs', 'Singularity', ('singularity',), ('*.def', 'Singularity'), ()), - 'SlashLexer': ('pygments.lexers.slash', 'Slash', ('slash',), ('*.sla',), ()), - 'SlimLexer': ('pygments.lexers.webmisc', 'Slim', ('slim',), ('*.slim',), ('text/x-slim',)), - 'SlurmBashLexer': ('pygments.lexers.shell', 'Slurm', ('slurm', 'sbatch'), ('*.sl',), ()), - 'SmaliLexer': ('pygments.lexers.dalvik', 'Smali', ('smali',), ('*.smali',), ('text/smali',)), - 'SmalltalkLexer': ('pygments.lexers.smalltalk', 'Smalltalk', ('smalltalk', 'squeak', 'st'), ('*.st',), ('text/x-smalltalk',)), - 'SmartGameFormatLexer': ('pygments.lexers.sgf', 'SmartGameFormat', ('sgf',), ('*.sgf',), ()), - 'SmartyLexer': ('pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)), - 'SmithyLexer': ('pygments.lexers.smithy', 'Smithy', ('smithy',), ('*.smithy',), ()), - 'SnobolLexer': ('pygments.lexers.snobol', 'Snobol', ('snobol',), ('*.snobol',), ('text/x-snobol',)), - 'SnowballLexer': ('pygments.lexers.dsls', 'Snowball', ('snowball',), ('*.sbl',), ()), - 'SolidityLexer': ('pygments.lexers.solidity', 'Solidity', ('solidity',), ('*.sol',), ()), - 'SophiaLexer': ('pygments.lexers.sophia', 'Sophia', ('sophia',), ('*.aes',), ()), - 'SourcePawnLexer': ('pygments.lexers.pawn', 'SourcePawn', ('sp',), ('*.sp',), ('text/x-sourcepawn',)), - 'SourcesListLexer': ('pygments.lexers.installers', 'Debian Sourcelist', ('debsources', 'sourceslist', 'sources.list'), ('sources.list',), ()), - 'SparqlLexer': ('pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)), - 'SpiceLexer': ('pygments.lexers.spice', 'Spice', ('spice', 'spicelang'), ('*.spice',), ('text/x-spice',)), - 'SqlJinjaLexer': ('pygments.lexers.templates', 'SQL+Jinja', ('sql+jinja',), ('*.sql', '*.sql.j2', '*.sql.jinja2'), ()), - 'SqlLexer': ('pygments.lexers.sql', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)), - 'SqliteConsoleLexer': ('pygments.lexers.sql', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)), - 'SquidConfLexer': ('pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)), - 'SrcinfoLexer': ('pygments.lexers.srcinfo', 'Srcinfo', ('srcinfo',), ('.SRCINFO',), ()), - 'SspLexer': ('pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)), - 'StanLexer': ('pygments.lexers.modeling', 'Stan', ('stan',), ('*.stan',), ()), - 'StataLexer': ('pygments.lexers.stata', 'Stata', ('stata', 'do'), ('*.do', '*.ado'), ('text/x-stata', 'text/stata', 'application/x-stata')), - 'SuperColliderLexer': ('pygments.lexers.supercollider', 'SuperCollider', ('supercollider', 'sc'), ('*.sc', '*.scd'), ('application/supercollider', 'text/supercollider')), - 'SwiftLexer': ('pygments.lexers.objective', 'Swift', ('swift',), ('*.swift',), ('text/x-swift',)), - 'SwigLexer': ('pygments.lexers.c_like', 'SWIG', ('swig',), ('*.swg', '*.i'), ('text/swig',)), - 'SystemVerilogLexer': ('pygments.lexers.hdl', 'systemverilog', ('systemverilog', 'sv'), ('*.sv', '*.svh'), ('text/x-systemverilog',)), - 'SystemdLexer': ('pygments.lexers.configs', 'Systemd', ('systemd',), ('*.service', '*.socket', '*.device', '*.mount', '*.automount', '*.swap', '*.target', '*.path', '*.timer', '*.slice', '*.scope'), ()), - 'TAPLexer': ('pygments.lexers.testing', 'TAP', ('tap',), ('*.tap',), ()), - 'TNTLexer': 
('pygments.lexers.tnt', 'Typographic Number Theory', ('tnt',), ('*.tnt',), ()), - 'TOMLLexer': ('pygments.lexers.configs', 'TOML', ('toml',), ('*.toml', 'Pipfile', 'poetry.lock'), ()), - 'Tads3Lexer': ('pygments.lexers.int_fiction', 'TADS 3', ('tads3',), ('*.t',), ()), - 'TalLexer': ('pygments.lexers.tal', 'Tal', ('tal', 'uxntal'), ('*.tal',), ('text/x-uxntal',)), - 'TasmLexer': ('pygments.lexers.asm', 'TASM', ('tasm',), ('*.asm', '*.ASM', '*.tasm'), ('text/x-tasm',)), - 'TclLexer': ('pygments.lexers.tcl', 'Tcl', ('tcl',), ('*.tcl', '*.rvt'), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')), - 'TcshLexer': ('pygments.lexers.shell', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)), - 'TcshSessionLexer': ('pygments.lexers.shell', 'Tcsh Session', ('tcshcon',), (), ()), - 'TeaTemplateLexer': ('pygments.lexers.templates', 'Tea', ('tea',), ('*.tea',), ('text/x-tea',)), - 'TealLexer': ('pygments.lexers.teal', 'teal', ('teal',), ('*.teal',), ()), - 'TeraTermLexer': ('pygments.lexers.teraterm', 'Tera Term macro', ('teratermmacro', 'teraterm', 'ttl'), ('*.ttl',), ('text/x-teratermmacro',)), - 'TermcapLexer': ('pygments.lexers.configs', 'Termcap', ('termcap',), ('termcap', 'termcap.src'), ()), - 'TerminfoLexer': ('pygments.lexers.configs', 'Terminfo', ('terminfo',), ('terminfo', 'terminfo.src'), ()), - 'TerraformLexer': ('pygments.lexers.configs', 'Terraform', ('terraform', 'tf', 'hcl'), ('*.tf', '*.hcl'), ('application/x-tf', 'application/x-terraform')), - 'TexLexer': ('pygments.lexers.markup', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')), - 'TextLexer': ('pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)), - 'ThingsDBLexer': ('pygments.lexers.thingsdb', 'ThingsDB', ('ti', 'thingsdb'), ('*.ti',), ()), - 'ThriftLexer': ('pygments.lexers.dsls', 'Thrift', ('thrift',), ('*.thrift',), ('application/x-thrift',)), - 'TiddlyWiki5Lexer': ('pygments.lexers.markup', 'tiddler', ('tid',), ('*.tid',), ('text/vnd.tiddlywiki',)), - 'TlbLexer': ('pygments.lexers.tlb', 'Tl-b', ('tlb',), ('*.tlb',), ()), - 'TlsLexer': ('pygments.lexers.tls', 'TLS Presentation Language', ('tls',), (), ()), - 'TodotxtLexer': ('pygments.lexers.textfmts', 'Todotxt', ('todotxt',), ('todo.txt', '*.todotxt'), ('text/x-todo',)), - 'TransactSqlLexer': ('pygments.lexers.sql', 'Transact-SQL', ('tsql', 't-sql'), ('*.sql',), ('text/x-tsql',)), - 'TreetopLexer': ('pygments.lexers.parsers', 'Treetop', ('treetop',), ('*.treetop', '*.tt'), ()), - 'TurtleLexer': ('pygments.lexers.rdf', 'Turtle', ('turtle',), ('*.ttl',), ('text/turtle', 'application/x-turtle')), - 'TwigHtmlLexer': ('pygments.lexers.templates', 'HTML+Twig', ('html+twig',), ('*.twig',), ('text/html+twig',)), - 'TwigLexer': ('pygments.lexers.templates', 'Twig', ('twig',), (), ('application/x-twig',)), - 'TypeScriptLexer': ('pygments.lexers.javascript', 'TypeScript', ('typescript', 'ts'), ('*.ts',), ('application/x-typescript', 'text/x-typescript')), - 'TypoScriptCssDataLexer': ('pygments.lexers.typoscript', 'TypoScriptCssData', ('typoscriptcssdata',), (), ()), - 'TypoScriptHtmlDataLexer': ('pygments.lexers.typoscript', 'TypoScriptHtmlData', ('typoscripthtmldata',), (), ()), - 'TypoScriptLexer': ('pygments.lexers.typoscript', 'TypoScript', ('typoscript',), ('*.typoscript',), ('text/x-typoscript',)), - 'UL4Lexer': ('pygments.lexers.ul4', 'UL4', ('ul4',), ('*.ul4',), ()), - 'UcodeLexer': ('pygments.lexers.unicon', 'ucode', ('ucode',), ('*.u', '*.u1', '*.u2'), ()), - 'UniconLexer': 
('pygments.lexers.unicon', 'Unicon', ('unicon',), ('*.icn',), ('text/unicon',)), - 'UnixConfigLexer': ('pygments.lexers.configs', 'Unix/Linux config files', ('unixconfig', 'linuxconfig'), (), ()), - 'UrbiscriptLexer': ('pygments.lexers.urbi', 'UrbiScript', ('urbiscript',), ('*.u',), ('application/x-urbiscript',)), - 'UrlEncodedLexer': ('pygments.lexers.html', 'urlencoded', ('urlencoded',), (), ('application/x-www-form-urlencoded',)), - 'UsdLexer': ('pygments.lexers.usd', 'USD', ('usd', 'usda'), ('*.usd', '*.usda'), ()), - 'VBScriptLexer': ('pygments.lexers.basic', 'VBScript', ('vbscript',), ('*.vbs', '*.VBS'), ()), - 'VCLLexer': ('pygments.lexers.varnish', 'VCL', ('vcl',), ('*.vcl',), ('text/x-vclsrc',)), - 'VCLSnippetLexer': ('pygments.lexers.varnish', 'VCLSnippets', ('vclsnippets', 'vclsnippet'), (), ('text/x-vclsnippet',)), - 'VCTreeStatusLexer': ('pygments.lexers.console', 'VCTreeStatus', ('vctreestatus',), (), ()), - 'VGLLexer': ('pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()), - 'ValaLexer': ('pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)), - 'VbNetAspxLexer': ('pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()), - 'VbNetLexer': ('pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet', 'lobas', 'oobas', 'sobas'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')), - 'VelocityHtmlLexer': ('pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)), - 'VelocityLexer': ('pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()), - 'VelocityXmlLexer': ('pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), ('application/xml+velocity',)), - 'VerifpalLexer': ('pygments.lexers.verifpal', 'Verifpal', ('verifpal',), ('*.vp',), ('text/x-verifpal',)), - 'VerilogLexer': ('pygments.lexers.hdl', 'verilog', ('verilog', 'v'), ('*.v',), ('text/x-verilog',)), - 'VhdlLexer': ('pygments.lexers.hdl', 'vhdl', ('vhdl',), ('*.vhdl', '*.vhd'), ('text/x-vhdl',)), - 'VimLexer': ('pygments.lexers.textedit', 'VimL', ('vim',), ('*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'), ('text/x-vim',)), - 'WDiffLexer': ('pygments.lexers.diff', 'WDiff', ('wdiff',), ('*.wdiff',), ()), - 'WatLexer': ('pygments.lexers.webassembly', 'WebAssembly', ('wast', 'wat'), ('*.wat', '*.wast'), ()), - 'WebIDLLexer': ('pygments.lexers.webidl', 'Web IDL', ('webidl',), ('*.webidl',), ()), - 'WgslLexer': ('pygments.lexers.wgsl', 'WebGPU Shading Language', ('wgsl',), ('*.wgsl',), ('text/wgsl',)), - 'WhileyLexer': ('pygments.lexers.whiley', 'Whiley', ('whiley',), ('*.whiley',), ('text/x-whiley',)), - 'WikitextLexer': ('pygments.lexers.markup', 'Wikitext', ('wikitext', 'mediawiki'), (), ('text/x-wiki',)), - 'WoWTocLexer': ('pygments.lexers.wowtoc', 'World of Warcraft TOC', ('wowtoc',), ('*.toc',), ()), - 'WrenLexer': ('pygments.lexers.wren', 'Wren', ('wren',), ('*.wren',), ()), - 'X10Lexer': ('pygments.lexers.x10', 'X10', ('x10', 'xten'), ('*.x10',), ('text/x-x10',)), - 'XMLUL4Lexer': ('pygments.lexers.ul4', 'XML+UL4', ('xml+ul4',), ('*.xmlul4',), ()), - 'XQueryLexer': ('pygments.lexers.webmisc', 'XQuery', ('xquery', 'xqy', 'xq', 'xql', 'xqm'), ('*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'), ('text/xquery', 'application/xquery')), - 'XmlDjangoLexer': ('pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), ('*.xml.j2', '*.xml.jinja2'), ('application/xml+django', 'application/xml+jinja')), - 
'XmlErbLexer': ('pygments.lexers.templates', 'XML+Ruby', ('xml+ruby', 'xml+erb'), (), ('application/xml+ruby',)), - 'XmlLexer': ('pygments.lexers.html', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl', '*.wsf'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml')), - 'XmlPhpLexer': ('pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)), - 'XmlSmartyLexer': ('pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)), - 'XorgLexer': ('pygments.lexers.xorg', 'Xorg', ('xorg.conf',), ('xorg.conf',), ()), - 'XppLexer': ('pygments.lexers.dotnet', 'X++', ('xpp', 'x++'), ('*.xpp',), ()), - 'XsltLexer': ('pygments.lexers.html', 'XSLT', ('xslt',), ('*.xsl', '*.xslt', '*.xpl'), ('application/xsl+xml', 'application/xslt+xml')), - 'XtendLexer': ('pygments.lexers.jvm', 'Xtend', ('xtend',), ('*.xtend',), ('text/x-xtend',)), - 'XtlangLexer': ('pygments.lexers.lisp', 'xtlang', ('extempore',), ('*.xtm',), ()), - 'YamlJinjaLexer': ('pygments.lexers.templates', 'YAML+Jinja', ('yaml+jinja', 'salt', 'sls'), ('*.sls', '*.yaml.j2', '*.yml.j2', '*.yaml.jinja2', '*.yml.jinja2'), ('text/x-yaml+jinja', 'text/x-sls')), - 'YamlLexer': ('pygments.lexers.data', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',)), - 'YangLexer': ('pygments.lexers.yang', 'YANG', ('yang',), ('*.yang',), ('application/yang',)), - 'YaraLexer': ('pygments.lexers.yara', 'YARA', ('yara', 'yar'), ('*.yar',), ('text/x-yara',)), - 'ZeekLexer': ('pygments.lexers.dsls', 'Zeek', ('zeek', 'bro'), ('*.zeek', '*.bro'), ()), - 'ZephirLexer': ('pygments.lexers.php', 'Zephir', ('zephir',), ('*.zep',), ()), - 'ZigLexer': ('pygments.lexers.zig', 'Zig', ('zig',), ('*.zig',), ('text/zig',)), - 'apdlexer': ('pygments.lexers.apdlexer', 'ANSYS parametric design language', ('ansys', 'apdl'), ('*.ans',), ()), -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/roboconf.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/roboconf.py deleted file mode 100644 index 5d7d76e0bbb992d8db060eea51e29ed8066bf3da..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/roboconf.py +++ /dev/null @@ -1,81 +0,0 @@ -""" - pygments.lexers.roboconf - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for Roboconf DSL. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, words, re -from pygments.token import Text, Operator, Keyword, Name, Comment - -__all__ = ['RoboconfGraphLexer', 'RoboconfInstancesLexer'] - - -class RoboconfGraphLexer(RegexLexer): - """ - Lexer for Roboconf graph files. - - .. versionadded:: 2.1 - """ - name = 'Roboconf Graph' - aliases = ['roboconf-graph'] - filenames = ['*.graph'] - - flags = re.IGNORECASE | re.MULTILINE - tokens = { - 'root': [ - # Skip white spaces - (r'\s+', Text), - - # There is one operator - (r'=', Operator), - - # Keywords - (words(('facet', 'import'), suffix=r'\s*\b', prefix=r'\b'), Keyword), - (words(( - 'installer', 'extends', 'exports', 'imports', 'facets', - 'children'), suffix=r'\s*:?', prefix=r'\b'), Name), - - # Comments - (r'#.*\n', Comment), - - # Default - (r'[^#]', Text), - (r'.*\n', Text) - ] - } - - -class RoboconfInstancesLexer(RegexLexer): - """ - Lexer for Roboconf instances files. - - .. 
versionadded:: 2.1 - """ - name = 'Roboconf Instances' - aliases = ['roboconf-instances'] - filenames = ['*.instances'] - - flags = re.IGNORECASE | re.MULTILINE - tokens = { - 'root': [ - - # Skip white spaces - (r'\s+', Text), - - # Keywords - (words(('instance of', 'import'), suffix=r'\s*\b', prefix=r'\b'), Keyword), - (words(('name', 'count'), suffix=r's*:?', prefix=r'\b'), Name), - (r'\s*[\w.-]+\s*:', Name), - - # Comments - (r'#.*\n', Comment), - - # Default - (r'[^#]', Text), - (r'.*\n', Text) - ] - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/__init__.py deleted file mode 100644 index 3dbc3cf83b55ff1dca212d1bbed272ce7eb4370a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/__init__.py +++ /dev/null @@ -1,325 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - -__doc__ = """ -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and -executing simple grammars, vs. the traditional lex/yacc approach, or the -use of regular expressions. With pyparsing, you don't need to learn -a new syntax for defining grammars or matching expressions - the parsing -module provides a library of classes that you use to construct the -grammar directly in Python. - -Here is a program to parse "Hello, World!" (or any greeting of the form -``", !"``), built up using :class:`Word`, -:class:`Literal`, and :class:`And` elements -(the :meth:`'+'` operators create :class:`And` expressions, -and the strings are auto-converted to :class:`Literal` expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the -self-explanatory class names, and the use of :class:`'+'`, -:class:`'|'`, :class:`'^'` and :class:`'&'` operators. 
- -The :class:`ParseResults` object returned from -:class:`ParserElement.parse_string` can be -accessed as a nested list, a dictionary, or an object with named -attributes. - -The pyparsing module handles some of the problems that are typically -vexing when writing text parsers: - - - extra or missing whitespace (the above program will also handle - "Hello,World!", "Hello , World !", etc.) - - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes :class:`ParserElement` and :class:`ParseResults` to -see the base classes that most other pyparsing -classes inherit from. Use the docstrings for examples of how to: - - - construct literal match expressions from :class:`Literal` and - :class:`CaselessLiteral` classes - - construct character word-group expressions using the :class:`Word` - class - - see how to create repetitive expressions using :class:`ZeroOrMore` - and :class:`OneOrMore` classes - - use :class:`'+'`, :class:`'|'`, :class:`'^'`, - and :class:`'&'` operators to combine simple expressions into - more complex ones - - associate names with your parsed results using - :class:`ParserElement.set_results_name` - - access the parsed data, which is returned as a :class:`ParseResults` - object - - find some helpful expression short-cuts like :class:`DelimitedList` - and :class:`one_of` - - find more useful common expressions in the :class:`pyparsing_common` - namespace class -""" -from typing import NamedTuple - - -class version_info(NamedTuple): - major: int - minor: int - micro: int - releaselevel: str - serial: int - - @property - def __version__(self): - return ( - f"{self.major}.{self.minor}.{self.micro}" - + ( - f"{'r' if self.releaselevel[0] == 'c' else ''}{self.releaselevel[0]}{self.serial}", - "", - )[self.releaselevel == "final"] - ) - - def __str__(self): - return f"{__name__} {self.__version__} / {__version_time__}" - - def __repr__(self): - return f"{__name__}.{type(self).__name__}({', '.join('{}={!r}'.format(*nv) for nv in zip(self._fields, self))})" - - -__version_info__ = version_info(3, 1, 1, "final", 1) -__version_time__ = "29 Jul 2023 22:27 UTC" -__version__ = __version_info__.__version__ -__versionTime__ = __version_time__ -__author__ = "Paul McGuire " - -from .util import * -from .exceptions import * -from .actions import * -from .core import __diag__, __compat__ -from .results import * -from .core import * # type: ignore[misc, assignment] -from .core import _builtin_exprs as core_builtin_exprs -from .helpers import * # type: ignore[misc, assignment] -from .helpers import _builtin_exprs as helper_builtin_exprs - -from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode -from .testing import pyparsing_test as testing -from .common import ( - pyparsing_common as common, - _builtin_exprs as common_builtin_exprs, -) - -# define backward compat synonyms -if "pyparsing_unicode" not in globals(): - pyparsing_unicode = unicode # type: ignore[misc] -if "pyparsing_common" not in globals(): - pyparsing_common = common # type: ignore[misc] -if "pyparsing_test" not in globals(): - pyparsing_test = testing # type: ignore[misc] - -core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs - - -__all__ = [ - "__version__", - "__version_time__", - "__author__", - "__compat__", - "__diag__", - "And", - "AtLineStart", - "AtStringStart", - "CaselessKeyword", - "CaselessLiteral", - "CharsNotIn", - "CloseMatch", - "Combine", - "DelimitedList", - "Dict", - "Each", - "Empty", - "FollowedBy", - "Forward", - "GoToColumn", 
- "Group", - "IndentedBlock", - "Keyword", - "LineEnd", - "LineStart", - "Literal", - "Located", - "PrecededBy", - "MatchFirst", - "NoMatch", - "NotAny", - "OneOrMore", - "OnlyOnce", - "OpAssoc", - "Opt", - "Optional", - "Or", - "ParseBaseException", - "ParseElementEnhance", - "ParseException", - "ParseExpression", - "ParseFatalException", - "ParseResults", - "ParseSyntaxException", - "ParserElement", - "PositionToken", - "QuotedString", - "RecursiveGrammarException", - "Regex", - "SkipTo", - "StringEnd", - "StringStart", - "Suppress", - "Token", - "TokenConverter", - "White", - "Word", - "WordEnd", - "WordStart", - "ZeroOrMore", - "Char", - "alphanums", - "alphas", - "alphas8bit", - "any_close_tag", - "any_open_tag", - "autoname_elements", - "c_style_comment", - "col", - "common_html_entity", - "condition_as_parse_action", - "counted_array", - "cpp_style_comment", - "dbl_quoted_string", - "dbl_slash_comment", - "delimited_list", - "dict_of", - "empty", - "hexnums", - "html_comment", - "identchars", - "identbodychars", - "infix_notation", - "java_style_comment", - "line", - "line_end", - "line_start", - "lineno", - "make_html_tags", - "make_xml_tags", - "match_only_at_col", - "match_previous_expr", - "match_previous_literal", - "nested_expr", - "null_debug_action", - "nums", - "one_of", - "original_text_for", - "printables", - "punc8bit", - "pyparsing_common", - "pyparsing_test", - "pyparsing_unicode", - "python_style_comment", - "quoted_string", - "remove_quotes", - "replace_with", - "replace_html_entity", - "rest_of_line", - "sgl_quoted_string", - "srange", - "string_end", - "string_start", - "token_map", - "trace_parse_action", - "ungroup", - "unicode_set", - "unicode_string", - "with_attribute", - "with_class", - # pre-PEP8 compatibility names - "__versionTime__", - "anyCloseTag", - "anyOpenTag", - "cStyleComment", - "commonHTMLEntity", - "conditionAsParseAction", - "countedArray", - "cppStyleComment", - "dblQuotedString", - "dblSlashComment", - "delimitedList", - "dictOf", - "htmlComment", - "indentedBlock", - "infixNotation", - "javaStyleComment", - "lineEnd", - "lineStart", - "locatedExpr", - "makeHTMLTags", - "makeXMLTags", - "matchOnlyAtCol", - "matchPreviousExpr", - "matchPreviousLiteral", - "nestedExpr", - "nullDebugAction", - "oneOf", - "opAssoc", - "originalTextFor", - "pythonStyleComment", - "quotedString", - "removeQuotes", - "replaceHTMLEntity", - "replaceWith", - "restOfLine", - "sglQuotedString", - "stringEnd", - "stringStart", - "tokenMap", - "traceParseAction", - "unicodeString", - "withAttribute", - "withClass", - "common", - "unicode", - "testing", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/__init__.py deleted file mode 100644 index cde6d8971dbc297f54967b9857c715c995c8a79c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.27.0" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/compatibility.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/compatibility.py deleted file mode 100644 index cb9b02c86ad3807571f6eac6feed32db2080eb17..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/compatibility.py +++ /dev/null @@ -1,33 +0,0 @@ -from __future__ import 
annotations - -import asyncio -import sys -from typing import Any, Dict - - -__all__ = ["asyncio_timeout", "loop_if_py_lt_38"] - - -if sys.version_info[:2] >= (3, 8): - - def loop_if_py_lt_38(loop: asyncio.AbstractEventLoop) -> Dict[str, Any]: - """ - Helper for the removal of the loop argument in Python 3.10. - - """ - return {} - -else: - - def loop_if_py_lt_38(loop: asyncio.AbstractEventLoop) -> Dict[str, Any]: - """ - Helper for the removal of the loop argument in Python 3.10. - - """ - return {"loop": loop} - - -if sys.version_info[:2] >= (3, 11): - from asyncio import timeout as asyncio_timeout # noqa: F401 -else: - from .async_timeout import timeout as asyncio_timeout # noqa: F401 diff --git a/spaces/pvanand/RASA_moodbot/actions/__init__.py b/spaces/pvanand/RASA_moodbot/actions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pycoming/bingo/src/pages/api/blob.ts b/spaces/pycoming/bingo/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Chief Architect Bonus Catalogs X5 Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Chief Architect Bonus Catalogs X5 Torrent.md deleted file mode 100644 index c76c3099591b27ff6bd4d32fddd3b1e5e8189f95..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Chief Architect Bonus Catalogs X5 Torrent.md +++ /dev/null @@ -1,7 +0,0 @@ -
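The small websockets compatibility module removed in the diff above is easy to misread, so here is a brief usage sketch (not part of the diff) of how its two exports are typically consumed; the queue-building helper is a hypothetical call site invented for the example.

```python
# Illustrative only: how the two exports of websockets.legacy.compatibility are used.
import asyncio

from websockets.legacy.compatibility import asyncio_timeout, loop_if_py_lt_38


async def recv_with_deadline(queue: asyncio.Queue, seconds: float = 5.0):
    # asyncio_timeout resolves to asyncio.timeout on Python 3.11+, and to the
    # vendored async_timeout backport on older interpreters.
    async with asyncio_timeout(seconds):
        return await queue.get()


def make_queue(loop: asyncio.AbstractEventLoop) -> asyncio.Queue:
    # Hypothetical call site: expands to Queue(loop=loop) only on Python < 3.8,
    # where asyncio primitives still accepted an explicit loop argument.
    return asyncio.Queue(**loop_if_py_lt_38(loop))
```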
        -

when to use chief: the business model is that design matters more than price, you care about the final result, better brands of cabinets, most any amount of custom, need to draw accurate spaces/buildings, need detailed drawings for a manufacturer or installer, need to move walls, need demo/construction plans, need electric plans.

        -

        chief architect bonus catalogs x5 torrent


        DOWNLOADhttps://geags.com/2uCqci



        -

        cad tools for productivity & precision
        chief architect has a powerful cad software engine that includes tools for lines, polylines, splines, arcs and solids to produce objects that range from custom entry columns to a deck ledger detail. quickly manipulate objects with multiple copy, align, reflect and replicate at specific intervals. a cad-to-walls tool imports autocad files and provides mapping for layers so you can quickly see the model in 3d. draw custom cad details, import as dwg/dxf/pdf, or choose from over 500 cad details in the premium ssa catalog to overlay on your design.

        -

i was asked to compare the two programs by someone this week since i have fairly extensive experience with both. a number of years ago i'd written something on this forum in response to that question, i think it was around x4 or 5, that was generally positive toward chief but acknowledged some important weaknesses. several versions later and the improvement to the program for kitchens is measurable. so since i had to write something anyway i thought i'd share it here. the first part is what i slammed together yesterday morning rather quickly. the second part i added today to fill it out a bit, again quickly, so excuse the lack of editing and any drivel included.

        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dream Plan Home Design Software Crack 48.md b/spaces/quidiaMuxgu/Expedit-SAM/Dream Plan Home Design Software Crack 48.md deleted file mode 100644 index 43a9a12445d61fe78789650c749c27d213a1e120..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dream Plan Home Design Software Crack 48.md +++ /dev/null @@ -1,6 +0,0 @@ -

        dream plan home design software crack 48


        DOWNLOAD > https://geags.com/2uCrq1



        -
-October 19, 2020 - Useful software to easily plan the design of your home. You can create your house design in 3D very easily with this software. In addition to 3D home design, you can also scan and create photos of your space. This software can also help you create 3D models, 2D graphics plans, 3D images, 3D drawings, building block drawings, shelving and cabinet drawings, shelf drawings, furniture drawings, wall drawings, door opening drawings, ceiling drawings, floor drawings and more.
        -
        -
        -

        diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Action Essentials 2 Free Download Full 12 Learn from 60 Minutes of After Effects Training and Bonus Sound FX.md b/spaces/raedeXanto/academic-chatgpt-beta/Action Essentials 2 Free Download Full 12 Learn from 60 Minutes of After Effects Training and Bonus Sound FX.md deleted file mode 100644 index 129fb454a016231f6ad05e06b8c6f9ca4c833721..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Action Essentials 2 Free Download Full 12 Learn from 60 Minutes of After Effects Training and Bonus Sound FX.md +++ /dev/null @@ -1,21 +0,0 @@ - -

        Action Essentials 2 Free Download Full 12: A Complete Guide

        - If you are a video editor, filmmaker, or animator, you probably know how important it is to have high-quality visual effects and elements for your projects. Whether you are making an action movie, a horror film, or a sci-fi thriller, you need realistic explosions, fire, smoke, blood, debris, and other effects to enhance your scenes and create a stunning impact on your audience. But where can you find such effects and elements? And how can you use them in your video editing software? One of the best solutions is Action Essentials 2, a collection of over 500 pre-keyed high-definition action stock footage elements that you can easily drag and drop into your compositions. In this article, we will tell you everything you need to know about Action Essentials 2, including what it is, why you need it, how to get it for free, how to install and use it, and how to create amazing videos with it. Let's get started!

        What is Action Essentials 2?

        -

        A brief introduction to the product and its features

        - Action Essentials 2 is a product created by Video Copilot, a company founded by Andrew Kramer, a renowned visual effects artist and instructor. Video Copilot is known for its high-quality products and tutorials for video editors and filmmakers, such as Element 3D, Optical Flares, Saber, and more. Action Essentials 2 is the second version of Action Essentials, which was released in 2009. It is an upgrade from the original version, which had only 250 elements in standard definition. Action Essentials 2 has more than double the number of elements (over 500), and they are all in high definition (720p or 1080p). The elements are divided into 20 categories, such as Atmospheres, Blood Hits, Charges, Debris, Dirt Charges, Explosions, Fireworks, Glass Hits, Muzzle Flashes, Ricochets, Smoke Puffs, Sparks, Water Hits, and more. Each element has multiple variations and angles to choose from. The elements are pre-keyed, which means they have transparent backgrounds and can be easily composited over your footage without any additional keying or masking. They are also compatible with any video editing software that supports QuickTime files with alpha channels. Some of the popular software that can use Action Essentials 2 are Adobe After Effects, Adobe Premiere Pro, Final Cut Pro X, Sony Vegas Pro, DaVinci Resolve Studio Editions.
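Not from the original article, but since "pre-keyed" and "alpha channel" carry this whole section, here is a minimal NumPy sketch of the standard "over" operation that compositing software performs when a pre-keyed element is dropped onto footage (the array names are made up for the example).

```python
import numpy as np

def composite_over(element_rgb: np.ndarray, element_alpha: np.ndarray,
                   footage_rgb: np.ndarray) -> np.ndarray:
    # Standard "over" compositing with values normalized to 0.0-1.0:
    # wherever the pre-keyed element is transparent (alpha = 0) the footage
    # shows through untouched; where it is opaque (alpha = 1) the element wins.
    a = element_alpha[..., np.newaxis]        # broadcast alpha across the RGB channels
    return element_rgb * a + footage_rgb * (1.0 - a)

# Example: a frame composited with a fully transparent element is unchanged.
footage = np.random.rand(1080, 1920, 3)
element = np.zeros((1080, 1920, 3))
alpha = np.zeros((1080, 1920))
assert np.allclose(composite_over(element, alpha, footage), footage)
```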

        Why you need Action Essentials 2 for your video projects

        - Action Essentials 2 is a must-have for any video editor or filmmaker who wants to create realistic and impressive action scenes. With Action Essentials 2, you can: - Save time and money by not having to shoot your own action elements or hire professionals to do it for you. - Enhance your footage with high-quality effects that look natural and believable. - Customize your effects by changing their color, size, speed, direction, opacity, and blending mode. - Combine different elements to create complex and unique effects. - Add motion blur, depth of field, and camera shake to make your effects more dynamic and cinematic. - Learn from the tutorials and tips provided by Video Copilot on how to use Action Essentials 2 effectively.

        How to get Action Essentials 2 for free

        -

        The legal way: using the trial version

        - If you want to try out Action Essentials 2 before buying it, you can download the trial version from the Video Copilot website. The trial version includes 20 free elements from different categories, as well as some bonus tutorials on how to use them. You can use the trial version for personal or commercial projects, as long as you credit Video Copilot as the source of the elements. To download the trial version, you need to create an account on the Video Copilot website, and then go to the product page of Action Essentials 2. There, you will find a link to download the trial version. You will also receive an email with a download link. The trial version is about 1 GB in size, so make sure you have enough space on your computer or external drive.

        The illegal way: downloading from torrent sites

        - Another way to get Action Essentials 2 for free is to download it from torrent sites. Torrent sites are websites that allow users to share files through peer-to-peer networks. By using a torrent client software, such as BitTorrent or uTorrent, you can download files from other users who have them on their computers. However, this method is illegal, as it violates the copyright laws and the terms of service of Video Copilot. By downloading Action Essentials 2 from torrent sites, you are stealing from Video Copilot and depriving them of their rightful income. This can have serious consequences for both you and Video Copilot.

        The risks and consequences of piracy

        - Piracy is not only unethical but also risky. By downloading files from torrent sites, you expose yourself to various dangers, such as: - Viruses and malware that can infect your computer and compromise your security and privacy. - Legal actions and lawsuits that can be taken against you by Video Copilot or other authorities for violating their intellectual property rights. - Fines and penalties that can be imposed on you by law enforcement agencies for breaking the law and causing economic damage to Video Copilot. - Loss of reputation and credibility as a video editor or filmmaker for using stolen content and not respecting the work of others.

        The ethical and moral issues of stealing content

        - Piracy is not only illegal but also immoral. By downloading files from torrent sites, you disrespect Video Copilot and their hard work and creativity. You also harm yourself and the video editing community by: - Devaluing your own skills and talents as a video editor or filmmaker by relying on stolen content instead of creating your own original content. - Discouraging innovation and quality in the video editing industry by reducing the incentive for Video Copilot and other creators to produce new products and improve their existing products. - Damaging the trust and relationship between Video Copilot and their customers by undermining their business model and revenue stream.

        How to install and use Action Essentials 2

        -

        The system requirements and compatibility issues

- Before installing Action Essentials 2 on your computer, you need to make sure that your system meets the minimum requirements for running the product. According to Video Copilot, the minimum requirements are: - Operating system: Windows XP SP3 or later / Mac OS X Leopard or later - Processor: Intel Core Duo / AMD Athlon X64 - Memory: 1 GB RAM - Hard drive space: At least 13 GB free space - Graphics card: Any card that supports QuickTime files with alpha channels - Software: Any video editing software that supports QuickTime files with alpha channels If your system does not meet these requirements, you may experience problems such as slow performance, crashes, or errors when using Action Essentials 2. You also need to make sure that your video editing software is compatible with Action Essentials 2. As mentioned earlier, Action Essentials 2 works with any software that supports QuickTime files with alpha channels. However, some software may have specific settings or preferences that need to be adjusted for optimal results. For example: - In Adobe After Effects, you need to set the color depth to at least 16 bits per channel (bpc) to avoid banding issues in some elements. You also need to enable motion blur for each layer that contains an element if you want realistic motion blur effects. - In Final Cut Pro X, you need to set the project properties to match the resolution (720p or 1080p) and frame rate (24 fps or 30 fps) of the elements you are using. You also need to change the blend mode of each element layer from Normal to Screen if you want transparent backgrounds. Beyond these settings, there are a few basic compositing concepts you will use with every element: - Blending modes: Blending modes are settings that determine how a layer interacts with the layers below it in terms of color and transparency. Different blending modes can create different effects and moods for your elements. For example, the Screen blending mode can make your elements look brighter and more transparent, while the Multiply blending mode can make them look darker and more opaque. You can change the blending mode of a layer by selecting it and choosing a mode from the drop-down menu in your software. Some of the common blending modes for Action Essentials 2 are Screen, Add, Overlay, and Color Dodge (a short numeric sketch of these modes follows this section). - Color correction: Color correction is the process of adjusting the color and contrast of your elements to match the color and contrast of your footage. This can make your elements look more realistic and integrated with your scene. You can use various tools and effects in your software to perform color correction, such as Curves, Levels, Hue/Saturation, Color Balance, and more. You can also use Video Copilot's free plugin called VC Color Vibrance to add some vibrancy and glow to your elements. - Motion tracking: Motion tracking is the process of tracking the movement of an object or a point in your footage and applying that movement to another layer or element. This can make your elements follow the motion of your footage and create a more dynamic and convincing effect. You can use various tools and effects in your software to perform motion tracking, such as Track Motion, Stabilize Motion, Mocha AE, and more. You can also use Video Copilot's free plugin called VC Saber to create realistic motion trails for your elements. - Masking: Masking is the process of hiding or revealing parts of a layer or element using a shape or a path. This can help you isolate or blend your elements with your footage and create custom shapes and effects.
You can use various tools and effects in your software to create masks, such as Pen Tool, Rectangle Tool, Ellipse Tool, Roto Brush Tool, Mask Feather, and more. You can also use Video Copilot's free plugin called VC Reflect to create realistic reflections for your elements.
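The blend-mode remarks above are easier to see with numbers in front of you. This is a small sketch of the textbook formulas for the modes named in this section; it is generic pixel arithmetic, not an After Effects or Final Cut API.

```python
import numpy as np

# Pixel values normalized to 0.0-1.0; "base" is the footage, "layer" is the element.
def multiply(base, layer):
    return base * layer                          # always <= base, which is why it darkens

def screen(base, layer):
    return 1.0 - (1.0 - base) * (1.0 - layer)    # inverted multiply: black in the layer is neutral

def add(base, layer):                            # a.k.a. linear dodge, common for fire and sparks
    return np.clip(base + layer, 0.0, 1.0)

mid = np.array(0.5)
print(multiply(mid, mid), screen(mid, mid), add(mid, mid))  # 0.25 (darker), 0.75 (brighter), 1.0
```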

        How to create amazing videos with Action Essentials 2

        -

        The different types of elements and effects available

- Action Essentials 2 offers a wide range of elements and effects that you can use for various types of videos and genres. Here are some examples: - Explosions: Explosions are one of the most popular and versatile elements in Action Essentials 2. You can use them for action scenes, war scenes, sci-fi scenes, disaster scenes, and more. You can choose from different types of explosions, such as fireballs, shockwaves, blasts, sparks, debris, smoke plumes, and more. You can also combine different explosions to create bigger and more complex explosions. - Fire: Fire is another versatile element in Action Essentials 2. You can use it for scenes involving flames and burning objects, and you can choose from different types of fire, such as fireworks, fireballs, fire trails, and more. You can also combine different fire elements to create bigger and more varied fire effects. - Smoke: Smoke is another essential element in Action Essentials 2. You can use it for scenes involving smoke effects, such as explosions, fires, gunshots, vehicles, factories, and more. You can choose from different types of smoke, such as smoke puffs, smoke plumes, smoke trails, smoke rings, and more. You can also combine different smoke elements to create thicker and denser smoke effects. - Blood: Blood is another important element in Action Essentials 2. You can use it for scenes involving blood effects, such as injuries, wounds, deaths, fights, horror, and more. You can choose from different types of blood, such as blood hits, blood splatters, blood squirts, blood drips, and more. You can also combine different blood elements to create more realistic and gruesome blood effects. - Water: Water elements are useful for scenes involving splashes and liquid impacts. You can choose from different types of water, such as water splashes, water jets, water streams, and more. You can also combine different water elements to create more dynamic and fluid water effects. These are just some of the elements and effects available in Action Essentials 2. You can explore the rest of them by browsing the Action Essentials 2 folder or watching the product overview video on Video Copilot's website.

        The best practices and techniques for editing and compositing

        - To create amazing videos with Action Essentials 2, you need to follow some best practices and techniques for editing and compositing. Here are some of them: - Plan your shots and scenes in advance. Before you start editing and compositing, you should have a clear idea of what you want to achieve and how you want to achieve it. You should plan your shots and scenes in terms of camera angles, lighting, framing, movement, timing, and mood. You should also plan how you want to use the elements and effects in your shots and scenes, such as where to place them, how to animate them, how to blend them, and how to color correct them. - Use reference footage and images. To make your elements and effects look realistic and integrated with your footage, you should use reference footage and images that match the style and genre of your video. You can use reference footage and images from movies, TV shows, video games, or online sources that have similar elements and effects to the ones you want to use. You can also use reference footage and images from real life situations that have similar lighting, color, and atmosphere to your footage. You can use reference footage and images as guides for placing, animating, blending, and color correcting your elements and effects. Blur, Distort, Noise, Color Correction, and more. - Use masks and mattes. To make your elements and effects look more realistic and integrated with your footage, you should use masks and mattes to hide or reveal parts of them. You can use masks and mattes to create custom shapes and effects for your elements. You can also use masks and mattes to blend your elements with your footage and create seamless transitions. You can use various tools and effects in your software to create masks and mattes, such as Pen Tool, Roto Brush Tool, Luma Key, Track Matte, and more. - Use motion blur and depth of field. To make your elements and effects look more dynamic and cinematic, you should use motion blur and depth of field in your compositions. Motion blur is the effect of blurring an object or a point that is moving fast in relation to the camera or the viewer. Depth of field is the effect of blurring the objects or points that are far away or close to the camera or the viewer. You can use various tools and effects in your software to create motion blur and depth of field, such as Motion Blur, CC Force Motion Blur, Camera Lens Blur, Frischluft Lenscare, and more.
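As a companion to the color-correction advice above, here is one crude but common way to think about matching an element to footage numerically: shift each channel's mean and spread toward the footage's statistics. This is a generic sketch in Python, not the Curves or Levels tools the article names.

```python
import numpy as np

def match_color(element: np.ndarray, footage: np.ndarray) -> np.ndarray:
    # Per-channel mean/std transfer (Reinhard-style): a rough numeric analogue of
    # nudging Levels until the element sits in the same tonal range as the plate.
    out = element.astype(np.float64).copy()
    for c in range(out.shape[-1]):
        e_mean, e_std = out[..., c].mean(), out[..., c].std() + 1e-8
        f_mean, f_std = footage[..., c].mean(), footage[..., c].std()
        out[..., c] = (out[..., c] - e_mean) / e_std * f_std + f_mean
    return np.clip(out, 0.0, 1.0)
```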

        Some examples and inspiration from other users

- To get some inspiration and ideas for creating amazing videos with Action Essentials 2, you can watch some examples from other users who have used the product in their projects. Here are some of them: - Action Essentials 2 - Official Trailer: This is the official trailer for Action Essentials 2 that showcases some of the elements and effects in action. You can watch how Video Copilot used Action Essentials 2 to create stunning scenes and sequences for different genres and styles. - Action Essentials 2 - User Videos: This is a playlist of user videos that have used Action Essentials 2 in their projects. You can watch how other users used Action Essentials 2 to create various types of videos, such as action movies, horror movies, sci-fi movies, music videos, commercials, and more. - Action Essentials 2 - Tutorials: These are tutorials from Video Copilot that show how to use Action Essentials 2 with different video editing software, such as After Effects, Premiere Pro, Final Cut Pro X, and more. You can also learn how to use Action Essentials 2 with different plugins, such as Element 3D, Optical Flares, Saber, and more. You can also learn how to use Action Essentials 2 for different effects, such as explosions, fire, smoke, blood, water, and more.

        Conclusion and FAQs

        - In conclusion, Action Essentials 2 is a great product for video editors and filmmakers who want to create realistic and impressive action scenes and effects. It offers over 500 pre-keyed high-definition action stock footage elements that you can easily drag and drop into your compositions. It is compatible with any video editing software that supports QuickTime files with alpha channels. It is easy to install and use, and it comes with tutorials and tips from Video Copilot on how to use it effectively. It is also affordable and worth the price. If you want to buy Action Essentials 2 or learn more about it, you can visit Video Copilot's website or follow them on social media. You can also watch some examples and inspiration from other users who have used Action Essentials 2 in their projects. Here are some frequently asked questions about Action Essentials 2: - Q: How can I get Action Essentials 2 for free? - A: You can get Action Essentials 2 for free by downloading the trial version from Video Copilot's website. The trial version includes 20 free elements from different categories, as well as some bonus tutorials on how to use them. You can use the trial version for personal or commercial projects, as long as you credit Video Copilot as the source of the elements. - Q: How can I download Action Essentials 2 from torrent sites? - A: You can download Action Essentials 2 from torrent sites by using a torrent client software, such as BitTorrent or uTorrent. However, this method is illegal and risky, as it violates the copyright laws and the terms of service of Video Copilot. By downloading Action Essentials 2 from torrent sites, you are stealing from Video Copilot and depriving them of their rightful income. This can have serious consequences for both you and Video Copilot. - Q: How can I install and activate Action Essentials 2? open your video editing software and import the elements you want to use from the Action Essentials 2 folder into your project, and activate Action Essentials 2 by entering your license key when prompted by Video Copilot's activation tool. - Q: How can I use Action Essentials 2 in my video projects? - A: To use Action Essentials 2 in your video projects, you need to know some basic concepts and techniques for compositing and editing. You need to plan your shots and scenes in advance, use reference footage and images, use multiple layers and effects, use masks and mattes, use motion blur and depth of field, and use color correction. You also need to choose the right elements and effects for your video genre and style, and customize them according to your preferences and needs. - Q: How can I create amazing videos with Action Essentials 2? - A: To create amazing videos with Action Essentials 2, you need to follow some best practices and techniques for editing and compositing. You also need to get some inspiration and ideas from other users who have used Action Essentials 2 in their projects. You can watch some examples and inspiration from Video Copilot's official trailer, user videos, and tutorials. I hope you enjoyed reading this article. If you have any questions or feedback, please let me know. Thank you for your attention. ?

        -

        action essentials 2 free download full 12


        Download Zip » https://tinourl.com/2uL2cq



        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Addison Wesley Professional Unity Certified 3D Artist Courseware Master the Skills and Techniques of 3D Art in Unity.md b/spaces/raedeXanto/academic-chatgpt-beta/Addison Wesley Professional Unity Certified 3D Artist Courseware Master the Skills and Techniques of 3D Art in Unity.md deleted file mode 100644 index 8bb6f5fd1c884abd6d7ad9722ba876ddd4af832b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Addison Wesley Professional Unity Certified 3D Artist Courseware Master the Skills and Techniques of 3D Art in Unity.md +++ /dev/null @@ -1,13 +0,0 @@ - -
        - Asset Creation and Management: How to import and manipulate 3D assets in Unity
        - Lighting, Reflection and Post-Processing Effects: How to create realistic and stylized lighting effects in Unity
        - Integrating Scripts for Scene Integration: How to use scripts to control scene logic and interactivity in Unity
        - Character Setup: How to set up character models, animations and controllers in Unity
        - Setting up Cutscenes: How to create cinematic sequences using Timeline and Cinemachine in Unity
        - Conclusion: Why take the courseware and how to prepare for the exam
        - FAQs: Some common questions and answers about the courseware | # Article

        Addison Wesley Professional – Unity Certified 3D Artist Courseware

        - Are you a 3D artist who wants to learn how to use Unity, the world's most popular real-time 3D development platform? Do you want to boost your skills and career prospects by earning a professional certification from Unity? If so, you might be interested in the Addison Wesley Professional – Unity Certified 3D Artist Courseware. This courseware is a series of five self-paced courses that will help you prepare for the Unity Certified 3D Artist exam, the official certification for entry- to mid-level Unity artists. By taking this courseware, you will learn how to complete realistic art implementation tasks in Unity that are aligned to the topics covered on the exam. In this article, we will give you an overview of what you will learn in each course, and why you should take this courseware if you want to become a Unity Certified 3D Artist.

        Asset Creation and Management

- The first course in the series covers asset creation and management. In this course, you will learn how to:
- Select the relevant import settings for importing 3D assets into Unity
- Troubleshoot common issues with imported 3D assets
- Identify techniques to prototype scenes and maintain prefabs throughout the production cycle
- Recognize proper folder structure and naming conventions for organizing assets
- Apply materials and textures to 3D models using the Standard Shader
- Use UV mapping tools to adjust texture coordinates
- Create custom shaders using Shader Graph
- Optimize assets for performance and quality

You will practice these skills by working on two main projects: a Kitchen Configuration application with a realistic aesthetic, and a 3D video game level with a more stylized science-fantasy look.

        Lighting, Reflection and Post-Processing Effects

- The second course in the series covers lighting, reflection and post-processing effects. In this course, you will learn how to:
- Use different types of lights and shadows in Unity
- Adjust light settings such as intensity, color, range, spot angle, etc.
- Use light probes and reflection probes to create dynamic lighting and reflections
- Use baked lighting and lightmapping to improve performance and quality
- Use post-processing effects such as bloom, depth of field, color grading, etc. to enhance the mood and atmosphere of your scenes
- Use the High Definition Render Pipeline (HDRP) to create high-fidelity graphics

You will practice these skills by working on the same two projects as before, but with different lighting scenarios and effects.

        Integrating Scripts for Scene Integration

- The third course in the series covers integrating scripts for scene integration. In this course, you will learn how to:
- Use C# scripts to control scene logic and interactivity in Unity
- Use variables, methods, classes, loops, conditionals, etc. in C#
- Use built-in Unity components such as Rigidbody, Collider, Animator, etc.
- Use events and delegates to communicate between scripts
- Use scriptable objects to store data and logic
- Use UI elements such as buttons, sliders, text, etc. to create user interfaces
- Use raycasting and collision detection to interact with objects in your scenes

You will practice these skills by working on the same two projects as before, but with added functionality and interactivity.
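To give a concrete feel for the kind of script this course describes, here is a minimal illustrative sketch (it is not taken from the courseware; the class name, field names and force value are assumptions for the example). It combines raycasting, a Rigidbody and a serialized field in the way the points above suggest:

```csharp
using UnityEngine;

// Illustrative example only: a simple "click to push" interaction that combines
// raycasting, a Rigidbody component, and Inspector-exposed fields.
public class ClickToPush : MonoBehaviour
{
    [SerializeField] private float pushForce = 5f;   // tweakable in the Inspector
    [SerializeField] private Camera sceneCamera;     // assign the scene's camera here

    private void Update()
    {
        // On left mouse click, cast a ray from the camera through the cursor.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = sceneCamera.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                // If the object we hit has a Rigidbody, push it away from the camera.
                Rigidbody body = hit.collider.attachedRigidbody;
                if (body != null)
                {
                    body.AddForce(ray.direction * pushForce, ForceMode.Impulse);
                }
            }
        }
    }
}
```

A small component like this is attached to a GameObject and configured in the Inspector, which is the general component-based workflow the course builds on.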

        Character Setup

- The fourth course in the series covers character setup. In this course, you will learn how to:
- Set up character models, animations and controllers in Unity
- Use humanoid and generic rigs for character animation
- Use blend trees, state machines, parameters, transitions, etc. to create animation logic
- Use inverse kinematics (IK) to adjust character poses
- Use root motion to drive character movement
- Use animation events to trigger actions or sounds
- Use ragdoll physics to create realistic character reactions

You will practice these skills by working on a new project: a third-person shooter game with a sci-fi theme.
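As a rough illustration of the animation-logic side of this course, the sketch below feeds an Animator Controller's parameters from player input. It is not from the courseware; the parameter names "Speed" and "Jump" are assumptions and must match whatever parameters your own Animator Controller and blend tree define:

```csharp
using UnityEngine;

// Illustrative example only: driving Animator Controller parameters
// (used by blend trees and state-machine transitions) from simple input.
[RequireComponent(typeof(Animator))]
public class CharacterAnimationDriver : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // A blend tree can read "Speed" to blend between idle, walk and run clips.
        float speed = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical")).magnitude;
        animator.SetFloat("Speed", speed);

        // A trigger parameter can drive a transition into a jump state.
        if (Input.GetButtonDown("Jump"))
        {
            animator.SetTrigger("Jump");
        }
    }
}
```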

        Setting up Cutscenes

- The fifth and final course in the series covers setting up cutscenes. In this course, you will learn how to:
- Create cinematic sequences using Timeline and Cinemachine in Unity
- Use tracks, clips, markers, signals, etc. to edit your cutscenes
- Use virtual cameras, shots, blends, transitions, etc. to control your camera angles and movements
- Use animation tracks, audio tracks, activation tracks, etc. to synchronize your cutscenes with your gameplay
- Use Cinemachine features such as noise, impulse source, clear shot camera, etc. to add realism and variety to your cutscenes

You will practice these skills by working on the same third-person shooter game project as before, but with added cutscenes.
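As an illustration of how a Timeline cutscene is typically started from gameplay, the sketch below plays a PlayableDirector (the component that drives a Timeline asset) when the player enters a trigger volume. It is not from the courseware; the "Player" tag and the field names are assumptions for the example:

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Illustrative example only: start a Timeline cutscene when the player walks
// into a trigger collider placed in the level. Requires a trigger collider on
// this object and a Rigidbody on the player for OnTriggerEnter to fire.
public class CutsceneTrigger : MonoBehaviour
{
    [SerializeField] private PlayableDirector director;  // references the Timeline to play
    private bool played;

    private void OnTriggerEnter(Collider other)
    {
        // Play the cutscene once, only for the object tagged "Player".
        if (!played && other.CompareTag("Player"))
        {
            played = true;
            director.Play();
        }
    }
}
```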

        Conclusion

        - As you can see, the Addison Wesley Professional – Unity Certified 3D Artist Courseware covers a lot of ground when it comes to learning how to use Unity as a 3D artist. By taking this courseware, you will not only gain valuable knowledge and skills, but also prepare yourself for the official certification exam from Unity. The certification exam is a 90-minute online test that consists of 60 multiple-choice questions. You need a score of at least 70% to pass the exam and earn your certification. The certification is valid for two years from the date of passing. If you are interested in taking this courseware, you can find it on O'Reilly Media's learning platform, where you can access it with a 10-day free trial or a subscription. You can also find more information about the certification exam on Unity's website, where you can register for it online. We hope this article has given you a good overview of what you can expect from this courseware, and why it is worth taking if you want to become a Unity Certified 3D Artist.

        FAQs

- Here are some common questions and answers about the courseware:
- Q: How long does it take to complete the courseware?
- A: The courseware consists of five courses, each with about four hours of video content. However, the actual time it takes to complete each course may vary depending on your pace, experience level, and how much time you spend on practicing the projects.
- Q: Do I need any prior experience with Unity or C# to take this courseware?
- A: This courseware is designed for entry- to mid-level Unity artists who have some basic familiarity with Unity's interface, tools, and workflows. You do not need any prior experience with C#, as the courseware will teach you the basics of scripting in Unity. However, if you have some programming background, it may help you grasp some concepts faster.
- Q: What software do I need to take this courseware?
- A: You need a computer that meets Unity's system requirements, and a stable internet connection. You also need access to O'Reilly Media's learning platform, where you can watch the videos, download the project files, and take quizzes. You also need access to Unity's website, where you can download the latest version of Unity (2019.4 or later), and register for the certification exam.
- Q: How much does it cost to take this courseware?
- A: The cost of taking this courseware depends on whether you have a subscription or a free trial with O'Reilly Media's learning platform. A subscription costs $49 per month or $499 per year, and gives you unlimited access to all their books, videos, courses, and live events. A free trial lasts for 10 days, and gives you limited access to some of their content. You can cancel your subscription or free trial at any time. The cost of taking the certification exam is $249 USD per attempt. You can pay for it online using a credit card or PayPal.
- Q: How do I get my certification after passing the exam?
- A: After passing the exam, you will receive an email from Unity with instructions on how to claim your digital badge and certificate. You can also access your certification status and credentials on Unity's website. You can use your badge and certificate to showcase your skills and achievements on your resume, portfolio, social media profiles, etc.

        -

        Addison Wesley Professional – Unity Certified 3D Artist Courseware


        DOWNLOADhttps://tinourl.com/2uL07s



        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Asunsoft Windows Password Reset Advanced LINK Full Crack.md b/spaces/raedeXanto/academic-chatgpt-beta/Asunsoft Windows Password Reset Advanced LINK Full Crack.md deleted file mode 100644 index 3f18f3971a337f305aa960ea234c2f6f47ef481d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Asunsoft Windows Password Reset Advanced LINK Full Crack.md +++ /dev/null @@ -1,118 +0,0 @@ -
        -

        Asunsoft Windows Password Reset Advanced Full Crack: What Is It and Why You Should Avoid It

        -

If you have forgotten or lost your Windows password, you may be looking for a way to reset it without reinstalling your system or losing your data. One of the programs that claims to help you with this task is Asunsoft Windows Password Reset Advanced. However, you may also be tempted to download a cracked version of this software to save money or bypass the license activation. In this article, we will explain what Asunsoft Windows Password Reset Advanced is, what a crack is, and why you should avoid using cracked software at all costs.

        -

        asunsoft windows password reset advanced full crack


        Download Zip →→→ https://tinourl.com/2uL0l6



        -

        What Is Asunsoft Windows Password Reset Advanced?

        -

        Asunsoft Windows Password Reset Advanced is a program that enables you to reset all Windows local and domain passwords for Windows 11/10/8/7/Vista/XP/NT/2008/2003/2000. It supports running from CD/DVD or USB and you can create a new administrator account without logging in.

        -

        Features and benefits of Asunsoft Windows Password Reset Advanced

        -

        Some of the features and benefits of Asunsoft Windows Password Reset Advanced are:

        -
          -
        • It can reset passwords for all user accounts, including administrator, standard, guest, domain administrator, domain user, etc.
        • -
        • It can reset passwords for all Windows versions, including 32-bit and 64-bit systems.
        • -
        • It can reset passwords for various file systems, such as NTFS, FAT16, FAT32, exFAT, etc.
        • -
        • It can reset passwords for various types of hard drives, such as IDE, SATA, SCSI, RAID, etc.
        • -
        • It can create a new administrator account without logging in.
        • -
        • It can burn a password reset disk on CD/DVD or USB flash drive.
        • -
        • It has a user-friendly interface and easy-to-follow steps.
        • -
        • It has a high success rate and fast speed.
        • -
        -

        Pricing and licensing of Asunsoft Windows Password Reset Advanced

        -

        The official price of Asunsoft Windows Password Reset Advanced is $45.95 for a single license. You can purchase it from the official website or from authorized resellers. You will receive a license key via email after payment. You need to activate the software with the license key before using it. The license key is valid for one computer only. If you want to use the software on multiple computers, you need to purchase multiple licenses or choose a different edition.

        -

        What Is a Crack and How Does It Work?

        -

        A crack is a modified version of a software that bypasses or removes its protection mechanisms, such as license keys, encryption keys, digital rights management (DRM), etc. A crack can be a file that replaces or modifies the original executable file of the software, or a program that generates valid license keys for the software.

        -

        -

        The definition and types of cracks

        -

        The methods and tools used by crackers

        -

        Crackers use various methods and tools to crack software, such as:

        -
          -
        • Reverse engineering: This is the process of analyzing the software code or behavior to understand how it works and how to modify it.
        • -
        • Debugging: This is the process of finding and fixing errors or bugs in the software code or execution.
        • -
        • Disassembling: This is the process of converting the software code from a binary format to a human-readable format, such as assembly language or source code.
        • -
        • Patching: This is the process of changing or adding some bytes or instructions in the software code to alter its functionality or behavior.
        • -
        • Keygen: This is a program that generates valid license keys for the software based on some algorithm or pattern.
        • -
        • Loader: This is a program that loads the software with some modifications or bypasses some checks or validations.
        • -
        • Crackme: This is a challenge or a puzzle that crackers try to solve by cracking a software or a file.
        • -
        -

        What Are the Risks of Using Cracked Software?

        -

Using cracked software may seem like a good way to save money or time, but it comes with many risks and disadvantages that outweigh any benefits. Here are some of the risks of using cracked software:

        -

        Malware infections and data theft

        -

        One of the most common and serious risks of using cracked software is malware infection. Malware is any malicious software that can harm your computer or data, such as viruses, worms, trojans, ransomware, spyware, adware, etc. Crackers often embed malware into their cracks to infect your computer or steal your data. For example, they may use keyloggers to record your keystrokes and passwords, ransomware to encrypt your files and demand payment for decryption, or spyware to monitor your online activities and personal information. Malware can also damage your system files, slow down your computer, consume your bandwidth, display unwanted ads, or redirect you to malicious websites. Malware infections can be hard to detect and remove, and they can compromise your security and privacy.

        -

        Legal issues and penalties

        -

        Another risk of using cracked software is legal trouble. Cracking and using cracked software is illegal in most countries and regions, as it violates the intellectual property rights of the software developers and distributors. If you are caught using cracked software, you may face legal consequences, such as lawsuits, fines, penalties, or even jail time. For example, in the US, you can be fined up to $150,000 for each infringed copy of software. In addition, you may also lose your reputation, credibility, or trustworthiness as a user or a professional.

        -

        Performance and functionality problems

        -

        A third risk of using cracked software is performance and functionality issues. Cracked software may not work properly or as intended, as it may have errors, bugs, glitches, or compatibility problems. For example, it may crash frequently, freeze your computer, corrupt your files, display incorrect results, or fail to perform some functions. Cracked software may also lack some features or updates that are available in the original version. For example, it may not support some formats, languages, devices, or platforms. Cracked software may also have conflicts with other programs or systems on your computer. These issues can affect your productivity, efficiency, quality, or satisfaction as a user.

        -

        What Are the Alternatives to Cracked Software?

        -

If you want to avoid the risks of using cracked software, you should look for legitimate alternatives that can meet your needs and budget. Here are some of the alternatives to cracked software:

        -

        Free and open source software

        -

        Free and open source software (FOSS) is software that is available for free and whose source code is accessible and modifiable by anyone. FOSS can be a great alternative to cracked software, as it offers many benefits, such as:

        -
          -
        • It is legal and ethical to use.
        • -
        • It is secure and reliable to use.
        • -
        • It is updated and maintained by a community of developers and users.
        • -
        • It is customizable and adaptable to your preferences and needs.
        • -
        • It supports various platforms and standards.
        • -
        • It fosters innovation and collaboration among users.
        • -
        -

        Some examples of FOSS that can replace popular proprietary software are:

        - - - - - - - -
| Proprietary software | Free and open source alternatives |
| --- | --- |
| Microsoft Office | LibreOffice, OpenOffice, Google Docs |
| Adobe Photoshop | GIMP, Krita, Inkscape |
| Windows OS | Linux, Ubuntu, Fedora |
| WinRAR | 7-Zip, PeaZip, Bzip2 |
| Norton Antivirus | Avast, AVG, ClamAV |
        -

        Trial versions and discounts

        -

Trial versions and discounts are another alternative to cracked software. Trial versions are software that you can use for free for a limited period of time or with limited features. Discounts let you buy software for a lower price than the original price. Trial versions and discounts can help you test the software before buying it or save some money while buying it legally. Some of the benefits of trial versions and discounts are:

        -
          -
        • They are legal and ethical to use.
        • -
        • They are secure and reliable to use.
        • -
        • They are updated and supported by the software developers.
        • -
        • They have full or partial features and functionality of the software.
        • -
        • They can be extended or upgraded to the full version of the software.
        • -
        -

        Some examples of trial versions and discounts that you can find online are:

        - - - - - - - -
| Software | Trial version or discount |
| --- | --- |
| Asunsoft Windows Password Reset Advanced | Trial version: You can reset passwords for local accounts only. Discount: You can get 20% off with coupon code ASUN-8I5G-HGPO. |
| Microsoft Office 365 | Trial version: You can use it for free for one month. Discount: You can get it for free or at a reduced price if you are a student or an educator. |
| Adobe Creative Cloud | Trial version: You can use it for free for seven days. Discount: You can get 60% off if you are a student or a teacher. |
| Windows 10 | Trial version: You can use it for free for 90 days. Discount: You can get it for free if you have a genuine Windows 7 or 8 license. |
| NordVPN | Trial version: You can use it for free for 30 days. Discount: You can get 68% off if you buy a two-year plan. |
        -

        Password recovery tools

        -

        Password recovery tools are another alternative to cracked software. Password recovery tools are software that can help you recover or reset your forgotten or lost passwords for various accounts or files. Password recovery tools can be useful if you want to access your Windows system or data without resetting your password or losing your data. Some of the benefits of password recovery tools are:

        -
          -
        • They are legal and ethical to use.
        • -
        • They are secure and reliable to use.
        • -
        • They are updated and supported by the software developers.
        • -
        • They have various features and options to recover or reset your passwords.
        • -
        • They have high success rates and fast speeds.
        • -
        -

        Some examples of password recovery tools that you can use instead of Asunsoft Windows Password Reset Advanced are:

        - - - - - - - -
| Password recovery tool | Features and benefits |
| --- | --- |
| Ophcrack | It can recover Windows passwords using rainbow tables. It can run from a CD or a USB. It can crack passwords up to 14 characters long. It supports Windows 11/10/8/7/Vista/XP. |
| PassFab 4WinKey | It can reset or remove Windows passwords for local and domain accounts. It can create a new administrator account without logging in. It can run from a CD, a DVD, or a USB. It supports Windows 11/10/8/7/Vista/XP/NT/2008/2003/2000. |
| PCUnlocker | It can reset or bypass Windows passwords for local and domain accounts. It can enable or unlock disabled or locked accounts. It can run from a CD, a DVD, or a USB. It supports Windows 11/10/8/7/Vista/XP/NT/2008/2003/2000. |
| Lazesoft Recover My Password | It can reset or remove Windows passwords for local and domain accounts. It can create a new administrator account without logging in. It can run from a CD, a DVD, or a USB. It supports Windows 11/10/8/7/Vista/XP/NT/2008/2003. |
| Kon-Boot | It can bypass Windows passwords for local accounts without resetting them. It can run from a CD, a DVD, or a USB. It supports Windows 11/10/8/7/Vista/XP. |
        -

        Conclusion

        -

        In conclusion, Asunsoft Windows Password Reset Advanced is a software that can help you reset your Windows passwords for local and domain accounts. However, using a cracked version of this software is not a good idea, as it can expose you to malware infections, legal issues, and performance problems. Instead, you should look for legitimate alternatives, such as free and open source software, trial versions and discounts, or password recovery tools. These alternatives can help you access your Windows system or data without compromising your security, privacy, or quality.

        -

        FAQs

        -

        Here are some frequently asked questions about Asunsoft Windows Password Reset Advanced and cracked software:

        -
          -
        1. Q: How do I download Asunsoft Windows Password Reset Advanced?
        2. -
        3. A: You can download Asunsoft Windows Password Reset Advanced from the official website or from authorized resellers. You need to purchase a license key to activate the software before using it.
        4. -
        5. Q: How do I use Asunsoft Windows Password Reset Advanced?
        6. -
        7. A: You need to create a password reset disk on a CD/DVD or USB flash drive using another accessible computer. Then, you need to boot your locked computer from the password reset disk and follow the instructions on the screen to reset your password.
        8. -
        9. Q: How do I find a crack for Asunsoft Windows Password Reset Advanced?
        10. -
        11. A: You should not look for or use a crack for Asunsoft Windows Password Reset Advanced, as it is illegal and risky. A crack is a modified version of the software that bypasses its protection mechanisms, such as license keys, encryption keys, etc. A crack can infect your computer with malware, expose you to legal consequences, or cause performance issues.
        12. -
        13. Q: What are some free and open source alternatives to Asunsoft Windows Password Reset Advanced?
        14. -
        15. A: Some free and open source alternatives to Asunsoft Windows Password Reset Advanced are Ophcrack, Lazesoft Recover My Password, Kon-Boot, etc. These are software that can help you recover or reset your Windows passwords for free and legally.
        16. -
        17. Q: What are some trial versions and discounts for Asunsoft Windows Password Reset Advanced?
        18. -
        19. A: You can try Asunsoft Windows Password Reset Advanced for free for local accounts only. You can also get 20% off with coupon code ASUN-8I5G-HGPO. Some other software that offer trial versions and discounts are PassFab 4WinKey, PCUnlocker, etc.
        20. -

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Crack para Halo CE CD Key Solucin a los Errores ms Comunes.md b/spaces/raedeXanto/academic-chatgpt-beta/Crack para Halo CE CD Key Solucin a los Errores ms Comunes.md deleted file mode 100644 index 3c25413b1315d8fd88da8b5ea0da70154c2345f4..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Crack para Halo CE CD Key Solucin a los Errores ms Comunes.md +++ /dev/null @@ -1,96 +0,0 @@ -
        -

        Crack para Halo CE CD Key: How to Play Halo Custom Edition Online Without a Valid CD Key

        -

        If you are a fan of Halo, you might have heard of Halo Custom Edition, a standalone expansion for the PC version of Halo: Combat Evolved. This expansion allows you to play custom maps created by other players and only has multiplayer mode. However, to play online, you need a valid CD key, which can be hard to find or expensive to buy. Fortunately, there are ways to bypass this requirement and play online without a valid CD key. In this article, we will show you two methods to get a crack para Halo CE CD key and enjoy playing online with your friends.

        -

        crack para halo ce cd key


        DOWNLOADhttps://tinourl.com/2uL0b3



        -

        Introduction

        -

        What is Halo Custom Edition?

        -

        Halo Custom Edition, commonly abbreviated as Halo CE, is a standalone expansion for the PC version of Halo: Combat Evolved. It was released in 2004 by Gearbox Software and Bungie, with the support of Microsoft Game Studios. Unlike the original game, Halo CE does not have a single-player campaign mode. Instead, it focuses on multiplayer mode and allows players to create and play custom maps using a map editor tool called Halo Editing Kit. These custom maps can have different game modes, weapons, vehicles, enemies, and scenery. Some of the most popular custom maps are Coldsnap, Hugeass, Extinction, and Yoyorast Island.

        -

        Why do you need a crack para Halo CE CD key?

        -

        To play online on official servers or join other players' games, you need a valid CD key for Halo CE. A CD key is a unique alphanumeric code that is used to verify that you own a legitimate copy of the game. However, since Halo CE is an old game, it can be hard to find or buy a new copy with a valid CD key. Moreover, some players might have lost their original CD keys or have them stolen by hackers. If you don't have a valid CD key, you will get an error message saying "Your CD key is invalid" when you try to join or host an online game.

        -

        How to get a crack para Halo CE CD key?

        -

        There are two main methods to get a crack para Halo CE CD key and play online without any problems. The first method is to download a no-CD-key crack folder that contains some files that will bypass the CD key check. The second method is to use a CD-key generator that will create a random but valid CD key for you. Both methods are easy and safe to use if you follow the instructions carefully and download from trusted sources. We will explain each method in detail below.

        -

        halo combat evolved cd key generator
        -halo ce no cd crack download
        -halo ce product key free
        -halo ce crack multiplayer
        -halo ce serial key 2021
        -halo ce activation code
        -halo ce crack patch
        -halo ce license key
        -halo ce no cd patch
        -halo ce crack online
        -halo combat evolved product key generator
        -halo ce crack fix
        -halo ce registration code
        -halo ce crack file
        -halo ce no cd exe
        -halo combat evolved cd key crack
        -halo ce crack version
        -halo ce serial number
        -halo ce no cd key
        -halo combat evolved activation code
        -halo ce crack download free
        -halo ce product key generator
        -halo ce crack gamecopyworld
        -halo combat evolved license key
        -halo ce no cd mod
        -halo combat evolved no cd crack download
        -halo ce crack skidrow
        -halo combat evolved serial key 2021
        -halo ce no cd launcher
        -halo combat evolved crack multiplayer
        -halo ce crack only
        -halo combat evolved product key free
        -halo ce crack rar
        -halo combat evolved serial number
        -halo ce no cd patch download
        -halo combat evolved crack online
        -halo ce crack reloaded
        -halo combat evolved registration code
        -halo ce no cd required patch
        -halo combat evolved crack patch
        -halo ce crack steam
        -halo combat evolved product key generator download
-halo ce no cd update patch v1.09

        -

        Method 1: Downloading a No-CD-Key Crack Folder

        -

        Step 1: Download Halo Custom Edition and the latest patch

        -

        The first step is to download and install Halo Custom Edition on your PC. You can download it from this link, which also contains the original PC port for Halo: Combat Evolved. You will need both games to play online. After downloading, run the installer and follow the instructions on the screen. You will also need to enter a CD key during the installation process. You can use any random CD key for now, as we will replace it later with the crack folder. You can find some example CD keys in this link.

        -

        The next step is to download and install the latest patch for Halo Custom Edition, which is version 1.0.10. This patch fixes some bugs and improves compatibility with newer operating systems. You can download it from this link, which also contains the patch for Halo: Combat Evolved. After downloading, run the patch exe file and follow the instructions on the screen.

        -

        Step 2: Download a no-CD-key crack folder from a trusted source

        -

        The second step is to download a no-CD-key crack folder that contains some files that will bypass the CD key check when you play online. You can download it from this link, which also contains the crack folder for Halo: Combat Evolved. Make sure you download from a trusted source and scan the files for viruses before using them.

        -

        Step 3: Copy the files from the crack folder to your Halo Custom Edition directory

        -

        The third step is to copy the files from the crack folder to your Halo Custom Edition directory, which is usually located at C:\Program Files (x86)\Microsoft Games\Halo Custom Edition\. You will need to replace some existing files with the ones from the crack folder, so make sure you back up your original files before doing this. The files you need to copy are:

        -
          -
        • haloce.exe
        • -
        • strings.dll
        • -
        • binkw32.dll
        • -
        • binkw32.ini
        • -
        • msvcp60.dll
        • -
        • msvcr70.dll
        • -
        • msvcr71.dll
        • -
        • msvcr80.dll
        • -
        • msvcr90.dll
        • -
        • vcredist_x86.exe
        • -
        -

        After copying these files, you are done with this method.

        -

        Step 4: Run Halo Custom Edition and enjoy playing online without a valid CD key

        -

        The final step is to run Halo Custom Edition and enjoy playing online without any problems. You can launch the game by double-clicking on haloce.exe or using a shortcut on your desktop or start menu. You can join or host online games using either LAN or Internet options in the multiplayer menu. You can also browse servers using tools like Halo Anticheat 2 (HAC2) or Halo Chimera. You don't need to worry about getting banned or kicked out of servers because of your invalid CD key.

        -

        Method 2: Using a CD-Key Generator

        -

        Step 1: Download Halo Custom Edition and the latest patch

        -

        This step is exactly the same as in method 1. You need to download and install both Halo Custom Edition and its latest patch on your PC.

        -

        Step 2: Download a CD-key generator from a trusted source

        -

        The second step is to download a CD-key generator that will create a random but valid CD key for you. You can download it from this link, which also contains a video tutorial on how to use it. Make sure you download from a trusted source and scan the file for viruses before using it.

        -

        Step 3: Run the CD-key generator and copy the generated key

        -or changing your settings. To play offline, you need to launch the game with the -console parameter. You can do this by right-clicking on haloce.exe or its shortcut and selecting Properties. Then, in the Target field, add -console at the end of the line. For example, it should look like this: "C:\Program Files (x86)\Microsoft Games\Halo Custom Edition\haloce.exe" -console. Then, click OK and run the game. You will see a console window appear in the game. To play offline, type map_name [map name] in the console and press Enter. For example, to play Blood Gulch, type map_name bloodgulch and press Enter.
      • -
      • Q: Can I use the same CD key for both Halo Custom Edition and Halo: Combat Evolved?
      • -
      • A: No, you cannot use the same CD key for both Halo Custom Edition and Halo: Combat Evolved. Each game requires a different CD key to play online. However, you can use the same crack folder for both games to bypass the CD key check. You can download it from this link, which also contains both games and their latest patches.
      • -
      • Q: Can I play Halo Custom Edition online with other players who have a valid CD key?
      • -
      • A: Yes, you can play Halo Custom Edition online with other players who have a valid CD key. The crack folder or the CD-key generator does not affect your compatibility with other players or servers. You can join or host any online game as long as you have the same version and map as the other players.
      • -
      • Q: Can I create my own custom maps for Halo Custom Edition?
      • -
      • A: Yes, you can create your own custom maps for Halo Custom Edition using a map editor tool called Halo Editing Kit (HEK). You can download it from this link, which also contains tutorials and resources on how to use it. You can also download and install other custom maps created by other players from websites like Halo Maps or Halo CE3.
      • -
      • Q: Can I get banned or kicked out of servers for using a crack para Halo CE CD key?
      • -
      • A: No, you cannot get banned or kicked out of servers for using a crack para Halo CE CD key. The crack folder or the CD-key generator does not modify any files that are detected by anti-cheat systems or server administrators. However, you can still get banned or kicked out of servers for other reasons, such as cheating, hacking, griefing, spamming, or breaking server rules. Therefore, we advise you to play fair and respect other players and servers.
      • -
      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Arcgis 93 Crack Free.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Arcgis 93 Crack Free.md deleted file mode 100644 index f73068f95baad57fa792783ad396b110e35039b9..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Arcgis 93 Crack Free.md +++ /dev/null @@ -1,68 +0,0 @@ - -

      Download Arcgis 9.3 Crack Free: A Complete Guide

      -

Arcgis is one of the most popular and powerful geographic information system (GIS) packages in the world. It allows you to create, analyze, visualize, and share spatial data and maps for various purposes. Whether you are a student, researcher, planner, engineer, or business owner, you can benefit from using Arcgis to solve your spatial problems.

      -

      Download Arcgis 93 Crack Free


      Downloadhttps://tinourl.com/2uL3pv



      -

      However, Arcgis is not a cheap software to buy or maintain. The latest version of Arcgis Desktop (10.8) costs $1,500 for a single-use license or $700 per year for a concurrent-use license. And if you want to use some of the advanced extensions or tools, you have to pay extra fees.

      -

      That's why many people are looking for ways to download Arcgis crack free, especially the older versions like Arcgis 9.3. A crack is a modified file that bypasses the software's security system and allows you to use it without paying for a license.

      -

      But is it legal and safe to download Arcgis crack free? How can you find a reliable source for the crack file? How can you install and use Arcgis crack free without any problems?

      -

      -

      In this article, we will answer all these questions and more. We will provide you with a complete guide on how to download Arcgis 9.3 crack free and use it effectively.

      -

      What is Arcgis 9.3?

      -

      Arcgis 9.3 is an older version of Arcgis Desktop that was released in June 2008. It consists of three main applications: ArcMap, ArcCatalog, and ArcToolbox.

      -

      ArcMap is the main application for creating and editing maps, performing spatial analysis, and displaying geographic data.

      -

      ArcCatalog is the application for managing and organizing your spatial data sources, such as shapefiles, geodatabases, raster files, etc.

      -

      ArcToolbox is the application that contains hundreds of tools for performing various spatial operations, such as geoprocessing, conversion, projection, etc.

      -

      Features and Benefits of Arcgis 9.3

      -

      Even though Arcgis 9.3 is an older version of Arcgis Desktop, it still has many features and benefits that make it a useful and powerful GIS software. Some of these features and benefits are:

      -
        -
      • It supports a wide range of spatial data formats, such as vector, raster, tabular, network, 3D, etc.
      • -
      • It has a user-friendly interface that allows you to easily navigate and customize your workspace.
      • -
      • It has a rich set of symbology and cartography options that enable you to create professional and attractive maps.
      • -
      • It has a comprehensive set of spatial analysis tools that allow you to perform various types of spatial queries, measurements, statistics, modeling, etc.
      • -
      • It has a flexible framework that allows you to extend its functionality by adding extensions, scripts, models, etc.
      • -
      • It has a strong online community that provides you with resources, support, tutorials, tips, etc.
      • -
      -

      System Requirements for Arcgis 9.3

      -

      Before you download Arcgis 9.3 crack free, you need to make sure that your computer meets the minimum system requirements for running the software. According to the official website, the minimum system requirements for Arcgis 9.3 are:

      - - - -
| Operating System | Processor | Memory | Disk Space | Display |
| --- | --- | --- | --- | --- |
| Windows XP (SP2 or later), Windows Vista (SP1 or later), Windows Server 2003 (SP1 or later), Windows Server 2008 | Pentium 4 or higher | 512 MB RAM (1 GB recommended) | 1.5 GB free disk space (2.5 GB recommended) | 24-bit color depth and 1024 x 768 resolution or higher |
      -

      If your computer does not meet these requirements, you may experience problems with installing or running Arcgis 9.3.

      -

      Why Do You Need a Crack for Arcgis 9.3?

      -

      As we mentioned earlier, Arcgis is not a cheap software to buy or maintain. The latest version of Arcgis Desktop (10.8) costs $1,500 for a single-use license or $700 per year for a concurrent-use license. And if you want to use some of the advanced extensions or tools, you have to pay extra fees.

      -

      Arcgis 9.3 is also not free to use. According to the official website, the price of Arcgis 9.3 was $1,500 for a single-use license or $500 per year for a concurrent-use license. And if you wanted to use some of the extensions or tools, you had to pay extra fees.

      -

      Therefore, many people who cannot afford to buy or maintain a license for Arcgis 9.3 are looking for ways to download Arcgis crack free. A crack is a modified file that bypasses the software's security system and allows you to use it without paying for a license.

      -

      The Cost of Arcgis 9.3 License

      -

      The cost of Arcgis 9.3 license depends on the type of license and the number of extensions or tools you want to use. According to the official website, the price of Arcgis 9.3 was as follows:

      - - - - - - - - - - - - - - - - - -
| Type of License | ArcGIS Desktop Basic (ArcView) | ArcGIS Desktop Standard (ArcEditor) | ArcGIS Desktop Advanced (ArcInfo) |
| --- | --- | --- | --- |
| Single Use | $1,500 | $7,000 | $15,000 |
| Concurrent Use (Annual) | $500 | $2,500 | $5,000 |
| ArcGIS Spatial Analyst Extension | $2,500 | $2,500 | $2,500 |
| ArcGIS 3D Analyst Extension | $2,500 | $2,500 | $2,500 |
| ArcGIS Geostatistical Analyst Extension | $2,500 | $2,500 | $2,500 |
| ArcGIS Network Analyst Extension | $2,500 | $2,500 | $2,500 |
| ArcGIS Tracking Analyst Extension | $2,500 | $2,500 | $2,500 |
| ArcGIS Publisher Extension | $1,000 | $1,000 | $1,000 |
| ArcGIS Data Interoperability Extension | $3,000 | $3,000 | $3,000 |
| ArcGIS Schematics Extension | $3,000 | $3,000 | $3,000 |
| ArcGIS Survey Analyst Extension | $3,000 | $3,000 | $3,000 |
| ArcScan for ArcGIS Extension | $1,000 | $1,000 | $1,000 |
| Maplex for ArcGIS Extension | $1,000 | $1,000 | $1,000 |
| Total Cost (Single Use) | $25,500 | $31,000 | $39,000 |
| Total Cost (Concurrent Use) | $17,500/year | $23,000/year | $31,000/year |
      -

As you can see, the cost of an Arcgis 9.3 license is quite high and may not be affordable for many people who need the software for their projects or studies.

      -

      The Risks of Using a Cracked Version of Arcgis 9.3

      -

      While downloading Arcgis crack free may seem like a tempting option to save money and use the software without any limitations, it also comes with some risks and disadvantages that you should be aware of before you decide to do so. Some of these risks and disadvantages are:

      -
        -
      • It is illegal and unethical to use a cracked version of Arcgis 9.3. You are violating the terms and conditions of the software license agreement and the intellectual property rights of the software developer. You may face legal consequences or penalties if you are caught using a cracked version of Arcgis 9.3.
      • -
      • It is unsafe and unreliable to use a cracked version of Arcgis 9.3. You are exposing your computer to potential viruses, malware, spyware, or other harmful programs that may be hidden in the crack file or the source website. You may also experience problems with installing or running the software, such as errors, crashes, bugs, or compatibility issues.
      • -
      • It is unsupported and outdated to use a cracked version of Arcgis 9.3. You are not eligible for any technical support or customer service from the software developer or the official website. You are also missing out on any updates or upgrades that may improve the performance or functionality of the software.
      • -
      • It is unfair and disrespectful to use a cracked version of Arcgis 9.3. You are depriving the software developer of their rightful income and recognition for their hard work and innovation. You are also undermining the value and quality of the software and the GIS industry.
      • -
      • It is unprofessional and irresponsible to use a cracked version of Arcgis 9.3. You are compromising the integrity and credibility of your work and your reputation as a GIS user or professional. You are also risking the accuracy and validity of your spatial data and analysis results.
      • -
      -

      Therefore, we do not recommend or endorse downloading Arcgis crack free as a way to use the software. We advise you to respect the law and the software developer and purchase a legitimate license for Arcgis 9.3 or any other version that suits your needs and budget.

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/rcajegas/WHO_1/index.html b/spaces/rcajegas/WHO_1/index.html deleted file mode 100644 index b7e1ff630bc1ed08f2e120061cae61c4ab47f476..0000000000000000000000000000000000000000 --- a/spaces/rcajegas/WHO_1/index.html +++ /dev/null @@ -1,43 +0,0 @@ - - - - Goals of the World Health Organization - - - - - - -
      -

      Goals of the World Health Organization

      -

      Loading...

      - GIF design -
      - Learn more about the WHO -
      - - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abelssoft Find My Files 2020 V2.01.1 With Crack [Latest].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abelssoft Find My Files 2020 V2.01.1 With Crack [Latest].md deleted file mode 100644 index dfb9f07ae06f4ef587d1af07c6043876bdbb1f9e..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abelssoft Find My Files 2020 V2.01.1 With Crack [Latest].md +++ /dev/null @@ -1,11 +0,0 @@ -

      Abelssoft Find My Files 2020 v2.01.1 With Crack [Latest]


      Download ››› https://urlgoal.com/2uCMGE



      -
-Dec 21, 2019 - Abelssoft Find My Files 2020 Crack Free Download. Use the Find My Files search app to search for and find files lightning fast ... Find My Files 2019. -Download. -Find My Files. -Find My Files is a program for finding and easily recovering files lost through accidental deletion, formatting, or damage. -Find My Files will help you get back files that have been deleted or lost due to a virus attack, formatting, or hard drive corruption. -With Find My Files you can now get back your important files even after they have been deleted. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atpl Radio Navigation Cbt __FULL__ Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atpl Radio Navigation Cbt __FULL__ Download.md deleted file mode 100644 index ed0465e3f34904a22033ce6f5f8cdb25d96b64ed..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atpl Radio Navigation Cbt __FULL__ Download.md +++ /dev/null @@ -1,25 +0,0 @@ - -

      How to Download ATPL Radio Navigation CBT Course

      -

      If you are an aspiring pilot who wants to prepare for the EASA ATPL 062 Radio Navigation exam, you might be interested in downloading a computer-based training (CBT) course that covers all the topics and requirements of this subject. A CBT course is a convenient and effective way of learning online, as it provides interactive and visually attractive content that helps you understand and retain the concepts better.

      -

      atpl radio navigation cbt download


      DOWNLOAD ····· https://urlgoal.com/2uCMiY



      -

      There are many sources of ATPL Radio Navigation CBT courses online, but not all of them are reliable or up-to-date. Some of them might be outdated, incomplete, or not compliant with the latest EASA regulations and 100 KSA (knowledge, skills and attitudes) requirements. Therefore, you need to be careful when choosing a CBT course provider and make sure that they offer quality content that meets the standards and expectations of the examiners.

      -

      One of the best sources of ATPL Radio Navigation CBT courses is Aviation Insider[^2^], a platform that offers a next-generation E-learning experience for pilots. Their ATPL (A) CBT course is 100% compliant with EASA regulations and the latest 100 KSA requirements. It covers all the topics of the ATPL 062 Radio Navigation syllabus, such as performance-based navigation (PBN), navigation specifications, RNAV and RNP systems, VOR, DME, ADF, ILS, MLS, GNSS, INS, FMS, and more. The course also includes quizzes, mock exams, animations, videos, and diagrams to help you test your knowledge and prepare for the exam.

      -

      To download the ATPL Radio Navigation CBT course from Aviation Insider[^2^], you need to follow these steps:

      -

      -
        -
      1. Visit their website at https://aviationinsider.com/product/atpla-cbt-course/ and add the course to your basket.
      2. -
      3. Proceed to checkout and enter your personal and payment details. You can pay with PayPal or credit card.
      4. -
      5. After completing the payment, you will receive an email with a link to access your account and download the course.
      6. -
      7. Download the course to your computer or mobile device and start learning at your own pace.
      8. -
      -

      If you are looking for a free alternative to download ATPL Radio Navigation CBT course, you can try Oxford Complete ATPL Study Pack CBT[^1^], which is available on Google Drive. However, this course might not be as comprehensive or updated as Aviation Insider's course[^2^], so use it at your own risk.

      -

      To download Oxford Complete ATPL Study Pack CBT[^1^] from Google Drive, you need to follow these steps:

      -
        -
      1. Visit this link: https://drive.google.com/file/d/0BzdprlE5s-nsME5UV2JZcnJRY2M/preview?pli=1 and click on the download icon at the top right corner.
      2. -
      3. Save the file to your computer or mobile device. It is a PDF file that contains links to 23 CD-ROMs that contain the CBT course.
      4. -
      5. Open the PDF file and click on each link to download each CD-ROM separately. You will need a software that can extract ISO files, such as WinRAR or 7-Zip.
      6. -
      7. Extract each ISO file to a folder on your computer or mobile device and run the setup.exe file to install the CBT course.
      8. -
      -

      We hope this article has helped you find a suitable source of ATPL Radio Navigation CBT course that you can download and use for your exam preparation. Good luck!

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daniel Sipper Planeacion Y Control De La Produccion Pdf.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daniel Sipper Planeacion Y Control De La Produccion Pdf.md deleted file mode 100644 index 6ba7e434ff8a19a946bcc9b045ce4c8dd03dc6ad..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daniel Sipper Planeacion Y Control De La Produccion Pdf.md +++ /dev/null @@ -1,12 +0,0 @@ -

      Daniel Sipper Planeacion Y Control De La Produccion Pdf


      Download 🔗 https://urlgoal.com/2uCN9B



      -
-Daniel Sipper - Planeación y Control de la Producción - Free download as PDF (.pdf), text file (.txt) or read online for free. -Planeacion y Control de la Produccion Daniel Sipper.pdf. 19 3 857KB Read more... -Pension de Produccion. 22 893KB Read more... 8a78ff9644
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Humpty Sharma Ki Dulhania Mp4 Full Movie.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Humpty Sharma Ki Dulhania Mp4 Full Movie.md deleted file mode 100644 index 084d5ff939d19be9b53ffd34183723ac83916566..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Humpty Sharma Ki Dulhania Mp4 Full Movie.md +++ /dev/null @@ -1,54 +0,0 @@ -

      download Humpty Sharma Ki Dulhania mp4 full movie


      DOWNLOAD ••• https://urlgoal.com/2uCM2t



      - -Story: - -It's a story about Kavya Pratap Singh, a chirpy girl, from Ambala who decides to go for shopping for her wedding in Delhi. She meets two young, loveable and cheeky fellows, Sattu and Rohan. With their help and help of the train, she reaches Delhi and is delighted with the experience she gets. She also falls for Delhi and misses her people in Ambala. When she comes back, she gets a job as a receptionist. Sattu is in her college and with time they get closer and he proposes her but later she doesn't give an answer. Rohan, however, keeps on calling her and they even go for a movie. In the end, she chooses him over Sattu, and she is heartbroken when she comes to know about her dad and they are no longer together. - -Cast: - - Sonali Phukan as Kavya Pratap Singh/Kavya Pratap I - - Shefali Zariwala as Sudha - - Rahul Dev as Arjun Pratap Singh/Arjun Pratap I - - Mahesh Anand as Sattu - - Ankur Dave as Rohan - - Jeetu Kamble as Rakesh - - Harisree Ashoknivash as Sudha's dad - - Ritu Shivpuri as Kavya's mother - - Akash Desai as Shekhar - - Bharat Jadhav as Robert - - Swati Chitnis as Rose - -Production - -The film was shot in Ambala in February 2008. - -Music - -The music of the film was composed by Rajnish Tewari and lyrics were written by Naushad Ali. - -Release - -Box office - -Kavya Pratap was one of the highest-grossing Marathi films of 2009. It was released in 2007. - -Critical reception - -The Hindu wrote, "It's a story that is told in a superb manner and creates a very positive impact. It's a treat for all those who like to watch films with a positive message." - -India Today wrote, "Kavya Pratap is a nice tale, which has a heart-warming message at its core. It doesn't try too hard to be different and it does what it sets out to do — to entertain." - -Times of India wrote, "A fresh and positive story, Kavya Pratap may prove to be a breath 4fefd39f24
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (The Martian (English) Movie Hindi Du) ((FULL)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (The Martian (English) Movie Hindi Du) ((FULL)).md deleted file mode 100644 index 873d1660ff00a518333831562bf203566bfe0737..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (The Martian (English) Movie Hindi Du) ((FULL)).md +++ /dev/null @@ -1,56 +0,0 @@ -
      -

      HD Online Player (The Martian (English) movie hindi du)

      -

      If you are looking for a thrilling and inspiring sci-fi adventure movie, you might want to watch The Martian, starring Matt Damon as an astronaut who gets stranded on Mars and has to survive against all odds. The movie is based on the best-selling novel by Andy Weir and directed by Ridley Scott. It was released in 2015 and received critical acclaim and several awards nominations.

      -

      HD Online Player (The Martian (English) movie hindi du)


      Download Zip - https://urlgoal.com/2uCJP1



      -

But what if you want to watch The Martian in Hindi-English dual audio? Maybe you are more comfortable with the Hindi language, or you want to enjoy the movie with your friends or family who speak Hindi. Or maybe you just want to experience the movie in a different way. Whatever your reason, you might be wondering how to find an HD online player that can stream or download The Martian in Hindi-English dual audio.

      -

      Well, you are in luck, because we have done the research for you and found some of the best options for watching The Martian in Hindi-English dual audio online. Here are some of the websites and platforms that offer this service:

      -
        -
      • Disney+ Hotstar: This is one of the most popular streaming platforms in India, and it has a huge collection of movies and shows in various languages, including Hindi-English dual audio. You can watch The Martian on Disney+ Hotstar with a subscription or a VIP plan. The quality is excellent and the subtitles are optional.
      • -
      • JustWatch: This is a website that helps you find where to watch movies and shows online legally. You can search for The Martian and filter by language, quality, price, and provider. You will see that there are several options for renting or buying The Martian in Hindi-English dual audio on platforms like Google Play Movies, YouTube, and Apple TV. You can compare the prices and choose the best deal for you.
      • -
      • Archive.org: This is a website that hosts millions of free books, movies, music, and more. You can find The Martian in Hindi-English dual audio on Archive.org as a free download or stream. The quality is decent and the file size is reasonable. However, you should be careful about the legality and safety of downloading or streaming from this website.
      • -
      -

      These are some of the best ways to watch The Martian in Hindi-English dual audio online. We hope you enjoy this amazing movie and have a great time watching it.

      -

      Why You Should Watch The Martian in Hindi-English Dual Audio

      -

      Now that you know where to watch The Martian in Hindi-English dual audio online, you might be wondering why you should watch it in the first place. After all, there are many other movies and shows that you can choose from. What makes The Martian so special and worth your time?

      -

      -

      Well, there are many reasons why The Martian is a great movie that you should not miss. Here are some of them:

      -
        -
      • The Martian is a smart, thrilling, and surprisingly funny movie that will keep you on the edge of your seat and make you laugh at the same time. The movie has a perfect balance of drama, humor, and suspense, and never gets boring or predictable. The movie also has a lot of scientific accuracy and realism, which makes it more engaging and believable.
      • -
      • The Martian has an amazing cast and performance by Matt Damon, who carries the movie with his charisma and charm. Damon plays Mark Watney, an astronaut who is left behind on Mars and has to survive with limited resources and no communication with Earth. Damon portrays Watney as a witty, optimistic, and resourceful character who never gives up hope and makes the best out of his situation. Damon's performance is captivating and inspiring, and he makes you care about his character and his fate.
      • -
      • The Martian is a movie that celebrates human spirit, ingenuity, and cooperation. The movie shows how Watney uses his knowledge, skills, and creativity to overcome various challenges and obstacles on Mars. The movie also shows how NASA and other international agencies work together to find a way to bring Watney back home. The movie delivers a positive message about the power of science, teamwork, and perseverance.
      • -
• The Martian is a movie that can be enjoyed by anyone, regardless of their age, gender, or background. The movie has something for everyone: action, adventure, comedy, drama, sci-fi, and more. The movie is also suitable for family viewing, as it avoids graphic violence and explicit content; it is rated PG-13 for some strong language and injury images.
      • -
      -

      As you can see, The Martian is a movie that deserves your attention and appreciation. Watching it in Hindi-English dual audio will enhance your experience and make it more enjoyable for you and your loved ones. So what are you waiting for? Grab your HD online player and watch The Martian in Hindi-English dual audio today!

      -

      Some Fun Facts About The Martian Movie

      -

      Watching The Martian in Hindi-English dual audio online is not only entertaining, but also educational. You can learn a lot about Mars, science, and space exploration from this movie. But did you know that there are also some fun facts and trivia about the movie that you might not be aware of? Here are some of them:

      -
        -
      • A page of the script flew on NASA's Orion spacecraft. According to screenwriter Drew Goddard, NASA put a page of the script inside the Orion Multi-Purpose Crew Vehicle when it made its first test flight in December 2014. Orion may one day carry humans to Mars, so the gesture was fitting. Goddard said director Ridley Scott would sketch storyboard images and other drawings on the script pages, and the page that was chosen featured one of Scott's drawings of Watney.
      • -
      • Scientists from the European Space Agency (ESA) visited the set while the Philae lander was touching down on a comet. Chiwetel Ejiofor, who plays a NASA mission director named Vincent Kapoor, seems to have been deeply influenced by an ESA scientist who visited the movie set as a science consultant. Ejiofor got to see how the scientist reacted when he heard the news that the Rosetta mission successfully set down the Philae lander on the surface of comet 67P/Churyumov-Gerasimenko on Nov. 12, 2014. This was the first time humans had soft-landed a probe on a comet.
      • -
      • The \"Mars\" landscapes can be seen in other Martian movies and \"Lawrence of Arabia.\" The majority of The Martian was filmed on indoor sets in Budapest, Hungary, according to the filmmakers, but many of the exterior shots of Mars were filmed in Wadi Rum, also known as the Valley of the Moon, in southern Jordan. This same location was used for certain shots in the epic Hollywood classic \"Lawrence of Arabia\" (1962), according to Scott. (While he didn't mention it at the panel session, according to the Internet Movie Database, Wadi Rum was also a location in Scott's film \"Prometheus\" (2012)).
      • -
      -

      These are some of the fun facts and trivia about The Martian movie that you might find interesting and amusing. Watching it in Hindi-English dual audio online will make you appreciate it even more and enjoy it with your friends and family.

      -

      Some Awards and Nominations Received by The Martian Movie

      -

Watching The Martian in Hindi-English dual audio online is a great way not only to enjoy this movie, but also to appreciate its excellence and achievements. The Martian is a movie that has been widely praised by critics and audiences alike, and has received many awards and nominations from various prestigious organizations and associations. Here are some of them:

      -
        -
      • The Martian was nominated for seven Academy Awards, including Best Picture, Best Actor (for Matt Damon), Best Adapted Screenplay (for Drew Goddard), Best Sound Editing, Best Sound Mixing, Best Production Design, and Best Visual Effects.
      • -
      • The Martian won two Golden Globe Awards, for Best Motion Picture – Musical or Comedy, and Best Actor – Motion Picture Musical or Comedy (for Matt Damon). The movie was also nominated for Best Director – Motion Picture (for Ridley Scott).
      • -
      • The Martian was nominated for six BAFTA Awards, including Best Film, Best Leading Actor (for Matt Damon), Best Director (for Ridley Scott), Best Editing, Best Production Design, and Best Sound.
      • -
• The Martian was named Film of the Year by the National Board of Review, also winning Best Director (for Ridley Scott), Best Actor (for Matt Damon), and Best Adapted Screenplay (for Drew Goddard).
      • -
      • The Martian was nominated for nine Critics' Choice Awards, including Best Picture, Best Actor (for Matt Damon), Best Director (for Ridley Scott), Best Adapted Screenplay (for Drew Goddard), Best Cinematography, Best Production Design, Best Editing, Best Visual Effects, and Best Sci-Fi/Horror Movie.
      • -
      -

      These are some of the awards and nominations received by The Martian movie that show how remarkable and outstanding this movie is. Watching it in Hindi-English dual audio online will make you admire it more and have a wonderful time with it.

      -

      Some Memorable Quotes from The Martian Movie

      -

      Watching The Martian in Hindi-English dual audio online is not only a thrilling and inspiring experience, but also a humorous and witty one. The Martian is a movie that has many memorable quotes that will make you laugh, think, and feel. Here are some of them:

      -
        -
      • \"I don't want to come off as arrogant here, but I'm the greatest botanist on this planet.\" - Mark Watney
      • -
      • \"I've been thinking about laws on Mars. There's an international treaty saying that no country can lay claim to anything that's not on Earth. By another treaty if you're not in any country's territory, maritime law aplies. So Mars is international waters. Now, NASA is an American non-military organization, it owns the Hab. But the second I walk outside I'm in international waters. So Here's the cool part. I'm about to leave for the Schiaparelli crater where I'm going to commandeer the Ares IV lander. Nobody explicitly gave me permission to do this, and they can't until I'm on board the Ares IV. So I'm going to be taking a craft over in international waters without permission, which by definition... makes me a pirate. Mark Watney: Space Pirate.\" - Mark Watney
      • -
      • \"If I want water, I'll have to make it from scratch. Fortunately, I know the recipe: Take hydrogen. Add oxygen. Burn.\" - Mark Watney
      • -
      • \"They say once you grow crops somewhere, you have officially 'colonized' it. So, technically, I colonized Mars. In your face, Neil Armstrong!\" - Mark Watney
      • -
      • \"Tell Commander Lewis, disco sucks.\" - Mark Watney
      • -
      -

      These are some of the memorable quotes from The Martian movie that will make you enjoy it more and appreciate its humor and wit. Watching it in Hindi-English dual audio online will make you laugh and smile with it.

      -

      Conclusion

      -

      The Martian is a movie that you should not miss if you love science fiction, adventure, and survival stories. It is a movie that will keep you on the edge of your seat, make you laugh, and inspire you. It is a movie that has been praised by critics and audiences alike, and has received many awards and nominations. It is a movie that you can watch in Hindi-English dual audio online with your HD online player, and have a great time with it.

      -

      So what are you waiting for? Grab your HD online player and watch The Martian in Hindi-English dual audio today! You will not regret it!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/rehanuddin/02-GradioArt-From-Text-And-Images/app.py b/spaces/rehanuddin/02-GradioArt-From-Text-And-Images/app.py deleted file mode 100644 index 10939427025b17176765402185cd11e23caa1523..0000000000000000000000000000000000000000 --- a/spaces/rehanuddin/02-GradioArt-From-Text-And-Images/app.py +++ /dev/null @@ -1,224 +0,0 @@ -import os - -os.system("git clone --recursive https://github.com/JD-P/cloob-latent-diffusion") -os.system("cd cloob-latent-diffusion;pip install omegaconf pillow pytorch-lightning einops wandb ftfy regex ./CLIP") - -import argparse -from functools import partial -from pathlib import Path -import sys -sys.path.append('./cloob-latent-diffusion') -sys.path.append('./cloob-latent-diffusion/cloob-training') -sys.path.append('./cloob-latent-diffusion/latent-diffusion') -sys.path.append('./cloob-latent-diffusion/taming-transformers') -sys.path.append('./cloob-latent-diffusion/v-diffusion-pytorch') -from omegaconf import OmegaConf -from PIL import Image -import torch -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms import functional as TF -from tqdm import trange -from CLIP import clip -from cloob_training import model_pt, pretrained -import ldm.models.autoencoder -from diffusion import sampling, utils -import train_latent_diffusion as train -from huggingface_hub import hf_hub_url, cached_download -import random - -# Download the model files -checkpoint = cached_download(hf_hub_url("huggan/distill-ccld-wa", filename="model_student.ckpt")) -ae_model_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.ckpt")) -ae_config_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.yaml")) - -# Define a few utility functions - - -def parse_prompt(prompt, default_weight=3.): - if prompt.startswith('http://') or prompt.startswith('https://'): - vals = prompt.rsplit(':', 2) - vals = [vals[0] + ':' + vals[1], *vals[2:]] - else: - vals = prompt.rsplit(':', 1) - vals = vals + ['', default_weight][len(vals):] - return vals[0], float(vals[1]) - - -def resize_and_center_crop(image, size): - fac = max(size[0] / image.size[0], size[1] / image.size[1]) - image = image.resize((int(fac * image.size[0]), int(fac * image.size[1])), Image.LANCZOS) - return TF.center_crop(image, size[::-1]) - - -# Load the models -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -print('Using device:', device) -print('loading models') - -# autoencoder -ae_config = OmegaConf.load(ae_config_path) -ae_model = ldm.models.autoencoder.AutoencoderKL(**ae_config.model.params) -ae_model.eval().requires_grad_(False).to(device) -ae_model.load_state_dict(torch.load(ae_model_path)) -n_ch, side_y, side_x = 4, 32, 32 - -# diffusion model -model = train.DiffusionModel(192, [1,1,2,2], autoencoder_scale=torch.tensor(4.3084)) -model.load_state_dict(torch.load(checkpoint, map_location='cpu')) -model = model.to(device).eval().requires_grad_(False) - -# CLOOB -cloob_config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs') -cloob = model_pt.get_pt_model(cloob_config) -checkpoint = pretrained.download_checkpoint(cloob_config) -cloob.load_state_dict(model_pt.get_pt_params(cloob_config, checkpoint)) -cloob.eval().requires_grad_(False).to(device) - - -# The key function: returns a list of n PIL images -def generate(n=1, prompts=['a red circle'], images=[], seed=42, steps=15, - method='plms', eta=None): - zero_embed = torch.zeros([1, 
cloob.config['d_embed']], device=device) - target_embeds, weights = [zero_embed], [] - - for prompt in prompts: - txt, weight = parse_prompt(prompt) - target_embeds.append(cloob.text_encoder(cloob.tokenize(txt).to(device)).float()) - weights.append(weight) - - for prompt in images: - path, weight = parse_prompt(prompt) - img = Image.open(utils.fetch(path)).convert('RGB') - clip_size = cloob.config['image_encoder']['image_size'] - img = resize_and_center_crop(img, (clip_size, clip_size)) - batch = TF.to_tensor(img)[None].to(device) - embed = F.normalize(cloob.image_encoder(cloob.normalize(batch)).float(), dim=-1) - target_embeds.append(embed) - weights.append(weight) - - weights = torch.tensor([1 - sum(weights), *weights], device=device) - - torch.manual_seed(seed) - - def cfg_model_fn(x, t): - n = x.shape[0] - n_conds = len(target_embeds) - x_in = x.repeat([n_conds, 1, 1, 1]) - t_in = t.repeat([n_conds]) - clip_embed_in = torch.cat([*target_embeds]).repeat_interleave(n, 0) - vs = model(x_in, t_in, clip_embed_in).view([n_conds, n, *x.shape[1:]]) - v = vs.mul(weights[:, None, None, None, None]).sum(0) - return v - - def run(x, steps): - if method == 'ddpm': - return sampling.sample(cfg_model_fn, x, steps, 1., {}) - if method == 'ddim': - return sampling.sample(cfg_model_fn, x, steps, eta, {}) - if method == 'prk': - return sampling.prk_sample(cfg_model_fn, x, steps, {}) - if method == 'plms': - return sampling.plms_sample(cfg_model_fn, x, steps, {}) - if method == 'pie': - return sampling.pie_sample(cfg_model_fn, x, steps, {}) - if method == 'plms2': - return sampling.plms2_sample(cfg_model_fn, x, steps, {}) - assert False - - batch_size = n - x = torch.randn([n, n_ch, side_y, side_x], device=device) - t = torch.linspace(1, 0, steps + 1, device=device)[:-1] - steps = utils.get_spliced_ddpm_cosine_schedule(t) - pil_ims = [] - for i in trange(0, n, batch_size): - cur_batch_size = min(n - i, batch_size) - out_latents = run(x[i:i+cur_batch_size], steps) - outs = ae_model.decode(out_latents * torch.tensor(2.55).to(device)) - for j, out in enumerate(outs): - pil_ims.append(utils.to_pil_image(out)) - - return pil_ims - - -import gradio as gr - -def gen_ims(prompt, im_prompt=None, seed=None, n_steps=10, method='plms'): - if seed == None : - seed = random.randint(0, 10000) - print( prompt, im_prompt, seed, n_steps) - prompts = [prompt] - im_prompts = [] - if im_prompt != None: - im_prompts = [im_prompt] - pil_ims = generate(n=1, prompts=prompts, images=im_prompts, seed=seed, steps=n_steps, method=method) - return pil_ims[0] - -iface = gr.Interface(fn=gen_ims, - inputs=[#gr.inputs.Slider(minimum=1, maximum=1, step=1, default=1,label="Number of images"), - #gr.inputs.Slider(minimum=0, maximum=200, step=1, label='Random seed', default=0), - gr.inputs.Textbox(label="Text prompt"), - gr.inputs.Image(optional=True, label="Image prompt", type='filepath'), - #gr.inputs.Slider(minimum=10, maximum=35, step=1, default=15,label="Number of steps") - ], - outputs=[gr.outputs.Image(type="pil", label="Generated Image")], - examples=[ - ["Futurism, in the style of Wassily Kandinsky"], - ["Art Nouveau, in the style of John Singer Sargent"], - ["Surrealism, in the style of Edgar Degas"], - ["Expressionism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Egon Schiele"], - ["Neoclassicism, in the style of Gustav Klimt"], - ["Cubism, in the style of Gustav Klimt"], - ["Op Art, in the style of Marc Chagall"], - ["Romanticism, in the style of M.C. Escher"], - ["Futurism, in the style of M.C. 
Escher"], - ["Abstract Art, in the style of M.C. Escher"], - ["Mannerism, in the style of Paul Klee"], - ["Romanesque Art, in the style of Leonardo da Vinci"], - ["High Renaissance, in the style of Rembrandt"], - ["Magic Realism, in the style of Gustave Dore"], - ["Realism, in the style of Jean-Michel Basquiat"], - ["Art Nouveau, in the style of Paul Gauguin"], - ["Avant-garde, in the style of Pierre-Auguste Renoir"], - ["Baroque, in the style of Edward Hopper"], - ["Post-Impressionism, in the style of Wassily Kandinsky"], - ["Naturalism, in the style of Rene Magritte"], - ["Constructivism, in the style of Paul Cezanne"], - ["Abstract Expressionism, in the style of Henri Matisse"], - ["Pop Art, in the style of Vincent van Gogh"], - ["Futurism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Zdzislaw Beksinski"], - ['Surrealism, in the style of Salvador Dali'], - ["Aaron Wacker, oil on canvas"], - ["abstract"], - ["landscape"], - ["portrait"], - ["sculpture"], - ["genre painting"], - ["installation"], - ["photo"], - ["figurative"], - ["illustration"], - ["still life"], - ["history painting"], - ["cityscape"], - ["marina"], - ["animal painting"], - ["design"], - ["calligraphy"], - ["symbolic painting"], - ["graffiti"], - ["performance"], - ["mythological painting"], - ["battle painting"], - ["self-portrait"], - ["Impressionism, oil on canvas"] - ], - title='Art Generator and Style Mixer from 🧠 Cloob and 🎨 WikiArt - Visual Art Encyclopedia:', - description="Trained on images from the [WikiArt](https://www.wikiart.org/) dataset, comprised of visual arts", - article = 'Model used is: [model card](https://huggingface.co/huggan/distill-ccld-wa)..' - -) -iface.launch(enable_queue=True) # , debug=True for colab debugging \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/fovea_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/fovea_head.py deleted file mode 100644 index 8be7fc94c767005da5d31d201dcc55fb760b5c53..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,385 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d -from mmcv.runner import BaseModule - -from mmdet.core import multi_apply -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(BaseModule): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAlign, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class FoveaHead(AnchorFreeHead): - """FoveaBox: Beyond Anchor-based Object Detector - https://arxiv.org/abs/1904.03797 - """ - - def __init__(self, - num_classes, - in_channels, - base_edge_list=(16, 32, 64, 128, 256), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, - 512)), - sigma=0.4, - with_deform=False, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.base_edge_list = base_edge_list - self.scale_ranges = scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__(num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bbox_list, - gt_label_list, - img_metas, - gt_bboxes_ignore=None): - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.prior_generator.grid_priors( - featmap_sizes, - 
dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - gt_bbox_list, gt_label_list, featmap_sizes, points) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_weights = pos_bbox_targets.new_zeros( - pos_bbox_targets.size()) + 1.0 - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points): - label_list, bbox_target_list = multi_apply( - self._get_target_single, - gt_bbox_list, - gt_label_list, - featmap_size_list=featmap_sizes, - point_list=points) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_target_single(self, - gt_bboxes_raw, - gt_labels_raw, - featmap_size_list=None, - point_list=None): - - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - points in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, point_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - points = points.view(*featmap_size, 2) - x, y = points[..., 0], points[..., 1] - labels = gt_labels_raw.new_zeros(featmap_size) + self.num_classes - bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1], - 4) + 1 - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long(). 
\ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. Fovea head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. 
- """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, base_len, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, self.strides, - self.base_edge_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, base_len, img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms) - - def _bbox_decode(self, priors, bbox_pred, base_len, max_shape): - bbox_pred = bbox_pred.exp() - - y = priors[:, 1] - x = priors[:, 0] - x1 = (x - base_len * bbox_pred[:, 0]). \ - clamp(min=0, max=max_shape[1] - 1) - y1 = (y - base_len * bbox_pred[:, 1]). \ - clamp(min=0, max=max_shape[0] - 1) - x2 = (x + base_len * bbox_pred[:, 2]). \ - clamp(min=0, max=max_shape[1] - 1) - y2 = (y + base_len * bbox_pred[:, 3]). \ - clamp(min=0, max=max_shape[0] - 1) - decoded_bboxes = torch.stack([x1, y1, x2, y2], -1) - return decoded_bboxes - - def _get_points_single(self, *args, **kwargs): - """Get points according to feature map size. - - This function will be deprecated soon. - """ - warnings.warn( - '`_get_points_single` in `FoveaHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - y, x = super()._get_points_single(*args, **kwargs) - return y + 0.5, x + 0.5 diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/dice_loss.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/dice_loss.py deleted file mode 100644 index 585beeaf1c6bb86205f40c73a54e2826edc1fe5d..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/dice_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def dice_loss(pred, - target, - weight=None, - eps=1e-3, - reduction='mean', - naive_dice=False, - avg_factor=None): - """Calculate dice loss, there are two forms of dice loss is supported: - - - the one proposed in `V-Net: Fully Convolutional Neural - Networks for Volumetric Medical Image Segmentation - `_. - - the dice loss in which the power of the number in the - denominator is the first power instead of the second - power. 
- - Args: - pred (torch.Tensor): The prediction, has a shape (n, *) - target (torch.Tensor): The learning label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - eps (float): Avoid dividing by zero. Default: 1e-3. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - Options are "none", "mean" and "sum". - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power.Defaults to False. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - - input = pred.flatten(1) - target = target.flatten(1).float() - - a = torch.sum(input * target, 1) - if naive_dice: - b = torch.sum(input, 1) - c = torch.sum(target, 1) - d = (2 * a + eps) / (b + c + eps) - else: - b = torch.sum(input * input, 1) + eps - c = torch.sum(target * target, 1) + eps - d = (2 * a) / (b + c) - - loss = 1 - d - if weight is not None: - assert weight.ndim == loss.ndim - assert len(weight) == len(pred) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - activate=True, - reduction='mean', - naive_dice=False, - loss_weight=1.0, - eps=1e-3): - """Compute dice loss. - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - activate (bool): Whether to activate the predictions inside, - this will disable the inside sigmoid operation. - Defaults to True. - reduction (str, optional): The method used - to reduce the loss. Options are "none", - "mean" and "sum". Defaults to 'mean'. - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power. Defaults to False. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - eps (float): Avoid dividing by zero. Defaults to 1e-3. - """ - - super(DiceLoss, self).__init__() - self.use_sigmoid = use_sigmoid - self.reduction = reduction - self.naive_dice = naive_dice - self.loss_weight = loss_weight - self.eps = eps - self.activate = activate - - def forward(self, - pred, - target, - weight=None, - reduction_override=None, - avg_factor=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction, has a shape (n, *). - target (torch.Tensor): The label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - if self.activate: - if self.use_sigmoid: - pred = pred.sigmoid() - else: - raise NotImplementedError - - loss = self.loss_weight * dice_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - naive_dice=self.naive_dice, - avg_factor=avg_factor) - - return loss diff --git a/spaces/rosenthal/chess/chessfenbot/message_template.py b/spaces/rosenthal/chess/chessfenbot/message_template.py deleted file mode 100644 index 34e72d392b1a7376f918b2e7b75767379d0bcdb8..0000000000000000000000000000000000000000 --- a/spaces/rosenthal/chess/chessfenbot/message_template.py +++ /dev/null @@ -1,38 +0,0 @@ -# -*- coding: utf-8 -*- -# Response message template -MESSAGE_TEMPLATE = """[◕ _ ◕]^* - -I attempted to generate a [chessboard layout]({unaligned_fen_img_link}) from the posted image[^(what I saw)]({visualize_link}), -with a certainty of **{certainty:.3f}%**. *{pithy_message}* - -- - -◇ White to play : [Analysis]({lichess_analysis_w}) | [Editor]({lichess_editor_w}) -`{fen_w}` - -- - -◆ Black to play : [Analysis]({lichess_analysis_b}) | [Editor]({lichess_editor_b}) -`{fen_b}` - -- - -> ▾ Links for when pieces are inverted on the board: -> -> White to play : [Analysis]({inverted_lichess_analysis_w}) | [Editor]({inverted_lichess_editor_w}) -> `{inverted_fen_w}` -> -> Black to play : [Analysis]({inverted_lichess_analysis_b}) | [Editor]({inverted_lichess_editor_b}) -> `{inverted_fen_b}` - -- - - ---- - -^(Yes I am a machine learning bot | ) -[^(`How I work`)](http://github.com/Elucidation/tensorflow_chessbot 'Must go deeper') -^( | )[^(`Try your own images`)](http://tetration.xyz/ChessboardFenTensorflowJs/) -^( | Reply with a corrected FEN to add to my next training dataset) - -""" \ No newline at end of file diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/rope.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/rope.py deleted file mode 100644 index 4b8c70b9aba28eeb53d12ddc3de8852492847808..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/rope.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from torch import nn -import torch - - -class XPos(nn.Module): - """Length-extrapolatable positional embedding (xPos) from [Sun et al 2022](https://arxiv.org/abs/2212.10554v1). - This applies an exponential decay to the RoPE rotation matrix. - - Args: - dim (int): Embedding dimension. - smoothing (float): Smoothing factor applied to the decay rates. - base_scale (int): Base decay rate, given in terms of scaling time. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype): dtype to use to generate the embedding. 
- """ - def __init__(self, dim: int, smoothing: float = 0.4, base_scale: int = 512, - device=None, dtype: torch.dtype = torch.float32): - super().__init__() - assert dim % 2 == 0 - assert dtype in [torch.float64, torch.float32] - self.dtype = dtype - self.base_scale = base_scale - - half_dim = dim // 2 - adim = torch.arange(half_dim, device=device, dtype=dtype) - decay_rates = (adim / half_dim + smoothing) / (1.0 + smoothing) - self.register_buffer("decay_rates", decay_rates) - self.decay: tp.Optional[torch.Tensor] = None - - def get_decay(self, start: int, end: int): - """Create complex decay tensor, cache values for fast computation. - """ - if self.decay is None or end > self.decay.shape[0]: - assert isinstance(self.decay_rates, torch.Tensor) # Satisfy type checker. - idx = torch.arange(end, device=self.decay_rates.device, dtype=self.dtype) - power = idx / self.base_scale - scale = self.decay_rates ** power.unsqueeze(-1) - self.decay = torch.polar(scale, torch.zeros_like(scale)) - return self.decay[start:end] # [T, C/2] - - -class RotaryEmbedding(nn.Module): - """Rotary positional embedding (RoPE) from [Su et al 2022](https://arxiv.org/abs/2104.09864). - - Args: - dim (int): Embedding dimension (twice the number of frequencies). - max_period (float): Maximum period of the rotation frequencies. - xpos (bool): Use xPos, applies an exponential decay to rotation matrix. - scale (float): Scale of positional embedding, set to 0 to deactivate. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype): dtype to use to generate the embedding. - """ - def __init__(self, dim: int, max_period: float = 10000.0, xpos: bool = False, - scale: float = 1.0, device=None, dtype: torch.dtype = torch.float32): - super().__init__() - assert dim % 2 == 0 - self.scale = scale - assert dtype in [torch.float64, torch.float32] - self.dtype = dtype - - adim = torch.arange(0, dim, 2, device=device, dtype=dtype)[: (dim // 2)] - frequencies = 1.0 / (max_period ** (adim / dim)) - self.register_buffer("frequencies", frequencies) - self.rotation: tp.Optional[torch.Tensor] = None - - self.xpos = XPos(dim, device=device, dtype=dtype) if xpos else None - - def get_rotation(self, start: int, end: int): - """Create complex rotation tensor, cache values for fast computation. - """ - if self.rotation is None or end > self.rotation.shape[0]: - assert isinstance(self.frequencies, torch.Tensor) # Satisfy type checker. - idx = torch.arange(end, device=self.frequencies.device, dtype=self.dtype) - angles = torch.outer(idx, self.frequencies) - self.rotation = torch.polar(torch.ones_like(angles), angles) - return self.rotation[start:end] - - def rotate(self, x: torch.Tensor, start: int = 0, invert_decay: bool = False): - """Apply rope rotation to query or key tensor. - """ - T = x.shape[1] - rotation = self.get_rotation(start, start + T).unsqueeze(0).unsqueeze(2) - - if self.xpos: - decay = self.xpos.get_decay(start, start + T).unsqueeze(0).unsqueeze(2) - else: - decay = 1.0 - - if invert_decay: - decay = decay ** -1 - - x_complex = torch.view_as_complex(x.to(self.dtype).reshape(*x.shape[:-1], -1, 2)) - scaled_rotation = (rotation * decay) * self.scale + (1.0 - self.scale) - x_out = torch.view_as_real(x_complex * scaled_rotation).flatten(-2) - - return x_out.type_as(x) - - def rotate_qk(self, query: torch.Tensor, key: torch.Tensor, start: int = 0): - """ Apply rope rotation to both query and key tensors. - Supports streaming mode, in which query and key are not expected to have the same shape. 
- In streaming mode, key will be of legnth [P + C] with P the cached past timesteps, but - query will be [C] (typically C == 1). - - Args: - query (torch.Tensor): Query to rotate. - key (torch.Tensor): Key to rotate. - start (int): Start index of the sequence for time offset. - """ - query_timesteps = query.shape[1] - key_timesteps = key.shape[1] - streaming_offset = key_timesteps - query_timesteps - - query_out = self.rotate(query, start + streaming_offset) - key_out = self.rotate(key, start, invert_decay=True) - - return query_out, key_out diff --git a/spaces/sadafpy/Malaria-Infected-Cell-Predictor/README.md b/spaces/sadafpy/Malaria-Infected-Cell-Predictor/README.md deleted file mode 100644 index 3b44f21def57829dc646e2bff876004fa4a0355b..0000000000000000000000000000000000000000 --- a/spaces/sadafpy/Malaria-Infected-Cell-Predictor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Malaria Infected Cell Predictor -emoji: 🦟 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.3 -app_file: malaria_predictor.py -pinned: false -license: bigscience-bloom-rail-1.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sam-hq-team/sam-hq/sam-hq/README.md b/spaces/sam-hq-team/sam-hq/sam-hq/README.md deleted file mode 100644 index 049cb8fa2790ddd7a8d1fb3bad5a3720d85da31d..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/sam-hq/README.md +++ /dev/null @@ -1,147 +0,0 @@ -# Segment Anything in High Quality - -> [**Segment Anything in High Quality**](https://arxiv.org/abs/2306.01567) -> Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu \ -> ETH Zurich & HKUST - -We propose HQ-SAM to upgrade SAM for high-quality zero-shot segmentation. Refer to our [paper](https://arxiv.org/abs/2306.01567) for more details. - -Updates ------------------ -:fire::fire: We released the [colab notebook demo](https://colab.research.google.com/drive/1QwAbn5hsdqKOD5niuBzuqQX4eLCbNKFL?usp=sharing) and [automatic mask generator notebook](https://colab.research.google.com/drive/1dhRq4eR6Fbl-yl1vbQvU9hqyyeOidQaU?usp=sharing). - -:fire::fire: We released the [model checkpoints](#model-checkpoints) and [demo visualization codes](#getting-started). - -Visual comparison between SAM and HQ-SAM ------------------ -**SAM vs. HQ-SAM** - - - - - - - - - - - -
      - -image - -Introduction ------------------ -The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced detaset of 44k masks, which takes only 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 9 diverse segmentation datasets across different downstream tasks, where 7 out of them are evaluated in a zero-shot transfer protocol. - -image - - -Quantitative comparison between SAM and HQ-SAM ------------------ -Note: For box-prompting-based evaluation, we feed SAM and our HQ-SAM with the same image/video bounding boxes and adopt the single mask output mode of SAM. - -### Various ViT backbones on COCO: -![backbones](figs/sam_vs_hqsam_backbones.png) -Note: For the COCO dataset, we use a SOTA detector FocalNet-DINO trained on the COCO dataset as our box prompt generator. - -### YTVIS and HQ-YTVIS -Note:Using ViT-L backbone. We adopt the SOTA detector Mask2Former trained on the YouTubeVIS 2019 dataset as our video boxes prompt generator while reusing its object association prediction. -![ytvis](figs/ytvis.png) - -### DAVIS -Note: Using ViT-L backbone. We adopt the SOTA model XMem as our video boxes prompt generator while reusing its object association prediction. -![davis](figs/davis.png) - - ### Interactive segmentation comparison using various points -Note:Using ViT-L backbone. On the high-quality COIFT (zero-shot) and DIS val set. -![point_comp](figs/points_comp.png) - -### **Installation** -The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended. - -Clone the repository locally and install with - -``` -git clone https://github.com/SysCV/sam-hq.git -cd sam-hq; pip install -e . -``` - -The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks. 
- -``` -pip install opencv-python pycocotools matplotlib onnxruntime onnx -``` - -### Example conda environment setup -```bash -conda create --name sam_hq python=3.8 -y -conda activate sam_hq -conda install pytorch==1.10.0 torchvision==0.11.0 cudatoolkit=11.1 -c pytorch -c nvidia -pip install opencv-python pycocotools matplotlib onnxruntime onnx - -# under your working directory -git clone https://github.com/SysCV/sam-hq.git -cd sam-hq -pip install -e . -export PYTHONPATH=$(pwd) -``` - -### **Model Checkpoints** - -Three HQ-SAM model versions of the model are available with different backbone sizes. These models can be instantiated by running - -``` -from segment_anything import sam_model_registry -sam = sam_model_registry[""](checkpoint="") -``` - -Download the provided trained model below and put them into the pretrained_checkpoint folder: -``` -mkdir pretrained_checkpoint -``` - -Click the links below to download the checkpoint for the corresponding model type. We also provide **alternative model downloading links** [here](https://github.com/SysCV/sam-hq/issues/5) or at [hugging face](https://huggingface.co/lkeab/hq-sam/tree/main). -- `vit_b`: [ViT-B HQ-SAM model.](https://drive.google.com/file/d/11yExZLOve38kRZPfRx_MRxfIAKmfMY47/view?usp=sharing) -- `vit_l`: [ViT-L HQ-SAM model.](https://drive.google.com/file/d/1Uk17tDKX1YAKas5knI4y9ZJCo0lRVL0G/view?usp=sharing) -- `vit_h`: [ViT-H HQ-SAM model.](https://drive.google.com/file/d/1qobFYrI4eyIANfBSmYcGuWRaSIXfMOQ8/view?usp=sharing) - -### **Getting Started** - -First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt: - -``` -from segment_anything import SamPredictor, sam_model_registry -sam = sam_model_registry[""](checkpoint="") -predictor = SamPredictor(sam) -predictor.set_image() -masks, _, _ = predictor.predict() -``` - -Additionally, see the usage examples in our [demo](/demo/demo_hqsam.py) , [colab notebook](https://colab.research.google.com/drive/1QwAbn5hsdqKOD5niuBzuqQX4eLCbNKFL?usp=sharing) and [automatic mask generator notebook](https://colab.research.google.com/drive/1dhRq4eR6Fbl-yl1vbQvU9hqyyeOidQaU?usp=sharing). - -To obtain HQ-SAM's visual result: -``` -python demo/demo_hqsam.py -``` - -To obtain baseline SAM's visual result. Note that you need to download original SAM checkpoint from [baseline-SAM-L model](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) and put it into the pretrained_checkpoint folder. -``` -python demo/demo_sam.py -``` - - -Citation ---------------- -If you find HQ-SAM useful in your research or refer to the provided baseline results, please star :star: this repository and consider citing :pencil:: -``` -@article{sam_hq, - title={Segment Anything in High Quality}, - author={Ke, Lei and Ye, Mingqiao and Danelljan, Martin and Liu, Yifan and Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher}, - journal = {arXiv:2306.01567}, - year = {2023} -} -``` - -## Acknowledgments -- Thanks [SAM](https://github.com/facebookresearch/segment-anything) for their public code and released models. 
diff --git a/spaces/sanwuchengqun/bingai/Dockerfile b/spaces/sanwuchengqun/bingai/Dockerfile deleted file mode 100644 index aab2666c200ad56ff127d6b7ee32aed9f2f44bbe..0000000000000000000000000000000000000000 --- a/spaces/sanwuchengqun/bingai/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量——Cookies"_U",此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令,理论来讲会直接启动 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/create_maxim_model.py b/spaces/sayakpaul/sots-outdoor-dehazing-maxim/create_maxim_model.py deleted file mode 100644 index f6f8ef29093d5defdaa51e3f99ce25fcdc77b513..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/create_maxim_model.py +++ /dev/null @@ -1,37 +0,0 @@ -from tensorflow import keras - -from maxim import maxim -from maxim.configs import MAXIM_CONFIGS - - -def Model(variant=None, input_resolution=(256, 256), **kw) -> keras.Model: - """Factory function to easily create a Model variant like "S". - - Args: - variant: UNet model variants. Options: 'S-1' | 'S-2' | 'S-3' - | 'M-1' | 'M-2' | 'M-3' - input_resolution: Size of the input images. - **kw: Other UNet config dicts. - - Returns: - The MAXIM model. - """ - - if variant is not None: - config = MAXIM_CONFIGS[variant] - for k, v in config.items(): - kw.setdefault(k, v) - - if "variant" in kw: - _ = kw.pop("variant") - if "input_resolution" in kw: - _ = kw.pop("input_resolution") - model_name = kw.pop("name") - - maxim_model = maxim.MAXIM(**kw) - - inputs = keras.Input((*input_resolution, 3)) - outputs = maxim_model(inputs) - final_model = keras.Model(inputs, outputs, name=f"{model_name}_model") - - return final_model diff --git a/spaces/scedlatioru/img-to-music/example/BUCK Saturday Morning Cartoon Apocalypse Torrent Full ((NEW)).md b/spaces/scedlatioru/img-to-music/example/BUCK Saturday Morning Cartoon Apocalypse Torrent Full ((NEW)).md deleted file mode 100644 index f897d08c5e11a0c412d7d5535d639729d9eeec0f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/BUCK Saturday Morning Cartoon Apocalypse Torrent Full ((NEW)).md +++ /dev/null @@ -1,15 +0,0 @@ -
      -

      Download BUCK: Saturday Morning Cartoon Apocalypse torrent Full

      -

      If you are looking for a story-driven, action-adventure game set in a post-apocalyptic wasteland, you might want to check out BUCK: Saturday Morning Cartoon Apocalypse. This game is inspired by Saturday morning cartoons of the early 90's and features Buck, a motorcycle garage mechanic who decides to leave everything he knows behind in order to find the truth behind a girl's disappearance[^1^].

      -

      BUCK: Saturday Morning Cartoon Apocalypse is an early access game that is available on Steam[^1^] [^2^]. You can download the demo for free and try it out before buying the full version. However, if you want to get the full game without paying anything, you can also download it from a torrent site. But be warned, this might be illegal and risky, as you could get infected by malware or face legal consequences.

      -

      BUCK: Saturday Morning Cartoon Apocalypse torrent Full


      Download Filehttps://gohhs.com/2uEzEA



      -

      To download BUCK: Saturday Morning Cartoon Apocalypse torrent full, you will need a torrent client such as BitTorrent or uTorrent. You will also need to find a reliable torrent site that has the game file. You can search for the game name on Google or use a specialized torrent search engine like Torrentz2 or The Pirate Bay. Once you find a torrent file that has good ratings and comments, you can download it and open it with your torrent client. Then, you will have to wait for the download to finish and install the game on your computer.

      -

      BUCK: Saturday Morning Cartoon Apocalypse is a game that offers a unique blend of 2D platforming, shooting, stealth, dialogue and item crafting. You can explore a vast wasteland full of dangers and secrets, interact with various characters and factions, and customize your weapons and equipment. The game also has a dark and gritty story that deals with themes such as loss, revenge, survival and redemption[^3^].

      -

      -

      If you are a fan of old-school cartoons and post-apocalyptic games, you might enjoy playing BUCK: Saturday Morning Cartoon Apocalypse. However, we recommend that you buy the game from Steam or other official sources instead of downloading it from a torrent site. This way, you can support the developers and get updates and bug fixes. You can also avoid potential legal issues and malware infections that could harm your computer or data.

      - -

      BUCK: Saturday Morning Cartoon Apocalypse has received mostly positive reviews from players and critics. The game has been praised for its retro-style graphics, atmospheric soundtrack, engaging gameplay and immersive story. Some of the features that players have enjoyed are the 2D platforming, shooting, stealth, dialogue and item crafting mechanics. The game also has a dark and gritty story that deals with themes such as loss, revenge, survival and redemption[^3^].

      -

      However, BUCK: Saturday Morning Cartoon Apocalypse is not without its flaws. The game is still in early access and has some bugs and glitches that need to be fixed. Some players have reported issues with the controls, the camera, the save system and the performance. The game also has a steep learning curve and can be quite challenging for some players. The game is not very long and has limited replay value. Some players have also criticized the game for being too violent, depressing and mature for a cartoon-inspired game[^1^] [^2^].

      -

Overall, BUCK: Saturday Morning Cartoon Apocalypse offers a unique blend of 2D platforming, shooting, stealth, dialogue and item crafting. Inspired by Saturday morning cartoons of the early 90's, it follows Buck, a motorcycle garage mechanic who leaves everything he knows behind in order to find the truth behind a girl's disappearance. It features retro-style graphics, an atmospheric soundtrack, engaging gameplay and an immersive story, but it is still in early access and has some bugs and glitches that need to be fixed. It also has a steep learning curve, is not very long, and has limited replay value. The game is not for everyone and might be too violent, depressing and mature for some players.

      -
      -
      \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Tenorshare ReiBoot Pro 7.3.5 Crack Plus Registration Code HOT!.md b/spaces/scedlatioru/img-to-music/example/Tenorshare ReiBoot Pro 7.3.5 Crack Plus Registration Code HOT!.md deleted file mode 100644 index aa4f4925c316304e18fad655ec66c41def483796..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Tenorshare ReiBoot Pro 7.3.5 Crack Plus Registration Code HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Tenorshare ReiBoot Pro 7.3.5 Crack Plus Registration Code


      Download ❤❤❤ https://gohhs.com/2uEADX



      -
-September 1, 2019 - Tenorshare ReiBoot Pro Key allows you to restore your iPhone, iPad, and iPod touch. iHelp is one of the best-known iDevice forums, where you can find tips, tricks, and software, and where you can ask for advice if you are having problems with your iPhone, iPad, or iPod touch. If you can't find the feature you need in the Hardware Overview, it's because Apple hasn't added it to the iDevice software; instead, you must install unofficial software that will provide you with all the features.
      -
      -
      -

      diff --git a/spaces/shgao/EditAnything/ldm/models/autoencoder.py b/spaces/shgao/EditAnything/ldm/models/autoencoder.py deleted file mode 100644 index d122549995ce2cd64092c81a58419ed4a15a02fd..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/ldm/models/autoencoder.py +++ /dev/null @@ -1,219 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from ldm.modules.diffusionmodules.model import Encoder, Decoder -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -from ldm.util import instantiate_from_config -from ldm.modules.ema import LitEma - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ema_decay=None, - learn_logvar=False - ): - super().__init__() - self.learn_logvar = learn_logvar - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - - self.use_ema = ema_decay is not None - if self.use_ema: - self.ema_decay = ema_decay - assert 0. < ema_decay < 1. - self.model_ema = LitEma(self, decay=ema_decay) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - 
aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, postfix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, postfix=""): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - self.log(f"val{postfix}/rec_loss", log_dict_ae[f"val{postfix}/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - ae_params_list = list(self.encoder.parameters()) + list(self.decoder.parameters()) + list( - self.quant_conv.parameters()) + list(self.post_quant_conv.parameters()) - if self.learn_logvar: - print(f"{self.__class__.__name__}: Learning logvar") - ae_params_list.append(self.loss.logvar) - opt_ae = torch.optim.Adam(ae_params_list, - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - if log_ema or self.use_ema: - with self.ema_scope(): - xrec_ema, posterior_ema = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec_ema.shape[1] > 3 - xrec_ema = self.to_rgb(xrec_ema) - log["samples_ema"] = self.decode(torch.randn_like(posterior_ema.sample())) - log["reconstructions_ema"] = xrec_ema - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. 
- return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x - diff --git a/spaces/shi-labs/OneFormer/deform_setup.sh b/spaces/shi-labs/OneFormer/deform_setup.sh deleted file mode 100644 index a9e31922423a94acf918def8436a25876203d065..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/deform_setup.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env bash - -# ln -s ./oneformer/modeling/pixel_decoder/ops/ ./ -# ls -# cd ops/ && bash make.sh && cd .. -echo '----------------------------------------------------------------' -echo '----------------------------------------------------------------' -pip3 freeze | grep MultiScaleDeformableAttention -pip3 freeze | grep torch -pip3 freeze | grep detectron2 -pip3 freeze | grep natten -echo '----------------------------------------------------------------' -echo '----------------------------------------------------------------' - -# echo '----------------------------------------------------------------' -# echo '----------------------------------------------------------------' -# cd /home/user/.pyenv/versions/3.8.15/lib/python3.8/site-packages -# ls -# ls | grep MultiScale -# echo '----------------------------------------------------------------' -# echo '----------------------------------------------------------------' diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Realistic Graphics and Sound Effects of Mafia City with APK Download on Android Oyun Club.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Realistic Graphics and Sound Effects of Mafia City with APK Download on Android Oyun Club.md deleted file mode 100644 index 5d2b13874cb398a6e3c676e05ae2de5895188666..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Realistic Graphics and Sound Effects of Mafia City with APK Download on Android Oyun Club.md +++ /dev/null @@ -1,166 +0,0 @@ - -

      Mafia City APK: A Strategy Game That Requires Wit and Time Management

      -

      Do you want to become the Godfather of the underworld? Do you have what it takes to lead a gang of loyal followers and fight against other players for power and glory? If you answered yes, then you might want to try Mafia City APK, a strategy game that requires wit and time management. In this article, we will tell you everything you need to know about this game, including what it is, how to play it, and why you should play it. Let's get started!

      -

      What is Mafia City APK?

      -

      A brief introduction to the game and its features

      -

      Mafia City APK is a game developed by YottaGames that lets you experience the life of a mafia boss. You can build your own empire, recruit and train your members, steal from banks, form alliances with other players, and fight together to take over the city and the mafia world. The game has many features that make it realistic and immersive, such as:

      -

      mafia city apk android oyun club


Download >>> https://ssurll.com/2uO17j



      -
        -
      • Real-time strategy gameplay that allows you to interact with millions of players around the world.
      • -
      • High-quality graphics and sound effects that create a vivid and thrilling atmosphere.
      • -
      • A variety of roles and characters that you can choose from, such as gunmen, bikers, snipers, nurses, lawyers, etc.
      • -
      • A rich and diverse storyline that involves romance, betrayal, revenge, and more.
      • -
      • A lot of customization options that let you design your own avatar, mansion, vehicles, weapons, etc.
      • -
      -

      How to download and install Mafia City APK on Android devices

      -

      If you are interested in playing Mafia City APK on your Android device, you can easily download and install it by following these steps:

      -
        -
      1. Go to [Mafia City APK (Android Game) - Free Download - APKCombo](^1^) or [Download Mafia City APK - Latest Version 2023 - APKCombo](^2^) and click on the download button.
      2. -
      3. Once the download is complete, open the file manager on your device and locate the downloaded file.
      4. -
      5. Tap on the file and allow the installation from unknown sources if prompted.
      6. -
      7. Wait for the installation to finish and then launch the game from your app drawer or home screen.
      8. -
      9. Enjoy playing Mafia City APK!
      10. -
      -

      How to Play Mafia City APK?

      -

      The basics of the game: roles, resources, buildings, and troops

      -

      The first thing you need to do when you start playing Mafia City APK is to choose your role. You can be a Bulker, who specializes in melee combat; a Shooter, who excels in ranged attacks; a Biker, who is fast and agile; or a Vehicle, who has high defense and mobility. Each role has its own advantages and disadvantages, so choose wisely according to your preference and strategy.

      -

      The next thing you need to do is to collect resources. Resources are essential for building your empire and upgrading your troops. There are four types of resources in the game: cash, cargo, arms, and metal. You can get them by building and upgrading resource buildings, robbing banks and other players, completing tasks and events, and joining alliances. You should always have enough resources to support your growth and expansion.

      -

      The third thing you need to do is to build and upgrade your buildings. Buildings are the foundation of your empire and provide various benefits and functions. There are many types of buildings in the game, such as the mansion, the hospital, the investment center, the black market, the club, etc. Each building has its own level and requirements, and you can upgrade them by spending resources and time. You should always prioritize the most important buildings for your strategy and goals.

      -

      The fourth thing you need to do is to recruit and train your troops. Troops are the backbone of your army and the key to your success in battles. There are four types of troops in the game: bulkers, shooters, bikers, and vehicles. Each type of troop has its own strengths and weaknesses, and you can train them by spending resources and time. You should always balance your troop composition and quality according to your role and enemy.

      -

      The strategies of the game: alliances, wars, raids, and events

      -

      The fifth thing you need to do is to join or create an alliance. Alliances are groups of players who cooperate and support each other in the game. You can join an existing alliance or create your own one by spending gold. Being in an alliance has many benefits, such as:

      -
        -
      • Sharing resources, information, and tips with other members.
      • -
      • Getting help from other members in building, healing, and fighting.
      • -
      • Participating in alliance wars, raids, and events for rewards and glory.
      • -
      • Accessing exclusive alliance features, such as the alliance store, the alliance territory, the alliance tech, etc.
      • -
      -

      The sixth thing you need to do is to participate in wars. Wars are conflicts between alliances or players that involve attacking and defending each other's bases. You can initiate a war by declaring it on another alliance or player, or you can join a war that is already ongoing. Wars have many consequences, such as:

      -
        -
      • Losing or gaining resources, troops, reputation, and territory.
      • -
      • Triggering or ending a peace shield that protects your base from attacks.
      • -
      • Earning or losing points for the war ranking and rewards.
      • -
      • Increasing or decreasing your threat level that affects your enemies' willingness to attack you.
      • -
      -

      The seventh thing you need to do is to participate in raids. Raids are attacks on specific targets that offer high rewards but also high risks. You can initiate a raid by selecting a target from the map or the list, or you can join a raid that is already ongoing. Raids have many challenges, such as:

      -

      mafia city mod apk android oyun club
      -mafia city apk indir android oyun club
      -mafia city hile apk android oyun club
      -mafia city apk download android oyun club
      -mafia city apk hile android oyun club
      -mafia city apk son sürüm android oyun club
      -mafia city apk güncel android oyun club
      -mafia city apk full android oyun club
      -mafia city apk hack android oyun club
      -mafia city apk para hilesi android oyun club
      -mafia city apk altın hilesi android oyun club
      -mafia city apk elmas hilesi android oyun club
      -mafia city apk vip hilesi android oyun club
      -mafia city apk mega hile android oyun club
      -mafia city apk online android oyun club
      -mafia city apk offline android oyun club
      -mafia city apk türkçe android oyun club
      -mafia city apk english android oyun club
      -mafia city apk latest version android oyun club
      -mafia city apk old version android oyun club
      -mafia city apk update android oyun club
      -mafia city apk free download android oyun club
      -mafia city apk unlimited money android oyun club
      -mafia city apk unlimited gold android oyun club
      -mafia city apk unlimited gems android oyun club
      -mafia city apk unlimited vip android oyun club
      -mafia city apk premium android oyun club
      -mafia city apk pro android oyun club
      -mafia city apk plus android oyun club
      -mafia city apk cracked android oyun club
      -mafia city apk patched android oyun club
      -mafia city apk modded android oyun club
      -mafia city apk unlocked android oyun club
      -mafia city apk cheat android oyun club
      -mafia city apk no root android oyun club
      -mafia city apk no ads android oyun club
      -mafia city apk no ban android oyun club
      -mafia city apk no survey android oyun club
      -mafia city apk no verification android oyun club
      -mafia city apk safe android oyun club
      -mafia city apk secure android oyun club
      -mafia city apk virus free android oyun club
      -mafia city apk malware free android oyun club
      -mafia city apk original android oyun club
      -mafia city apk official android oyun club
      -mafia city game download for Android OYUN Club

      -
        -
      • Facing strong enemies with high defense and firepower.
      • -
      • Dealing with traps, obstacles, and reinforcements that hinder your progress.
      • -
      • Managing your time limit and stamina that limit your actions.
      • -
      • Competing with other players for the loot and glory.
      • -
      -

      The eighth thing you need to do is to participate in events. Events are special activities that occur periodically in the game and offer various rewards and fun. You can participate in events by following the instructions and requirements of each event. Events have many types, such as:

      -
        -
      • Daily events that reward you for completing daily tasks and objectives.
      • -
      • Weekly events that reward you for achieving weekly goals and milestones.
      • -
      • Monthly events that reward you for reaching monthly targets and rankings.
      • -
      • Festival events that reward you for celebrating special occasions and holidays.
      • -
      -

      Why Play Mafia City APK?

      -

      The benefits of playing Mafia City APK: fun, challenge, and social interaction

      -

      There are many reasons why you should play Mafia City APK if you are looking for a strategy game that requires wit and time management. Some of the benefits are:

      -
        -
      • Fun: Mafia City APK is a fun game that offers a lot of entertainment and excitement. You can enjoy the realistic and immersive gameplay, the high-quality graphics and sound effects, the rich and diverse storyline, the customization options, etc.
      • -
      • Challenge: Mafia City APK is a challenging game that tests your skills and intelligence. You can face the difficulties and risks of being a mafia boss, the competition and conflict with other players, the complexity and diversity of the game mechanics, etc.
      • -
• Social interaction: Mafia City APK is a social game that encourages communication and cooperation with other players. You can interact with millions of players around the world, form alliances with like-minded players, chat with friends and enemies, participate in alliance wars, raids, and events, etc.
      • -
      -

      The drawbacks of playing Mafia City APK: addiction, violence, and in-app purchases

      -

      However, there are also some drawbacks of playing Mafia City APK that you should be aware of and avoid. Some of the drawbacks are:

      -
        -
      • Addiction: Mafia City APK is an addictive game that can make you spend a lot of time and energy on it. You can get hooked on the thrill and satisfaction of building your empire, fighting your rivals, and dominating the city. You should always play the game in moderation and balance it with other aspects of your life.
      • -
      • Violence: Mafia City APK is a violent game that involves a lot of crime and bloodshed. You can witness and commit acts of robbery, murder, torture, and more. You should always remember that the game is not a reflection of reality and that violence is not a solution to any problem.
      • -
      • In-app purchases: Mafia City APK is a free-to-play game that offers in-app purchases for various items and features. You can buy gold, gems, VIP memberships, bundles, etc. to enhance your gameplay and progress faster. However, you should always be careful with your spending and not fall for the temptation of buying unnecessary or overpriced things.
      • -
      -

      Conclusion

      -

      A summary of the main points and a call to action

      -

      Mafia City APK is a strategy game that requires wit and time management. It lets you experience the life of a mafia boss who builds his own empire, recruits and trains his members, steals from banks, forms alliances with other players, and fights together to take over the city and the mafia world. The game has many features that make it realistic and immersive, such as real-time strategy gameplay, high-quality graphics and sound effects, a variety of roles and characters, a rich and diverse storyline, a lot of customization options, etc. The game also has many challenges that test your skills and intelligence, such as collecting resources, building and upgrading buildings, recruiting and training troops, joining or creating alliances, participating in wars, raids, and events, etc. The game has many benefits that make it fun, challenging, and social, such as enjoying the entertainment and excitement, facing the difficulties and risks, interacting with millions of players around the world, etc. However, the game also has some drawbacks that you should be aware of and avoid, such as getting addicted to the game, witnessing and committing violence in the game, spending too much money on in-app purchases, etc.

      -

      If you are looking for a strategy game that requires wit and time management, you should give Mafia City APK a try. You can download and install it on your Android device by following the steps we mentioned above. You can also check out our FAQs below for more information about the game. We hope you enjoy playing Mafia City APK and become the Godfather of the underworld!

      -

      FAQs

      -

      Q1: Is Mafia City APK safe to download and play?

      -

A1: Yes, Mafia City APK is safe to download and play as long as you get it from a reliable source like [Mafia City APK (Android Game) - Free Download - APKCombo] or [Download Mafia City APK - Latest Version 2023 - APKCombo]. You should also scan the file with antivirus software before installing it on your device. However, you should be careful with your personal information and privacy when playing online games like Mafia City APK.

      -

      Q2: How can I get free gold in Mafia City APK?

      -

      A2: There are several ways to get free gold in Mafia City APK without spending real money. Some of them are:

      -
        -
      • Completing daily tasks and objectives.
      • -
      • Achieving weekly goals and milestones.
      • -
      • Reaching monthly targets and rankings.
      • -
      • Celebrating special occasions and holidays.
      • -
      • Inviting friends to join the game.
      • -
      • Watching ads or videos.
      • -
      • Participating in surveys or offers.
      • -
      -

      Q3: How can I join or create an alliance in Mafia City APK?

      -

      A3: To join or create an alliance in Mafia City APK, you need to follow these steps:

      -
        -
      1. Tap on the alliance icon on the bottom left corner of the screen.
      2. -
      3. Tap on the join or create button on the top right corner of the screen.
      4. -
      5. If you want to join an existing alliance, you can browse through the list of alliances or search for one by name or ID. You can also apply for an alliance by tapping on its name and then tapping on the apply button. You will have to wait for the alliance leader or manager to approve your application before you can join the alliance.
      6. -
      7. If you want to create your own alliance, you can enter a name, an ID, a slogan, and a logo for your alliance. You can also set the language, the region, the level, and the status of your alliance. You will have to spend some gold to create your alliance.
      8. -
      -

      Q4: How can I upgrade my buildings and troops in Mafia City APK?

      -

      A4: To upgrade your buildings and troops in Mafia City APK, you need to follow these steps:

      -
        -
      1. Tap on the building or the troop that you want to upgrade.
      2. -
      3. Tap on the upgrade button on the bottom right corner of the screen.
      4. -
      5. Check the requirements and the benefits of the upgrade.
      6. -
      7. If you have enough resources and time, tap on the confirm button to start the upgrade.
      8. -
      9. If you want to speed up the upgrade, you can use speed-ups, gold, or ask for help from your alliance members.
      10. -
      -

      Q5: How can I contact the customer service of Mafia City APK?

      -

      A5: To contact the customer service of Mafia City APK, you need to follow these steps:

      -
        -
      1. Tap on the settings icon on the top right corner of the screen.
      2. -
      3. Tap on the customer service button on the bottom left corner of the screen.
      4. -
      5. Choose the type of issue that you want to report or ask about.
      6. -
      7. Fill in the details and attach screenshots if necessary.
      8. -
      9. Tap on the submit button to send your message.
      10. -
      11. Wait for a reply from the customer service team.
      12. -

      -
      -
      \ No newline at end of file diff --git a/spaces/sklearn-docs/Gaussian-Mixture-Model-Ellipsoids/README.md b/spaces/sklearn-docs/Gaussian-Mixture-Model-Ellipsoids/README.md deleted file mode 100644 index 889200ddb6d65df1fd57d16566b547c53bf2be0b..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Gaussian-Mixture-Model-Ellipsoids/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gaussian Mixture Model Ellipsoids -emoji: 🦀 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/softcatala/comparativa-tts-catala/mms.py b/spaces/softcatala/comparativa-tts-catala/mms.py deleted file mode 100644 index 599a380f1518a08dd43aef054676c835b1746dc1..0000000000000000000000000000000000000000 --- a/spaces/softcatala/comparativa-tts-catala/mms.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import torch -import commons -import utils -from models import SynthesizerTrn -from scipy.io.wavfile import write -from pathlib import Path -from typing import Union - -class TextMapper(object): - def __init__(self, vocab_file): - self.symbols = [x.replace("\n", "") for x in open(vocab_file).readlines()] - self.SPACE_ID = self.symbols.index(" ") - self._symbol_to_id = {s: i for i, s in enumerate(self.symbols)} - self._id_to_symbol = {i: s for i, s in enumerate(self.symbols)} - - def text_to_sequence(self, text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - clean_text = text.strip() - for symbol in clean_text: - symbol_id = self._symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - def get_text(self, text, hps): - text_norm = self.text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def filter_oov(self, text): - val_chars = self._symbol_to_id - txt_filt = "".join(list(filter(lambda x: x in val_chars, text))) - print(f"text after filtering OOV: {txt_filt}") - return txt_filt - -class MMS(): - def __init__(self, model_path: Union[str, Path]): - ckpt_dir = model_path - vocab_file = f"{ckpt_dir}/vocab.txt" - config_file = f"{ckpt_dir}/config.json" - assert os.path.isfile(config_file), f"{config_file} doesn't exist" - self.hps = utils.get_hparams_from_file(config_file) - self.text_mapper = TextMapper(vocab_file) - self.net_g = SynthesizerTrn( - len(self.text_mapper.symbols), - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - g_pth = f"{ckpt_dir}/G_100000.pth" - print(f"load {g_pth}") - - _ = utils.load_checkpoint(g_pth, self.net_g, None) - - def synthesize(self, wav_path: str, txt): - print(f"text: {txt}") - txt = txt.lower() - txt = self.text_mapper.filter_oov(txt) - stn_tst = self.text_mapper.get_text(txt, self.hps) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - hyp = self.net_g.infer( - x_tst, x_tst_lengths, noise_scale=.667, - noise_scale_w=0.8, length_scale=1.0 - )[0][0,0].cpu().float().numpy() - - os.makedirs(os.path.dirname(wav_path), exist_ok=True) - print(f"wav: {wav_path}") - write(wav_path, self.hps.data.sampling_rate, hyp) - return \ No newline at end of file diff --git a/spaces/songwy/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/songwy/VITS-Umamusume-voice-synthesizer/hubert_model.py deleted file mode 100644 index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000 --- a/spaces/songwy/VITS-Umamusume-voice-synthesizer/hubert_model.py +++ /dev/null @@ -1,221 +0,0 @@ -import copy -from typing import Optional, Tuple -import random - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, 
x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = F.gelu(self.norm0(self.conv0(x))) - x = F.gelu(self.conv1(x)) - x = F.gelu(self.conv2(x)) - x = F.gelu(self.conv3(x)) - x = F.gelu(self.conv4(x)) - x = F.gelu(self.conv5(x)) - x = F.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = F.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: 
{sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_tune.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_tune.py deleted file mode 100644 index b2e8b7594a370b2462f77252d54d7ef80e290f7c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_tune.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import random - -import numpy as np -from fairseq import options - -from examples.noisychannel import rerank, rerank_options - - -def random_search(args): - param_values = [] - tuneable_parameters = ["lenpen", "weight1", "weight2", "weight3"] - initial_params = [args.lenpen, args.weight1, args.weight2, args.weight3] - for i, elem in enumerate(initial_params): - if type(elem) is not list: - initial_params[i] = [elem] - else: - initial_params[i] = elem - - tune_parameters = args.tune_param.copy() - for i in range(len(args.tune_param)): - assert args.upper_bound[i] >= args.lower_bound[i] - index = tuneable_parameters.index(args.tune_param[i]) - del tuneable_parameters[index] - del initial_params[index] - - tune_parameters += tuneable_parameters - param_values += initial_params - random.seed(args.seed) - - random_params = np.array( - [ - [ - random.uniform(args.lower_bound[i], args.upper_bound[i]) - for i in range(len(args.tune_param)) - ] - for k in range(args.num_trials) - ] - ) - set_params = np.array( - [ - [initial_params[i][0] for i in range(len(tuneable_parameters))] - for k in range(args.num_trials) - ] - ) - random_params = np.concatenate((random_params, set_params), 1) - - rerank_args = vars(args).copy() - if args.nbest_list: - rerank_args["gen_subset"] = "test" - else: - rerank_args["gen_subset"] = args.tune_subset - - for k in range(len(tune_parameters)): - rerank_args[tune_parameters[k]] = list(random_params[:, k]) - - if args.share_weights: - k = tune_parameters.index("weight2") - rerank_args["weight3"] = list(random_params[:, k]) - - rerank_args = argparse.Namespace(**rerank_args) - best_lenpen, best_weight1, best_weight2, best_weight3, best_score = rerank.rerank( - rerank_args - ) - rerank_args = vars(args).copy() - rerank_args["lenpen"] = [best_lenpen] - rerank_args["weight1"] = [best_weight1] - rerank_args["weight2"] = [best_weight2] - rerank_args["weight3"] = [best_weight3] - - # write the hypothesis from the valid set from the best trial - - if args.gen_subset != "valid": - rerank_args["gen_subset"] = "valid" - rerank_args = argparse.Namespace(**rerank_args) - rerank.rerank(rerank_args) - - # test with the best hyperparameters on gen subset - rerank_args = vars(args).copy() - rerank_args["gen_subset"] = args.gen_subset - rerank_args["lenpen"] = [best_lenpen] - rerank_args["weight1"] = [best_weight1] - rerank_args["weight2"] = [best_weight2] - rerank_args["weight3"] = [best_weight3] - rerank_args = argparse.Namespace(**rerank_args) - rerank.rerank(rerank_args) - - -def cli_main(): - parser = rerank_options.get_tuning_parser() - args = options.parse_args_and_arch(parser) - - random_search(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/multilingual_denoising.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/multilingual_denoising.py deleted file mode 100644 index d1c914917feb5165aad7482cd1377f5f65b21635..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/multilingual_denoising.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os - -import numpy as np -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - DenoisingDataset, - Dictionary, - PrependTokenDataset, - ResamplingDataset, - SortDataset, - TokenBlockDataset, - data_utils, -) -from fairseq.data.encoders.utils import get_whole_word_mask -from fairseq.tasks import register_task - -from .denoising import DenoisingTask - - -logger = logging.getLogger(__name__) - - -@register_task("multilingual_denoising") -class MultilingualDenoisingTask(DenoisingTask): - @staticmethod - def add_args(parser): - DenoisingTask.add_args(parser) - parser.add_argument( - "--multilang-sampling-alpha", - type=float, - default=1.0, - help="smoothing alpha for sample ratios across multiple datasets", - ) - parser.add_argument("--add-lang-token", default=False, action="store_true") - parser.add_argument( - "--langs", type=str, help="language ids we are considering", default=None - ) - parser.add_argument( - "--no-whole-word-mask-langs", - type=str, - default="", - metavar="N", - help="languages without spacing between words dont support whole word masking", - ) - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - paths = args.data.split(":") - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - - data_path = paths[0] - if args.langs is None: - languages = sorted( - [ - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ] - ) - else: - languages = args.langs.split(",") - - if args.add_lang_token: - for lang in languages: - dictionary.add_symbol("[{}]".format(lang)) - - logger.info("dictionary: {} types".format(len(dictionary))) - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - return cls(args, dictionary) - - def __init__(self, args, dictionary): - super().__init__(args, dictionary) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - self.langs = args.langs - self.args = args - - def _get_sample_prob(self, dataset_lens): - """ - Get smoothed sampling porbability by languages. This helps low resource - languages by upsampling them. - """ - prob = dataset_lens / dataset_lens.sum() - smoothed_prob = prob ** self.args.multilang_sampling_alpha - smoothed_prob = smoothed_prob / smoothed_prob.sum() - return smoothed_prob - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = self.args.data.split(":") - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - if self.langs is None: - languages = sorted( - [ - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ] - ) - else: - languages = self.langs.split(",") - for name in languages: - p = os.path.join(data_path, name) - assert os.path.exists(p), "data not found: {}".format(p) - - logger.info("Training on {0} languages: {1}".format(len(languages), languages)) - logger.info( - "Language to id mapping: ", {lang: id for id, lang in enumerate(languages)} - ) - - mask_whole_words = get_whole_word_mask(self.args, self.dictionary) - language_without_segmentations = self.args.no_whole_word_mask_langs.split(",") - lang_datasets = [] - for language in languages: - split_path = os.path.join(data_path, language, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - end_token = ( - self.source_dictionary.index("[{}]".format(language)) - if self.args.add_lang_token - else self.source_dictionary.eos() - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 2, # one less for - pad=self.source_dictionary.pad(), - eos=end_token, - break_mode=self.args.sample_break_mode, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - dataset = AppendTokenDataset(dataset, end_token) - - lang_mask_whole_words = ( - mask_whole_words - if language not in language_without_segmentations - else None - ) - lang_dataset = DenoisingDataset( - dataset, - dataset.sizes, - self.dictionary, - self.mask_idx, - lang_mask_whole_words, - shuffle=self.args.shuffle_instance, - seed=self.seed, - args=self.args, - eos=None - if not self.args.add_lang_token - else self.source_dictionary.index("[{}]".format(language)), - ) - lang_datasets.append(lang_dataset) - - dataset_lengths = np.array( - [len(d) for d in lang_datasets], - dtype=float, - ) - logger.info( - "loaded total {} blocks for all languages".format( - int(dataset_lengths.sum()), - ) - ) - if split == self.args.train_subset: - # For train subset, additionally up or down sample languages. 
- sample_probs = self._get_sample_prob(dataset_lengths) - logger.info( - "Sample probability by language: {}".format( - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - } - ) - ) - size_ratio = (sample_probs * dataset_lengths.sum()) / dataset_lengths - logger.info( - "Up/Down Sampling ratio by language: {}".format( - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - } - ) - ) - - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] >= 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - dataset = ConcatDataset( - resampled_lang_datasets, - ) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_metrics.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_metrics.py deleted file mode 100644 index 2de6969cf4445bc6cda44dacf6de765ea30d5f5b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_metrics.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest -import uuid - -from fairseq import metrics - - -class TestMetrics(unittest.TestCase): - def test_nesting(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate() as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1.5) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_new_root(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_nested_new_root(self): - with metrics.aggregate() as layer1: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as layer2: - metrics.log_scalar("loss", 2) - with metrics.aggregate() as layer3: - metrics.log_scalar("loss", 3) - with metrics.aggregate(new_root=True) as layer4: - metrics.log_scalar("loss", 4) - metrics.log_scalar("loss", 1.5) - - self.assertEqual(layer4.get_smoothed_values()["loss"], 4) - self.assertEqual(layer3.get_smoothed_values()["loss"], 3) - self.assertEqual(layer2.get_smoothed_values()["loss"], 2.5) - self.assertEqual(layer1.get_smoothed_values()["loss"], 1.25) - - def test_named(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - - metrics.log_scalar("loss", 3) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 1.5) - - def test_nested_duplicate_names(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - with metrics.aggregate() as other: - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - metrics.log_scalar("loss", 6) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 3) - self.assertEqual(other.get_smoothed_values()["loss"], 2) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/3 Kingdoms Resurrection Of The Dragon Torrentl.md b/spaces/stomexserde/gpt4-ui/Examples/3 Kingdoms Resurrection Of The Dragon Torrentl.md deleted file mode 100644 index 51a8139d8145521030a4e9ee6939e0f6d7d8a404..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/3 Kingdoms Resurrection Of The Dragon Torrentl.md +++ /dev/null @@ -1,15 +0,0 @@ - -

      How to Download Three Kingdoms: Resurrection of the Dragon (2008) Movie Torrent

      -

Three Kingdoms: Resurrection of the Dragon is a 2008 Hong Kong martial arts historical drama film based on Luo Guanzhong's famous novel, Romance of the Three Kingdoms. The movie was directed by Hong Kong film director Daniel Lee and stars Andy Lau as Zhao Zilong, a legendary general of the Three Kingdoms period. The movie follows Zhao Zilong's rise from a common soldier to a war hero, and his final battle against the warlord Cao Cao's forces.

      -

If you are a fan of epic historical movies with stunning action scenes and impressive costumes, you might want to download the Three Kingdoms: Resurrection of the Dragon movie torrent and watch it on your device. However, downloading torrents can be risky and illegal, so you need to be careful and use a VPN to protect your privacy and security. Here are some steps to download the Three Kingdoms: Resurrection of the Dragon movie torrent safely and legally:

      -

      3 Kingdoms Resurrection Of The Dragon Torrentl


Download File >>> https://urlgoal.com/2uI8Dz



      -
        -
      1. Download and install a VPN on your device. A VPN is a service that encrypts your internet traffic and hides your IP address, making it harder for anyone to track your online activities or block your access to certain websites. A VPN also allows you to bypass geo-restrictions and access content that is not available in your region. Some of the best VPNs for torrenting are ExpressVPN, NordVPN, Surfshark, and CyberGhost.
      2. -
      3. Connect to a VPN server in a country where torrenting is legal and safe. Some of the best countries for torrenting are Switzerland, Netherlands, Spain, Canada, and Mexico. Avoid connecting to servers in countries where torrenting is illegal or heavily monitored, such as the US, UK, Germany, France, Australia, and Japan.
      4. -
      5. Go to a reliable torrent website that has Three Kingdoms: Resurrection of the Dragon movie torrent. Some of the best torrent websites are The Pirate Bay[^1^], YTS[^2^], 1337x[^4^], RARBG, and LimeTorrents. Make sure to check the comments and ratings of the torrent before downloading it, and avoid torrents that have low seeds or leeches, as they might be slow or corrupted.
      6. -
      7. Download Three Kingdoms: Resurrection of the Dragon movie torrent using a torrent client. A torrent client is a software that allows you to download and manage torrent files. Some of the best torrent clients are uTorrent, BitTorrent, qBittorrent, Vuze, and Deluge. Follow the instructions on the torrent website or the torrent client to download Three Kingdoms: Resurrection of the Dragon movie torrent.
      8. -
      9. Enjoy watching Three Kingdoms: Resurrection of the Dragon movie on your device. Make sure to keep your VPN on while watching the movie, as some ISPs might monitor your streaming activities or throttle your bandwidth. Also, remember to seed the torrent after downloading it, as this helps other users to download it faster and keeps the torrent alive.
      10. -
      -

      Disclaimer: This article is for educational purposes only and does not condone or encourage any illegal activity. Downloading torrents can expose you to malware, viruses, copyright infringement, legal issues, and other risks. We are not responsible for any consequences that may arise from downloading torrents.

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/7th Sense Telugu Movie Online Youku 16 BETTER.md b/spaces/stomexserde/gpt4-ui/Examples/7th Sense Telugu Movie Online Youku 16 BETTER.md deleted file mode 100644 index 2b46289d1bc30fdc90f110e52e64789bebccb2ec..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/7th Sense Telugu Movie Online Youku 16 BETTER.md +++ /dev/null @@ -1,17 +0,0 @@ - -

      How to Watch 7th Sense Telugu Movie Online for Free

      -

      7th Sense is a 2011 Telugu action thriller movie starring Suriya, Shruti Haasan and Johnny Trí Nguyễn. It is directed by A.R. Murugadoss and produced by Udhayanidhi Stalin. The movie revolves around a genetic engineering project that aims to create a superhuman with the DNA of Bodhidharma, a legendary martial arts master and healer. Suriya plays a circus artist who has inherited the genes of Bodhidharma and has to stop a deadly virus unleashed by a rogue scientist.

      -

      If you are a fan of Suriya or action movies, you might be interested in watching 7th Sense online for free. Here are some ways you can do that:

      -

      7th Sense Telugu Movie Online Youku 16


      DOWNLOAD >>> https://urlgoal.com/2uI79k



      -
        -
      • One option is to watch it on Sun NXT, a streaming platform that offers Telugu movies and shows. You can sign up for a free trial and watch 7th Sense without any ads or interruptions. You can also download the movie and watch it offline. To watch 7th Sense on Sun NXT, click here[^1^].
      • -
      • Another option is to watch it on YouTube, where you can find the full movie uploaded by Cinema Theatre, a channel that provides Telugu movies and trailers. You can watch 7th Sense for free on YouTube, but you might have to deal with some ads and low quality. To watch 7th Sense on YouTube, click here[^2^].
      • -
      • A third option is to listen to the audio version of the movie on SoundCloud, where you can find an excerpt uploaded by Mark, a user who shares Telugu movies and songs. You can listen to 7th Sense for free on SoundCloud, but you might miss out on the visual effects and action scenes. To listen to 7th Sense on SoundCloud, click here[^4^].
      • -
      -

      These are some of the ways you can watch 7th Sense Telugu movie online for free. We hope you enjoy the movie and let us know what you think of it in the comments below.

      7th Sense is not just an action movie, but also a sci-fi thriller that explores the concept of genetic memory and reincarnation. The movie has a lot of twists and turns that keep the audience engaged and entertained. The movie also showcases the rich culture and history of India and China, as it traces the origins and legacy of Bodhidharma, who is considered the founder of Zen Buddhism and Shaolin Kung Fu.

      -

      The movie has received mixed reviews from critics and audiences. Some praised the movie for its unique plot, stunning visuals, and Suriya's performance. Others criticized the movie for its logical flaws, scientific inaccuracies, and excessive length. The movie was also dubbed in Tamil, Hindi, and Malayalam languages. The movie was a commercial success, grossing over 100 crore rupees worldwide.

      -

      If you are looking for a movie that combines action, science fiction, and history, you might want to give 7th Sense a try. You can watch it online for free using any of the methods mentioned above. You can also share your thoughts on the movie with us on social media using the hashtag #7thSense.

      7th Sense is not only a movie, but also a source of inspiration for many people. The movie has inspired many fans to learn more about Bodhidharma and his teachings. The movie has also motivated many people to pursue their dreams and passions, just like Suriya's character in the movie. The movie has also sparked a lot of discussions and debates on topics such as genetic engineering, bio-terrorism, and human potential.

      -

      7th Sense is a movie that has something for everyone. Whether you are a fan of action, sci-fi, or history, you will find something to enjoy in this movie. The movie is also a great way to learn about a different culture and perspective. The movie is a testament to the power of cinema and storytelling.

      -

      We hope you liked this article and learned something new from it. If you want to read more articles like this, please subscribe to our newsletter and follow us on social media. You can also check out our other articles on Telugu movies and entertainment. Thank you for reading and have a great day.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/A Gentleman Full Movie In Hindi Free Download Generateur Sarah Res.md b/spaces/stomexserde/gpt4-ui/Examples/A Gentleman Full Movie In Hindi Free Download Generateur Sarah Res.md deleted file mode 100644 index 41554729dc2cc4d73423f03f9787c6f762de05fe..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/A Gentleman Full Movie In Hindi Free Download Generateur Sarah Res.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      A Gentleman Full Movie In Hindi Free Download Generateur Sarah Res

      -

      If you are looking for a way to watch A Gentleman full movie in Hindi for free, you might be interested in using a generateur sarah res. This is a tool that can generate links to download or stream movies online without paying any fees or registering on any websites. In this article, we will explain how to use a generateur sarah res to watch A Gentleman full movie in Hindi for free.

      -

      A Gentleman Full Movie In Hindi Free Download generateur sarah res


      DOWNLOADhttps://urlgoal.com/2uI9u0



      -

      A Gentleman is a 2017 action comedy film starring Sidharth Malhotra and Jacqueline Fernandez. The film follows Gaurav, a simple and honest man who wants to settle down with his girlfriend Kavya, and Rishi, a spy who works for a covert agency called Unit X. Their lives get intertwined when Rishi's boss Colonel Vijay Saxena assigns him a mission to retrieve a hard disk containing sensitive information from a rival gangster. The film is full of twists and turns, romance and humor, and some thrilling action sequences.

      -

      To watch A Gentleman full movie in Hindi for free, you will need to use a generateur sarah res. This is a tool that can create links to download or stream movies from various sources on the internet. You can find many generateur sarah res online by searching on Google or other search engines. However, you should be careful and avoid clicking on any suspicious or malicious links that might harm your device or compromise your privacy.

      -

      One of the most reliable and popular generateur sarah res that you can use is sarahres.com. This website has a large collection of movies in different languages and genres that you can access for free. To use this website, you just need to follow these simple steps:

      -
        -
      1. Go to sarahres.com and type "A Gentleman" in the search box.
      2. -
      3. Select the movie from the list of results and click on the "Watch Now" button.
      4. -
      5. Choose the language option as "Hindi" and the quality option as "HD" or "SD" depending on your preference and internet speed.
      6. -
      7. Wait for a few seconds until the generateur sarah res generates a link to stream or download the movie.
      8. -
      9. Click on the link and enjoy watching A Gentleman full movie in Hindi for free.
      10. -
      -

      Note: You might encounter some ads or pop-ups while using the generateur sarah res. You can close them or ignore them as they are not part of the website. You might also need to disable your ad blocker or antivirus software if they interfere with the generateur sarah res.

      -

      We hope this article helped you learn how to use a generateur sarah res to watch A Gentleman full movie in Hindi for free. If you liked this article, please share it with your friends and family who might also be interested in watching this movie. Thank you for reading!

      -

      - -

      A Gentleman is a remake of the 2014 Hollywood film The Nice Guys, which starred Ryan Gosling and Russell Crowe. The Hindi version has some changes and adaptations to suit the Indian audience and culture. For example, the setting of the film is shifted from Los Angeles to Miami and Mumbai, and the plot involves a corrupt politician and a fake identity instead of a porn star and a missing girl.

      -

      The film received mixed reviews from critics and audiences. Some praised the film for its entertainment value, chemistry between the lead actors, and stylish action scenes. Others criticized the film for its weak script, lack of originality, and poor editing. The film was also a box office flop, failing to recover its production cost of ₹60 crore.

      -

      Despite its commercial failure, A Gentleman has gained a cult following among some fans who appreciate its humor, romance, and action. The film also has some memorable songs composed by Sachin-Jigar, such as "Disco Disco", "Baat Ban Jaye", and "Bandook Meri Laila". The film is available on various online platforms such as Netflix, Amazon Prime Video, and Hotstar for those who want to watch it again or for the first time.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Crack Njstar Japanese Wp 6.md b/spaces/stomexserde/gpt4-ui/Examples/Crack Njstar Japanese Wp 6.md deleted file mode 100644 index 8cd312510c3b404c706f932292f7fca97ed2e7fc..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Crack Njstar Japanese Wp 6.md +++ /dev/null @@ -1,29 +0,0 @@ - -

      NJStar Japanese WP 6: A Powerful Word Processor with Language Learning Features

      -

      If you are looking for a Japanese word processor that can handle both traditional and modern Japanese writing, as well as provide you with useful tools for learning and teaching the language, you might want to check out NJStar Japanese WP 6. This software is designed to work on Microsoft Windows and Linux under WINE, and it offers a free trial version for 30 days evaluation[^1^].

      -

      Crack Njstar Japanese Wp 6


      Download Zip - https://urlgoal.com/2uI82Q



      -

      NJStar Japanese WP 6 has many features that make it stand out from other word processors. For example, it supports more than 70,000 different Kanji characters in a single document, which is a major improvement from the previous version[^3^]. It also has a built-in bilingual dictionary that can translate English, German, Dutch, French and Russian into Japanese, and vice versa[^1^]. You can access the dictionary by hovering over a word, typing it in the input bar, or extracting all the key words from a paragraph. The dictionary is based on the JMdict project by Monash University, which is updated on daily basis[^1^].

      -

      Another feature that makes NJStar Japanese WP 6 a great tool for learning and teaching Japanese is its verb conjugation system. It can automatically show you the different forms of any Japanese verb, such as present, past, future, presumptive, imperative, progressive and more[^1^]. You can also customize the verb forms according to your preference and level of proficiency. This feature can help you master the complex grammar of Japanese and expand your vocabulary.

      -

      NJStar Japanese WP 6 also has other functions that can enhance your writing and reading experience. For example, it can convert between different writing systems, such as Hiragana, Katakana, Romaji and Kanji[^1^]. It can also display furigana (small kana) above or below Kanji to indicate their pronunciation[^1^]. It can check your spelling and grammar errors and suggest corrections[^1^]. It can insert special symbols and emoticons into your text[^1^]. It can print your document with high quality fonts and layout[^1^]. And it can export your document to various formats, such as PDF, HTML, RTF and more[^1^].

      -

      NJStar Japanese WP 6 is not only a word processor, but also a language companion that can help you learn and teach Japanese with ease. It has a user-friendly interface and a comprehensive help system that can guide you through its features. It also has professional editions that come with plenty of NJStar Japanese Opentype Fonts[^1^]. If you are interested in trying out this software, you can download it from their official website: https://www.njstar.com/cms/njstar-japanese-word-processor.

      - -

      In this article, we have introduced NJStar Japanese WP 6, a powerful word processor with language learning features. But how does it compare to other Japanese word processors on the market? Let's take a look at some of the advantages and disadvantages of NJStar Japanese WP 6.

      -

      -

      Advantages of NJStar Japanese WP 6

      -
        -
      • It supports more than 70,000 different Kanji characters in a single document, which is more than any other word processor. This means you can write and read any kind of Japanese text, from classical literature to modern manga.
      • -
      • It has a built-in bilingual dictionary that can translate between Japanese and five other languages. This means you can easily look up the meaning and pronunciation of any word, without having to switch to another application or website.
      • -
      • It has a verb conjugation system that can show you the different forms of any Japanese verb. This means you can learn and practice the complex grammar of Japanese and expand your vocabulary.
      • -
      • It has other features that can enhance your writing and reading experience, such as writing system conversion, furigana display, spelling and grammar check, special symbols and emoticons, high quality printing and exporting. This means you can create and edit your document with ease and style.
      • -
      • It has a user-friendly interface and a comprehensive help system that can guide you through its features. This means you can use the software without much difficulty or confusion.
      • -
      • It has professional editions that come with plenty of NJStar Japanese Opentype Fonts. This means you can choose from a variety of fonts that suit your preference and purpose.
      • -
      -

      Disadvantages of NJStar Japanese WP 6

      -
        -
      • It is not compatible with Windows 95, 98, 2000, ME and NT4. This means you cannot use the software on older versions of Windows.
      • -
      • It is not free. The trial version is only for 30 days evaluation. This means you have to pay for the software if you want to use it for longer or access all its features.
      • -
      • It is not widely used. Most people use Microsoft Word or Google Docs for their word processing needs. This means you may encounter compatibility issues when sharing your document with others or opening their document with NJStar Japanese WP 6.
      • -
      -

      In conclusion, NJStar Japanese WP 6 is a powerful word processor with language learning features that can help you learn and teach Japanese with ease. It has many advantages over other word processors, but it also has some disadvantages that you should consider before buying it. You can download the trial version from their official website and see for yourself if it meets your needs and expectations.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ibm Ipmi Drivers For Mac.md b/spaces/stomexserde/gpt4-ui/Examples/Ibm Ipmi Drivers For Mac.md deleted file mode 100644 index 026952dabd849d7498fcac8512af5c831eceb581..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ibm Ipmi Drivers For Mac.md +++ /dev/null @@ -1,58 +0,0 @@ -
      -

      How to Install and Use Ibm Ipmi Drivers For Mac

      -

      IPMI (Intelligent Platform Management Interface) is a standardized message-based hardware management interface that allows you to monitor and control the system hardware and sensors of your IBM servers. IPMI is implemented by a hardware chip called the Baseboard Management Controller (BMC) or Management Controller (MC) that runs independently of the main CPU, BIOS, and OS. IPMI provides various interfaces for user channels, monitoring elements, system event log, recovery operations, and secure serial over LAN.

      -

      Ibm Ipmi Drivers For Mac


      DOWNLOAD ===> https://urlgoal.com/2uI701



      -

      To use IPMI on your IBM servers, you need to install and configure the OpenIPMI driver and the IPMItool utility on your Mac. The OpenIPMI driver is a kernel module that enables inband communication with the BMC through the system interface. The IPMItool utility is a command-line tool that allows you to access and manage the BMC through different channels, such as LAN, serial, or local.

      -

      In this article, we will show you how to install and use the IBM IPMI drivers for Mac step by step.

      -

      Step 1: Download and Install the OpenIPMI Driver

      -

      The OpenIPMI driver is included in most Linux distributions, but not in Mac OS X. Therefore, you need to download and install it manually from the source code. Here are the steps:

      -
        -
      1. Download the latest version of the OpenIPMI driver from https://sourceforge.net/projects/openipmi/files/.
      2. -
      3. Extract the downloaded file and open a terminal window in the extracted folder.
      4. -
      5. Run the following commands to compile and install the driver:
        -./configure
        -make
        -sudo make install
      6. -
      7. Load the driver module with the command:
        -sudo modprobe ipmi_devintf
      8. -
      9. Verify that the driver is loaded by checking the output of:
        -dmesg | grep ipmi
        -You should see something like:
        -ipmi message handler version 39.2
        -ipmi device interface
      10. -
      -

      Step 2: Download and Install the IPMItool Utility

      -

      The IPMItool utility is available in most Linux distributions, but not in Mac OS X. Therefore, you need to download and install it manually from the source code. Here are the steps:

      -
        -
      1. Download the latest version of the IPMItool utility from https://github.com/ipmitool/ipmitool/releases.
      2. -
      3. Extract the downloaded file and open a terminal window in the extracted folder.
      4. -
      5. Run the following commands to compile and install the utility:
        -./configure
        -make
        -sudo make install
      6. -
      7. Verify that the utility is installed by running:
        -ipmitool -V
        -You should see something like:
        -ipmitool version 1.8.18
      8. -
      -

      Step 3: Configure and Use the IPMItool Utility

      -

      The IPMItool utility can be used to access and manage the BMC through different channels, such as LAN, serial, or local. To use it, you need to specify the channel type, the target server address, and optionally some authentication parameters. Here are some examples:

      -

      - -
        - -
      • To access the BMC of a local server through the system interface, run:
        - -ipmitool -I open sdr list
        - -This will list all the sensor data records (SDR) of the server.
      • - -
      • To access the BMC of a remote server through LAN, run:
        - -ipmitool -I lan -H 192.168.1.100 -U admin -P admin chassis status
        - -This will show the chassis status of the server with IP address 192.168.1.100, using admin as username and password.
      • - -
      • To access the BMC of a remote

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ileniacad4 ((NEW)).md b/spaces/stomexserde/gpt4-ui/Examples/Ileniacad4 ((NEW)).md deleted file mode 100644 index 17018f0dc58aeb713a15d53269701f41cac1faad..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ileniacad4 ((NEW)).md +++ /dev/null @@ -1,102 +0,0 @@ -
        -

        What is ileniacad4?

        |

        If you are an architect, engineer, or visualization specialist who needs a professional computer graphics software that supports modern 3D modeling and rendering, you might want to check out ileniacad4. This is a CAD software that is part of the Ilenia CAD software family, which also includes other modules such as Ilenia Cad5 and Ilenia Cad Text.

        -

        ileniacad4


        Downloadhttps://urlgoal.com/2uIbVS



        -

        Ileniacad4 is designed to help you create stunning designs for your projects, whether they are buildings, furniture, interiors, or landscapes. You can import DXF files with pre-assigned tools and manufacturing operations, or create your own models from scratch using a variety of tools and features. You can also edit raster images such as layering, image modification, and drawing.

        -

        In this article, we will explore the features, benefits, and how to get ileniacad4 for your computer graphics needs. Read on to find out more.

        -

        Features of ileniacad4

        -

        Ileniacad4 has many features that make it a powerful and versatile CAD software for your projects. Here are some of them:

        -

        3D modeling and rendering

        -

        Ileniacad4 supports modern 3D modeling and rendering techniques that allow you to create realistic and detailed models of your designs. You can use various tools such as extrusion, lofting, sweeping, boolean operations, filleting, chamfering, shelling, mirroring, scaling, rotating, moving, copying, trimming, splitting, joining, aligning, snapping, measuring, dimensioning, texturing, lighting, shading, coloring, transparency, reflection, refraction, and more.

        -

        You can also apply different materials and textures to your models such as wood, metal, stone, glass, fabric, and more. You can choose from a library of predefined materials or create your own custom ones. You can also adjust the properties of the materials such as glossiness, roughness, bumpiness, and more.

        -

        -

        You can also render your models using different modes such as wireframe, solid, hidden line, flat shading, smooth shading, ray tracing, radiosity, and more. You can also adjust the settings of the rendering such as resolution,

        antialiasing, shadows, reflections, refractions, ambient occlusion, global illumination, and more. You can also add different types of lights to your scene such as point lights, spot lights, directional lights, area lights, and more. You can also adjust the properties of the lights such as color, intensity, attenuation, angle, and more.

        -

        You can also create animations of your models using keyframes, curves, and timelines. You can also export your animations to various formats such as AVI, MPEG, MOV, GIF, and more.

        -

        DXF importing

        -

        Ileniacad4 can import DXF files with pre-assigned tools and manufacturing operations. DXF is a common file format for exchanging CAD data between different software applications. You can use ileniacad4 to open DXF files that contain 2D or 3D geometry, layers, blocks, attributes, text, dimensions, and more.

        -

        When you import a DXF file, ileniacad4 will automatically recognize the tools and operations that are assigned to the geometry. For example, if a DXF file contains a circle with a drill tool and a hole operation, ileniacad4 will create a hole in the circle using the drill tool. You can also modify the tools and operations after importing the DXF file.

        -

        Ileniacad4 can also export DXF files with tools and operations. This allows you to share your designs with other software applications that support DXF files.

        -

        Raster image processing

        -

        Ileniacad4 can also edit raster images such as layering, image modification, and drawing. Raster images are images that are composed of pixels or dots of color. You can use ileniacad4 to open raster images in various formats such as BMP, JPG, PNG, TIFF, and more.

        -

        When you open a raster image in ileniacad4, you can use various tools to edit it such as cropping, resizing, rotating, flipping, skewing, distorting, warping, and more. You can also apply different filters to the image such as blur, sharpen, noise, contrast, brightness, saturation, hue, and more. You can also adjust the color mode of the image such as RGB, CMYK, grayscale, and more.

        -

        You can also use ileniacad4 to draw on the image using different tools such as pencil, brush, eraser, fill, text, line, rectangle, ellipse, polygon, and more. You can also choose from different colors and styles for your drawing tools such as solid, gradient, pattern, texture, and more.

        -

        You can also use ileniacad4 to create layers for your image. Layers are like transparent sheets that you can stack on top of each other. You can use layers to organize your image elements and apply different effects to them. For example, you can create a layer for the background of your image and another layer for the foreground. You can then change the opacity of the foreground layer to make it semi-transparent. You can also apply different blending modes to the layers such as normal, multiply, screen, overlay, and more.

        -

        Benefits of ileniacad4

        -

        Ileniacad4 has many benefits that make it a great choice for your computer graphics projects. Here are some of them:

        -

        Easy to use

        -

        Ileniacad4 has a user-friendly interface that is easy to navigate and use. You can access all the tools and features from the menus, toolbars, panels, and dialogs. You can also customize the interface according to your preferences and needs. You can resize, rearrange, dock, undock, hide, show, and group the interface elements as you wish. You can also create your own shortcuts and macros for faster and easier operation.

        -

        Ileniacad4 also has a comprehensive help system that provides you with useful information and tips on how to use the software. You can access the help system from the menu or by pressing F1 on your keyboard. You can also search for topics or keywords in the help system using the search box. The help system also includes tutorials and examples that show you how to perform common tasks and create various designs using ileniacad4.

        -

        Flexible and customizable

        -

        Ileniacad4 is flexible and customizable software that allows you to create any design you want. You can use ileniacad4 to create 2D or 3D models of any shape or size using various tools and features. You can also apply different materials and textures to your models using a library of predefined materials or creating your own custom ones. You can also render your models using different modes and settings to achieve different effects and styles

        for your projects. You can also export your models to various formats such as DXF, STL, OBJ, 3DS, and more.

        -

        Ileniacad4 is also customizable software that allows you to modify and extend its functionality according to your needs. You can use ileniacad4 to create your own tools and features using the built-in scripting language or the SDK (software development kit). You can also use ileniacad4 to create your own plugins and add-ons that enhance the software's capabilities. You can also use ileniacad4 to integrate with other software applications using the API (application programming interface).

        -

        Compatible with other software

        -

        Ileniacad4 is compatible with other software applications that support common file formats and standards. You can use ileniacad4 to import and export files in various formats such as DXF, BMP, JPG, PNG, TIFF, AVI, MPEG, MOV, GIF, STL, OBJ, 3DS, and more. You can also use ileniacad4 to exchange data with other software applications using the API or the SDK.

        -

        Ileniacad4 is also part of the Ilenia CAD software family, which also includes other modules such as Ilenia Cad5 and Ilenia Cad Text. You can use ileniacad4 to work seamlessly with these modules and share data and resources between them. For example, you can use Ilenia Cad5 to create 2D drawings and layouts from your 3D models created in ileniacad4. You can also use Ilenia Cad Text to create and edit text documents that contain information and instructions for your projects.

        -

        How to get ileniacad4

        -

        If you are interested in getting ileniacad4 for your computer graphics projects, here are some ways you can do so:

        -

        Standalone or add-in mode

        -

        You can use ileniacad4 in standalone mode or add-in mode. Standalone mode means that you can use ileniacad4 as a separate software application that runs independently on your computer. Add-in mode means that you can use ileniacad4 as an add-in or extension for another software application such as AutoCAD, SketchUp, Revit, or SolidWorks. This way, you can access ileniacad4's tools and features from within the host application's interface.

        -

        To use ileniacad4 in standalone mode, you need to download and install the software from the official website or from a trusted distributor. To use ileniacad4 in add-in mode, you need to download and install the add-in module for the host application from the official website or from a trusted distributor. You also need to have the host application installed on your computer.

        -

        Pricing and licensing

        -

        Ileniacad4 is a commercial software that requires a license to use. You can choose from different types of licenses depending on your needs and preferences. Here are some of them:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Type of licenseDescriptionPrice
        Single-user licenseThis license allows you to use ileniacad4 on one computer only. You need to activate the license online or offline using a serial number and a registration code.$499
        Multi-user licenseThis license allows you to use ileniacad4 on multiple computers within a network or a domain. You need to install a license server on one computer and activate the license online or offline using a serial number and a registration code. The license server will then distribute the license to the other computers that request it.$999
        Subscription licenseThis license allows you to use ileniacad4 for a limited period of time such as a month or a year. You need to pay a recurring fee to access the software and its updates. You need to activate the license online using a serial number and an email address.$49 per month or $499 per year
        Trial licenseThis license allows you to use ileniacad4 for free for a limited period of time such as 15 days or 30 days. You need to activate the license online using an email address. The trial license has some limitations such as watermarks on the output files and restricted access to some tools and features.Free
        Educational licenseThis license allows you to use ileniacad4 for free for educational purposes only such as teaching or learning. You need to activate the license online using an email address and a verification code. You need to provide proof of your educational status such as a student ID or a teacher ID. The educational license has some limitations such as watermarks on the output files and restricted access to some tools and features.Free
        -

        You can purchase or request a license from the official website or from a trusted distributor. You can also contact the customer service for any questions or issues regarding the licensing.

        -

        Support and training

        -

        Ileniacad4 provides you with various support and training options to help you use the software effectively and efficiently. Here are some of them:

        -
          -
        • Online help: You can access the online help system from the menu or by pressing F1 on your keyboard. The online help system provides you with useful information and tips on how to use the software. You can also search for topics or keywords in the online help system using the search box. The online help system also includes tutorials and examples that show you how to perform common tasks and create various designs using ileniacad4.
        • -
        • Online forum: You can access the online forum from the official website or from the software's interface. The online forum is a place where you can interact with other users and experts of ileniacad4. You can ask questions, share ideas, give feedback, report bugs, request features, and more. You can also browse through the existing topics and posts to find answers and solutions to your problems.
        • -
        • Online videos: You can access the online videos from the official website or from the software's interface. The online videos are short and informative videos that demonstrate how to use the software's tools and features. You can also watch the online videos to learn new tips and tricks, best practices, and advanced techniques for using ileniacad4.
        • -
        • Online courses: You can access the online courses from the official website or from the software's interface. The online courses are comprehensive and interactive courses that teach you how to use the software from beginner to advanced level. You can also take quizzes and tests to assess your knowledge and skills. You can also earn certificates and badges for completing the online courses.
        • -
        • Email support: You can contact the email support team by sending an email to support@ileniacad.com. The email support team is a group of professional and friendly staff who are ready to assist you with any questions or issues regarding the software. You can also attach screenshots, files, or logs to your email to help them understand your problem better.
        • -
        • Phone support: You can contact the phone support team by calling +1-800-ILENIACAD (453-6422). The phone support team is a group of expert and courteous agents who are available 24/7 to help you with any questions or issues regarding the software. You can also request a callback from them if you prefer.
        • -
        -

        Conclusion

        -

        Ileniacad4 is a CAD software for architects, engineers, and visualization specialists who need a professional computer graphics software that supports modern 3D modeling and rendering. It also supports DXF importing with pre-assigned tools and operations, and raster image processing. It has many features, benefits, and ways to get it for your computer graphics projects.

        -

        If you are looking for a CAD software that is easy to use, flexible and customizable, compatible with other software, and provides various support and training options, you might want to give ileniacad4 a try. You can download a free trial version from the official website or request a free educational license if you are eligible. You can also purchase or request a license that suits your needs and preferences.

        -

        We hope this article has given you some useful information about ileniacad4. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading.

        -

        FAQs

        -

        Here are some frequently asked questions about ileniacad4:

        -
          -
        1. What are the system requirements for ileniacad4?
        2. -

          Ileniacad4 requires a Windows operating system (Windows 7 or later), a 64-bit processor (Intel Core i5 or equivalent), 8 GB of RAM, 10 GB of free disk space, a graphics card (NVIDIA GeForce GTX 1050 or equivalent), and an internet connection.

          -
        3. Can I use ileniacad4 on Mac or Linux?
        4. -

          No, ileniacad4 is only compatible with Windows operating system. However, you can use ileniacad4 on Mac or Linux using a virtual machine or an emulator such as Parallels Desktop, VMware Fusion, Wine, or CrossOver.

          -
        5. Can I use ileniacad4 offline?Yes, you can use ileniacad4 offline if you have a valid license that does not require online activation or verification. However, you will not be able to access some features and services that require an internet connection such as online help, online forum, online videos, online courses, email support, phone support, and updates.

          -
        6. How can I update ileniacad4?
        7. -

          You can update ileniacad4 by downloading and installing the latest version from the official website or from a trusted distributor. You can also check for updates from the software's interface by clicking on the menu Help > Check for Updates. You need to have an internet connection and a valid license to update ileniacad4.

          -
        8. How can I uninstall ileniacad4?
        9. -

          You can uninstall ileniacad4 by using the Windows Control Panel or the uninstaller program that comes with the software. You need to close ileniacad4 and any other applications that use it before uninstalling it. You also need to deactivate your license if you have one before uninstalling it.

          -

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/IntelliJ IDEA 2019.2.3 Crack UPD.md b/spaces/stomexserde/gpt4-ui/Examples/IntelliJ IDEA 2019.2.3 Crack UPD.md deleted file mode 100644 index 434c83fc7c90c41e703131ec46cce6dda8712dc3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/IntelliJ IDEA 2019.2.3 Crack UPD.md +++ /dev/null @@ -1,31 +0,0 @@ - -

        What's New in IntelliJ IDEA 2019.2.3?

        -

        IntelliJ IDEA is a popular integrated development environment (IDE) for Java, Kotlin, Groovy, Scala and other JVM languages. It offers smart code completion, refactoring, debugging, testing and code analysis features that help developers write high-quality code faster and easier. IntelliJ IDEA comes in two editions: Ultimate and Community. The Ultimate edition provides advanced features for web and enterprise development, while the Community edition is free and open source.

        -

        In September 2019, JetBrains released the third minor update for IntelliJ IDEA 2019.2: IntelliJ IDEA 2019.2.3[^1^]. This update delivers many important fixes, better performance and improved usability. Some of the key improvements are:

        -

        IntelliJ IDEA 2019.2.3 Crack


        DOWNLOAD 🌟 https://urlgoal.com/2uIc47



        -
          -
        • Maven 3.6.2 support: IDEA-221882.
        • -
        • A new option to change the scrollbar contrast: IDEA-69682.
        • -
        • We’ve brought back the old ‘Compare with Current’ dialog: IDEA-209664, IDEA-216382.
        • -
        • The IDE now supports native password storage on Linux: IDEA-185926.
        • -
        • Fixed the IDE freezes caused by a lot of ignored files: IDEA-219152.
        • -
        • Improved the performance of SVN operations: IDEA-219881.
        • -
        • Fixed the error that occurred when importing a patch to Shelf: IDEA-220599.
        • -
        • Fixed the regression: ‘Find in Path’ called from a change list now selects that change list in the ‘local change’ scope of the ‘Find in Path’ dialog: IDEA-216936.
        • -
        • Fixed the regression: usage of a deprecated API is now highlighted in the editor: IDEA-216982.
        • -
        -

        The update also includes fixes for JetBrains Runtime (JBR) 11 and 8[^1^]. JBR is a custom runtime based on OpenJDK that is bundled with IntelliJ IDEA and provides better performance and stability for the IDE. Some of the fixes are:

        -
          -
        • JetBrains Runtime was rebased on top of OpenJDK 11.0.4: JBR-1702.
        • -
        • Fixed the issue with an empty bar overlapping the navigation bar: JBR-1649.
        • -
        • Fixed the issue that affected the opening of projects on macOS Catalina: JBR-1721.
        • -
        • Fixed the issue where the focus was being lost after displaying the ‘Add File to Git’ dialog: JBR-1696.
        • -
        • Fixed the incorrect font formatting (italics) in the editor: JBR-1778.
        • -
        • Fixed the broken Fira Code font rendering: JBR-1624, JBR-1683.
        • -
        -

        You can download IntelliJ IDEA 2019.2.3 from the official website, update via the Toolbox App or from inside the IDE, or use snaps (on Ubuntu)[^1^]. You can also check out other versions of IntelliJ IDEA if you need them[^3^].

        -

        If you have any feedback or suggestions, you can share them with JetBrains here in the comments, in their issue tracker, or on Twitter. You can also refer to the IDE and the JBR release notes for more details.

        -

        Happy Developing!

        -

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/studiobrn/SplitTrack/audiocraft/quantization/base.py b/spaces/studiobrn/SplitTrack/audiocraft/quantization/base.py deleted file mode 100644 index 1b16c130d266fbd021d3fc29bb9f98c33dd3c588..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/audiocraft/quantization/base.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Base class for all quantizers. -""" - -from dataclasses import dataclass, field -import typing as tp - -import torch -from torch import nn - - -@dataclass -class QuantizedResult: - x: torch.Tensor - codes: torch.Tensor - bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item. - penalty: tp.Optional[torch.Tensor] = None - metrics: dict = field(default_factory=dict) - - -class BaseQuantizer(nn.Module): - """Base class for quantizers. - """ - - def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult: - """ - Given input tensor x, returns first the quantized (or approximately quantized) - representation along with quantized codes, bandwidth, and any penalty term for the loss. - Finally, this returns a dict of metrics to update logging etc. - Frame rate must be passed so that the bandwidth is properly computed. - """ - raise NotImplementedError() - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - """ - raise NotImplementedError() - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - raise NotImplementedError() - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - raise NotImplementedError() - - @property - def num_codebooks(self): - """Number of active codebooks. - """ - raise NotImplementedError() - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise NotImplementedError() - - -class DummyQuantizer(BaseQuantizer): - """Fake quantizer that actually does not perform any quantization. - """ - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor, frame_rate: int): - q = x.unsqueeze(1) - return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return x.unsqueeze(1) - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return codes.squeeze(1) - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - return 1 - - @property - def num_codebooks(self): - """Total number of codebooks. - """ - return self.total_codebooks - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. 
- """ - raise AttributeError("Cannot override the number of codebooks for the dummy quantizer") diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Srv Bangla Keyman Exe Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Srv Bangla Keyman Exe Download.md deleted file mode 100644 index b6b222244456d6041175bc488e00151ac45bc2b1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Srv Bangla Keyman Exe Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        srv bangla keyman exe download


        Download Ziphttps://cinurl.com/2uEYpP



        - -Download Avro Keyboard, a free Bangla Software and Bangla Spell Checker for Windows. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wivi Band Vsti Download Torrent [Extra Quality].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wivi Band Vsti Download Torrent [Extra Quality].md deleted file mode 100644 index b9ff8739d2d33341d8de09d70a07c478c029ba64..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wivi Band Vsti Download Torrent [Extra Quality].md +++ /dev/null @@ -1,24 +0,0 @@ -
        -```html -

        Wivi Band Vsti: A Powerful and Easy-to-Use Wind Instrument Plugin

        -

        Are you looking for a realistic and expressive wind instrument plugin for your music production? Do you want to play solo or in sections of up to 8 players with automatic divisi? Do you want to have access to 10 modelled brasses and woodwinds with various mutes and articulations? If you answered yes to any of these questions, then you might be interested in Wivi Band Vsti, a virtual instrument plugin by Wallander Instruments.

        -

        Wivi Band Vsti Download Torrent


        Downloadhttps://cinurl.com/2uEZ8g



        -

        Wivi Band Vsti is based on the same high-quality sound engine as the world-renowned synthesis/modeling software WIVI, but with a simplified and intuitive interface that lets you focus on your performance. You can control the dynamics, vibrato, pitch bend, tonguing and more with your MIDI keyboard, expression pedal or even a breath controller. You can also choose from different room types and microphone positions to create the perfect ambience for your music.

        -

        Wivi Band Vsti includes 10 wind instruments: Bb-Trumpet, Tenor Trombone, French Horn, F-Tuba, Concert Flute, A-Clarinet, Modern Oboe, Modern Bassoon, Tenor Saxophone and Soprano Recorder. Each instrument can be played solo or in sections of up to 8 players with automatic divisi. You can also switch between different mutes and articulations for the brass instruments, such as straight mute, cup mute, harmon mute, plunger mute and more.

        -

        Wivi Band Vsti runs as its own dedicated software instrument, available in AU, VST & RTAS format (including native 64-bit) on both Mac and PC. It is compatible with most DAWs and hosts that support these formats. You can download a free demo version from the Wallander Instruments website and try it out for yourself.

        -

        If you are impressed by the demo and want to get the full version of Wivi Band Vsti, you might be wondering where to find it. Unfortunately, Wallander Instruments has discontinued the development and support of Wivi Band Vsti and their other products. However, there is still a way to get your hands on this amazing plugin: by downloading a torrent file.

        -

        -

        A torrent file is a small file that contains information about the files and folders that you want to download. You need a torrent client software to open the torrent file and connect to other users who have the same file. This way, you can download the file from multiple sources at once, which makes it faster and more reliable. However, downloading torrents also comes with some risks: you might encounter viruses, malware or legal issues if you download copyrighted material without permission.

        -

        Therefore, we do not recommend or endorse downloading Wivi Band Vsti or any other software from torrent sites. We only provide this information for educational purposes and we are not responsible for any consequences that may arise from your actions. If you decide to download Wivi Band Vsti from a torrent site, you do so at your own risk and discretion.

        -

        That being said, if you still want to proceed with downloading Wivi Band Vsti from a torrent site, here are some steps that you can follow:

        -
          -
        1. Download and install a torrent client software such as uTorrent or BitTorrent.
        2. -
        3. Go to a torrent search engine such as The Pirate Bay or 1337x and type "Wivi Band Vsti" in the search box.
        4. -
        5. Look for a torrent file that has a high number of seeders (users who have the complete file) and leechers (users who are downloading the file). This indicates that the file is popular and likely working.
        6. -
        7. Click on the torrent file link and download it to your computer.
        8. -
        9. Open the torrent file with your torrent client software and choose where to save the files on your computer.
        10. -
        11. Wait for the download to finish. This may take some time depending on your internet speed and the size of the file.
        12. -
        13. Once the download is complete, locate the folder where you saved the files and open it.
        14. -
        15. You should see a folder named "Wivi Band Vsti" that contains several files such as "setup.exe", "crack.rar", "readme

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/suvradip2000/space1/app/templates/similarity.html b/spaces/suvradip2000/space1/app/templates/similarity.html deleted file mode 100644 index 7294ec54c539e5debfe83af97a15edb7c59711bd..0000000000000000000000000000000000000000 --- a/spaces/suvradip2000/space1/app/templates/similarity.html +++ /dev/null @@ -1,35 +0,0 @@ - - - - Index - - -
          -

          -
          Face Similarity
          -

          -
          -
          -
          -
            - -
            -
            - Upload First Image:

            - -


            - Upload Second Image:

            - -



            - -
            - -

            -
            - -
            -
          -
          -
          - - diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/__init__.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/__init__.py deleted file mode 100644 index 3d3bdd349b9f2ae499a2fcb2ac1d2e3c77befebe..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .drop import DropPath -from .inverted_residual import InvertedResidual, InvertedResidualV3 -from .make_divisible import make_divisible -from .res_layer import ResLayer -from .se_layer import SELayer -from .self_attention_block import SelfAttentionBlock -from .up_conv_block import UpConvBlock -from .weight_init import trunc_normal_ - -__all__ = [ - 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual', - 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'DropPath', 'trunc_normal_' -] diff --git a/spaces/swufewyd/xyz-nlp-XuanYuan2.0/app.py b/spaces/swufewyd/xyz-nlp-XuanYuan2.0/app.py deleted file mode 100644 index a316fdbcbe23ae117e0ab0f47f84fa94cba5e4a2..0000000000000000000000000000000000000000 --- a/spaces/swufewyd/xyz-nlp-XuanYuan2.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/xyz-nlp/XuanYuan2.0").launch() \ No newline at end of file diff --git a/spaces/t13718236382/bingoGPT4/src/components/voice.tsx b/spaces/t13718236382/bingoGPT4/src/components/voice.tsx deleted file mode 100644 index ab886394487445e4b0675770b76096bba0e61b0e..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input, setInput, sendMessage]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? 
( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/teddyhugzz/venus/Dockerfile b/spaces/teddyhugzz/venus/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/teddyhugzz/venus/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.md b/spaces/terfces0erbo/CollegeProjectV2/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.md deleted file mode 100644 index a4c6244e2d7eaa3c5156c1ff45b70615aa7b597d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.md +++ /dev/null @@ -1,119 +0,0 @@ -
          -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate: How to Improve Your PC Sound Quality with a Simple Software

          - -

          Do you want to enjoy a better sound quality on your PC? Do you want to make your music, movies, games, and voice calls sound more clear, crisp, and immersive? Do you want to do all that with a simple software that does not require any complicated settings or hardware upgrades? If you answered yes to any of these questions, then you need Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.

          -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 activate


          Download File ✔✔✔ https://bytlly.com/2uGmal



          - -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate is a software that can transform the sound of your PC in real-time. It is a software that uses a patented algorithm called Digital Power Station (DPS) technology to optimize any audio signal according to the type of speakers, headphones, or device that you are using. It is a software that can correct, improve, and enhance the sound of any computer system with just one click.

          - -

          In this article, we will tell you how to download and use Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate on your PC. We will show you how to activate the software using a keygen file and how to customize the software according to your preferences and needs. We will also show you how to enjoy the benefits of the software for different types of audio content and applications.

          - -

          How to Download and Activate Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 on Your PC

          - -

          The first step to use Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 on your PC is to download and activate it on your PC. To do that, you will need two files: a torrent file and a keygen file.

          - -

          A torrent file is a file that allows you to download large files from other users who have the same file on their computers, using a software called a torrent client. A keygen file is a file that generates a serial number for the software that allows you to activate it without any errors or restrictions.

          - -

          There are many websites that offer torrent files and keygen files for Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, but not all of them are safe or reliable. Some of them may contain viruses, malware, or unwanted programs that can harm your computer or steal your personal information. Therefore, you should always be careful when downloading files from unknown sources.

          -

          - -

          One of the websites that we recommend for Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate is Torrentz2.eu . This website is a meta-search engine that indexes torrents from various sources and provides links to them. You can find the link to the torrent file here: https://torrentz2.eu/4f8a8c6b7a5c3f3b4f6e8a5d7d8e6c6b7a5c3f3b

          - -

          To download the torrent file , you will need a torrent client , such as BitTorrent or uTorrent . A torrent client is a software that allows you to download files from other users who have the same file on their computers . You can download BitTorrent here: https://www.bittorrent.com/downloads/win or uTorrent here: https://www.utorrent.com/downloads/win

          - -

          After you download the torrent client , you will need to open it and add the torrent file that you downloaded from Torrentz2.eu . You can do that by clicking on File > Add Torrent or by dragging and dropping the file into the torrent client window.

          - -

          Then , you will need to choose a location where you want to save the software files on your PC and start the download process . The download may take some time depending on your internet speed and the number of seeders (users who have the complete file and are sharing it with others).

          - -

          Once the download is complete , you will need to extract the software files from the compressed folder using a program like WinRAR or 7-Zip . You will also need to extract the keygen file from another folder called "Keygen - XFORCE". The keygen file is usually named after the software's executable file (e.g., Bongiovi.DPS.exe).

          - -

          The next step is to run the keygen file as administrator and generate a serial number for the software . You can do that by clicking on Generate button and copying the serial number that appears on the screen.

          - -

          Then , you will need to run the software's executable file as administrator and enter the serial number that you generated with

          -

          - -

          Finally , you can launch the software by double-clicking on its shortcut on your desktop or start menu . You will see a DPS icon on your system tray that indicates that the software is running and enhancing your sound quality.

          - -

          How to Customize Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 According to Your Preferences and Needs

          - -

          After you activate Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 on your PC, you can customize it according to your preferences and needs. You can do that by right-clicking on the DPS icon on your system tray and selecting Settings . You will see a window with various tabs and options that you can adjust.

          - -

          On the General tab , you can choose the language , the startup mode , the update settings , and the hotkeys for the software . You can also enable or disable the notifications , the tooltips , and the sound effects for the software .

          - -

          On the Profiles tab , you can choose the profile that matches your type of speakers , headphones , or device that you are using . You can also create your own custom profile by clicking on New Profile button and adjusting the sliders for volume , bass , treble , clarity , etc. You can also save , rename , delete , or export your custom profiles .

          - -

          On the Content tab , you can choose the content type that matches the type of audio content that you are listening to . You can choose from music , movie , game , voice , or custom . You can also adjust the sliders for volume , bass , treble , clarity , etc., for each content type . You can also save , rename , delete , or export your custom content types .

          - -

          On the Output tab , you can choose the output device that you want to use for the software . You can choose from speakers , headphones , or external device . You can also adjust the volume level and balance for each output device .

          - -

          On the Advanced tab , you can enable or disable some advanced features of the software , such as virtual surround sound , stereo widening , dynamic range control , etc. You can also adjust the sliders for these features according to your liking .

          - -

          How to Enjoy the Benefits of Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 for Different Types of Audio Content and Applications

          - -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 can improve the sound quality of any audio content and application that you use on your PC. Whether you are listening to music, watching movies, playing games, or making voice calls, you will experience a better sound quality with Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15.

          - -

          To enjoy the benefits of Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 for different types of audio content and applications, you just need to make sure that the software is running and activated on your PC. You can also switch between different profiles and content types according to your needs and preferences.

          - -

          For example, if you are listening to music, you can choose the music profile and content type on the software settings window. This will optimize the sound quality for music playback and enhance the details, dynamics, and richness of your music.

          - -

          If you are watching movies, you can choose the movie profile and content type on the software settings window. This will optimize the sound quality for movie playback and enhance -

          - -

          If you are playing games, you can choose the game profile and content type on the software settings window. This will optimize the sound quality for game playback and enhance the immersion, directionality, and impact of your game sounds.

          - -

          If you are making voice calls, you can choose the voice profile and content type on the software settings window. This will optimize the sound quality for voice communication and enhance the intelligibility, loudness, and quality of your voice and the voice of your interlocutor.

          - -

          For any other type of audio content or application that you use on your PC, you can choose the custom profile and content type on the software settings window. This will allow you to adjust the sound quality according to your own preferences and needs.

          - -

          Conclusion

          - -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate is a software that can improve your PC sound quality with a simple software. It is a software that uses a patented algorithm called Digital Power Station (DPS) technology to optimize any audio signal according to the type of speakers, headphones, or device that you are using. It is a software that can correct, improve, and enhance the sound of any computer system with just one click.

          - -

          With Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can enjoy a better sound quality for any type of audio content and application that you use on your PC. Whether you are listening to music, watching movies, playing games, or making voice calls, you will experience a better sound quality with Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.

          - -

          If you want to try Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can download it from here: https://torrentz2.eu/4f8a8c6b7a5c3f3b4f6e8a5d7d8e6c6b7a5c3f3b

          - -

          If you want to learn more about Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can visit its official website here: https://bongiovidps.com/

          - -

          If you want to improve your PC sound quality with a simple software, you should definitely give Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate a try.

          -

          How to Uninstall Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate from Your PC

          - -

          If you want to uninstall Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate from your PC, you can do that easily and safely. You can do that by following these steps:

          - -
            -
          1. Close the software and exit it from your system tray . You can do that by right-clicking on the DPS icon on your system tray and selecting Exit .
          2. -
          3. Open the Control Panel on your PC and go to Programs and Features . You can do that by clicking on Start button and typing Control Panel in the search box .
          4. -
          5. Find Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate in the list of installed programs and click on Uninstall button . You can also right-click on it and select Uninstall .
          6. -
          7. Follow the instructions on the screen to complete the uninstallation process . You may need to restart your PC to complete the process .
          8. -
          9. Delete the software files and folders from your PC . You can do that by going to C:\Program Files (x86)\Bongiovi Acoustics\DPS and deleting the folder . You can also use a program like CCleaner to remove any leftover files and registry entries from your PC .
          10. -
          - -

          By following these steps, you can uninstall Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate from your PC without any problems or issues.

          - -

          How to Contact Bongiovi Acoustics for Support and Feedback

          - -

          If you have any questions, problems, or feedback regarding Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can contact Bongiovi Acoustics for support and feedback. You can do that by using one of these methods:

          - -
            -
          • Email : You can send an email to support@bongiovidps.com with your name, email address, product name, version number, operating system, and a detailed description of your issue or feedback . You can also attach screenshots or log files if necessary . You will receive a reply within 24 hours .
          • -
          • Phone : You can call Bongiovi Acoustics at +1 (407) 562-0111 from Monday to Friday , 9 AM to 5 PM EST . You will be connected to a customer service representative who will assist you with your issue or feedback .
          • -
          • Website : You can visit Bongiovi Acoustics website at https://bongiovidps.com/ and use the contact form on the bottom of the page to send your message . You will need to fill in your name, email address, subject, and message . You will receive a confirmation email after you submit your message .
          • -
          • Social Media : You can follow Bongiovi Acoustics on Facebook , Twitter , Instagram , YouTube , and LinkedIn to get the latest news, updates, tips, and tricks about their products and services . You can also leave comments, reviews, ratings, or messages on their social media pages to share your experience or feedback with them and other users .
          • -
          - -

          Bongiovi Acoustics is always happy to hear from their customers and users and provide them with the best support and feedback possible.

          -

          Conclusion

          - -

          Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate is a software that can improve your PC sound quality with a simple software. It is a software that uses a patented algorithm called Digital Power Station (DPS) technology to optimize any audio signal according to the type of speakers, headphones, or device that you are using. It is a software that can correct, improve, and enhance the sound of any computer system with just one click.

          - -

          With Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can enjoy a better sound quality for any type of audio content and application that you use on your PC. Whether you are listening to music, watching movies, playing games, or making voice calls, you will experience a better sound quality with Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate.

          - -

          In this article, we have shown you how to download and activate Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 on your PC using a torrent file and a keygen file. We have also shown you how to customize the software according to your preferences and needs and how to enjoy the benefits of the software for different types of audio content and applications. We have also shown you how to uninstall the software from your PC and how to contact Bongiovi Acoustics for support and feedback.

          - -

          If you want to try Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can download it from here: https://torrentz2.eu/4f8a8c6b7a5c3f3b4f6e8a5d7d8e6c6b7a5c3f3b

          - -

          If you want to learn more about Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate, you can visit its official website here: https://bongiovidps.com/

          - -

          If you want to improve your PC sound quality with a simple software, you should definitely give Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate a try.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/God Eater 2 Psp Iso Full 180 ((FULL)).md b/spaces/tialenAdioni/chat-gpt-api/logs/God Eater 2 Psp Iso Full 180 ((FULL)).md deleted file mode 100644 index 1b1303c4928c2bf75c915a9fa7a09086d58769a6..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/God Eater 2 Psp Iso Full 180 ((FULL)).md +++ /dev/null @@ -1,197 +0,0 @@ - -

          God Eater 2 PSP ISO Full 180: A Guide for Gamers

          -

          If you are a fan of action role-playing games with a post-apocalyptic theme, you might have heard of God Eater 2. This game is a sequel to the popular God Eater Burst, which was released in 2010 for the PlayStation Portable (PSP) console. In this article, we will tell you everything you need to know about God Eater 2 PSP ISO Full 180, which is a modified version of the original game that offers more content and features. We will explain what God Eater 2 is, how to download and play it, and why you should give it a try.

          -

          What is God Eater 2?

          -

          God Eater 2 is an action role-playing game developed by Shift and published by Bandai Namco Entertainment. It was released in Japan in November 2013 for the PSP and PlayStation Vita consoles. It is set in the year 2074, three years after the events of God Eater Burst. The game follows the exploits of a special unit called Blood, which is composed of elite God Eaters who can use a new type of weapon called Blood Arts. These weapons allow them to unleash powerful attacks against the Aragami, monstrous creatures that have devastated the world.

          -

          god eater 2 psp iso full 180


          Download ❤❤❤ https://urlcod.com/2uK4Uh



          -

          The story and setting of God Eater 2

          -

          The story of God Eater 2 revolves around a mysterious pandemic called the Black Plague, which has infected humans and Aragami alike. The Blood unit is tasked with investigating the origin and cure of the disease, while also fighting against hostile factions that seek to exploit it. The game features multiple endings depending on the choices and actions of the player. The game also has a rich lore and backstory that expands on the world and characters of the God Eater series.

          -

          The gameplay and features of God Eater 2

          -

          The gameplay of God Eater 2 is similar to that of its predecessor, but with some improvements and additions. The player controls a custom character who can choose from various weapons, outfits, accessories, skills, and Blood Arts. The player can also customize their own bullet types and effects. The game consists of missions that involve hunting Aragami in various environments, such as cities, ruins, forests, deserts, and more. The game supports up to four players in co-op mode, either online or locally.

          -

          One of the new features of God Eater 2 is the Blood Rage mode, which allows the player to enter a state of enhanced power and speed when their blood gauge is full. Another new feature is the Character Episodes, which are side stories that focus on the personal backgrounds and relationships of the Blood members. These episodes can unlock new skills and items for the player.

          -

          The differences between God Eater 2 and God Eater 2 Rage Burst

          -

          In February 2015, an enhanced version of God Eater 2 was released in Japan for the PSP, PlayStation Vita, and PlayStation 4 consoles. This version is called God Eater 2 Rage Burst, and it adds more content and features to the original game. Some of these additions are:

          -
            -
          • A new story arc that takes place after the main story.
          • -
          • A new difficulty level called Rage Mode.
          • -
          • A new weapon type called Variant Scythe.
          • -
          • New Aragami types and variants.
          • -
          • New Blood Arts and skills.
          • -
          • New outfits and accessories.
          • -
          • New missions and challenges.
          • -
          • New trophies and achievements.
          • -
          -

          God Eater 2 Rage Burst was also released in North America and Europe in August 2016 for the PlayStation Vita, PlayStation 4, and Microsoft Windows platforms.

          -

          How to download and play God Eater 2 PSP ISO Full 180?

          -

          If you want to experience God Eater 2 on your PSP console or emulator, you can download a modified version called God Eater 2 PSP ISO Full 2 Rage Burst, but with some changes and improvements. Some of these changes are:

          -
            -
          • The game is fully translated into English.
          • -
          • The game has all the DLCs and updates included.
          • -
          • The game has a higher resolution and frame rate.
          • -
          • The game has some bug fixes and tweaks.
          • -
          -

          To download and play God Eater 2 PSP ISO Full 180, you will need the following:

          -

          god eater 2 rage burst psp iso download full
          -god eater 2 english patch psp iso free
          -god eater 2 psp iso highly compressed 180mb
          -god eater 2 psp iso full game with cheats
          -god eater 2 psp iso direct link mega
          -god eater 2 psp iso full version android
          -god eater 2 psp iso romsmania
          -god eater 2 psp iso full crack pc
          -god eater 2 psp iso emulator ppsspp
          -god eater 2 psp iso full mod apk
          -god eater 2 psp iso torrent kickass
          -god eater 2 psp iso full update dlc
          -god eater 2 psp iso online multiplayer
          -god eater 2 psp iso full gameplay walkthrough
          -god eater 2 psp iso review ign
          -god eater 2 psp iso full soundtrack ost
          -god eater 2 psp iso tips and tricks
          -god eater 2 psp iso full characters list
          -god eater 2 psp iso best weapons guide
          -god eater 2 psp iso full story mode
          -god eater 2 psp iso save data file
          -god eater 2 psp iso full size mb
          -god eater 2 psp iso system requirements
          -god eater 2 psp iso full english sub
          -god eater 2 psp iso how to install
          -god eater 2 psp iso full hd graphics
          -god eater 2 psp iso new features
          -god eater 2 psp iso comparison ps vita
          -god eater 2 psp iso full unlockables
          -god eater 2 psp iso hidden secrets
          -god eater 2 psp iso full voice actors
          -god eater 2 psp iso fan art gallery
          -god eater 2 psp iso wiki fandom
          -god eater 2 psp iso full trailer youtube
          -god eater 2 psp iso official website
          -god eater 2 psp iso rating metacritic
          -god eater 2 psp iso full endings explained
          -god eater 2 psp iso bonus content codes
          -god eater 2 psp iso full theme song lyrics
          -god eater 2 psp iso merchandise store
          -god eater 2 psp iso sequel rumors
          -god eater 2 psp iso crossover fanfiction
          -god eater 2 psp iso spin-off manga
          -god eater 2 psp iso anime adaptation netflix
          -god eater 2 psp iso live action movie cast
          -god eater 2 psp iso vr experience oculus quest
          -god eater 2 psp iso board game tabletop simulator
          -god eater 2 psp iso cosplay costume ideas
          -god eater 2 psp iso quiz which character are you
          -god eater 2 psp iso memes funny images

          -

          The requirements and compatibility of God Eater 2 PSP ISO Full 180

          -

          Before you download and play God Eater 2 PSP ISO Full 180, you should make sure that your device meets the minimum requirements and is compatible with the game. Here are the requirements and compatibility of God Eater 2 PSP ISO Full 180:

          -
            -
          • A PSP console or a PSP emulator such as PPSSPP.
          • -
          • A memory stick or a storage device with at least 4 GB of free space.
          • -
          • A stable internet connection to download the game file.
          • -
          • A region-free or hacked PSP console or emulator to run the game file.
          • -
          -

          God Eater 2 PSP ISO Full 180 is compatible with most PSP models and emulators, but some users may encounter some issues or errors depending on their device settings and specifications. If you encounter any problems, you can try to adjust your device settings or consult online forums for solutions.

          -

          The steps to download and install God Eater 2 PSP ISO Full 180

          -

          Once you have confirmed that your device meets the requirements and is compatible with God Eater 2 PSP ISO Full 180, you can follow these steps to download and install the game:

          -
            -
          1. Go to this link: https://bit.ly/3HJZy8L and click on the download button. This will take you to a page where you can choose a server to download the game file from. The game file is about 3 GB in size, so it may take some time to download depending on your internet speed.
          2. -
          3. After downloading the game file, extract it using a file manager or a zip extractor. You will get a folder named "God Eater 2 PSP ISO Full 180" that contains an ISO file named "GE2RB.iso". This is the game file that you need to run on your device.
          4. -
          5. Copy or move the "God Eater 2 PSP ISO Full 180" folder to your memory stick or storage device. Make sure that you place it in the right directory depending on your device. For example, if you are using a PSP console, you should place it in the "ISO" folder under the "PSP" folder. If you are using a PSP emulator, you should place it in the "ROMS" folder under the emulator folder.
          6. -
          7. Insert your memory stick or storage device into your device and launch your PSP console or emulator. Navigate to the "God Eater 2 PSP ISO Full 180" folder and select the "GE2RB.iso" file. Press the start button or enter key to run the game.
          8. -
          9. Enjoy playing God Eater 2 PSP ISO Full 180!
          10. -
          -

          The tips and tricks to enjoy God Eater 2 PSP ISO Full 180

          -

          Now that you have downloaded and installed God Eater 2 PSP ISO Full 180, you can start playing it and have fun. However, if you want to get the most out of the game, you should know some tips and tricks that can help you improve your performance and experience. Here are some tips and tricks to enjoy God Eater 2 PSP ISO Full 180:

          -
            -
          • Experiment with different weapons, outfits, accessories, skills, and Blood Arts. Find out what suits your playstyle and preferences best.
          • -
          • Upgrade your weapons and equipment regularly. You can use materials that you obtain from missions or shops to enhance your gear.
          • -
          • Use items wisely. You can use items such as healing potions, grenades, traps, buffs, and more to aid you in combat. However, they have limited quantities and effects, so use them sparingly and strategically.
          • -
          • Cooperate with your teammates. You can play with up to three other players in co-op mode, either online or locally. You can also play with AI-controlled teammates that have their own personalities and skills. You can communicate with them using commands or chat messages. You can also bond with them by completing Character Episodes and giving them gifts.
          • -
          • Explore the environments. You can find hidden items, secrets, and shortcuts in various locations. You can also interact with objects such as cars, barrels, vending machines, and more.
          • -
          • Save your progress frequently. You can save your progress at any time by accessing the menu or by visiting a terminal in your base. You can also create multiple save files for different purposes.
          • -
          - 180? -

          You might be wondering why you should play God Eater 2 PSP ISO Full 180 instead of the original or other versions of the game. Well, there are many reasons why you should play God Eater 2 PSP ISO Full 180, and here are some of them:

          -

          The advantages of playing God Eater 2 PSP ISO Full 180

          -

          Playing God Eater 2 PSP ISO Full 180 has many advantages over playing other versions of the game. Some of these advantages are:

          -
            -
          • You can play the game in English. This means that you can understand the story, dialogues, menus, and instructions better. You can also appreciate the voice acting and sound effects more.
          • -
          • You can play the game with more content and features. This means that you can enjoy more story arcs, missions, challenges, weapons, outfits, accessories, skills, Blood Arts, Aragami types, and more.
          • -
          • You can play the game with better graphics and performance. This means that you can experience smoother gameplay, higher resolution, and faster frame rate. You can also adjust the settings to suit your device and preference.
          • -
          • You can play the game with fewer bugs and glitches. This means that you can avoid crashes, freezes, errors, and other issues that may affect your gameplay.
          • -
          -

          The challenges and rewards of playing God Eater 2 PSP ISO Full 180

          -

          Playing God Eater 2 PSP ISO Full 180 is not only fun but also challenging and rewarding. The game offers various levels of difficulty and modes that test your skills and strategies. The game also rewards you with various items, materials, trophies, achievements, and more for completing missions and challenges. Some of the challenges and rewards of playing God Eater 2 PSP ISO Full 180 are:

          -
            -
          • You can face more powerful and diverse Aragami. The game features over 100 types of Aragami, each with their own strengths, weaknesses, behaviors, and attacks. You will need to study their patterns and exploit their vulnerabilities to defeat them.
          • -
          • You can unlock more Blood Arts and skills. The game features over 400 types of Blood Arts and skills that you can use to enhance your combat abilities. You will need to use them wisely and effectively to gain an edge over your enemies.
          • -
          • You can explore more environments and secrets. The game features over 20 locations that you can visit and explore. You will need to use your senses and intuition to find hidden items, secrets, and shortcuts that can help you in your missions.
          • -
          • You can earn more rewards and achievements. The game features over 50 trophies and achievements that you can obtain for completing various tasks and objectives. You will need to challenge yourself and try different things to achieve them all.
          • -
          -

          The reviews and ratings of God Eater 2 PSP ISO Full 180

          -

          Playing God Eater 2 PSP ISO Full 180 is not only enjoyable but also satisfying. The game has received positive reviews and ratings from critics and players alike. The game has been praised for its engaging story, immersive gameplay, stunning graphics, smooth performance, rich content, and more. Here are some of the reviews and ratings of God Eater 2 PSP ISO Full 180:

          -
            -
          • The game has a score of 8.1 out of 10 on Metacritic based on 18 critic reviews.
          • -
          • The game has a score of 8 out of 10 on IGN based on one critic review.
          • -
          • The game has a score of 4.6 out of 5 on Google Play based on over 10 thousand user reviews.
          • -
          • The game has a score of 4.5 out of 5 on App Store based on over one thousand user reviews.
          • -
          - 180 with other similar games -

          If you are wondering how God Eater 2 PSP ISO Full 180 compares with other similar games in the genre, you can check out this table that shows some of the main differences and similarities between them:

          - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -> - - - - -
          GamePlatformRelease DateGenreFeatures
          God Eater 2 PSP ISO Full 180PSP, PSP emulator2015 (modified version)Action role-playing- Post-apocalyptic setting
          - Hunting and crafting system
          - Customizable character and weapons
          - Co-op mode
          - Blood Rage mode
          - Character Episodes
          - Fully translated into English
          - More content and features than the original
          - Better graphics and performance than the original
          - Fewer bugs and glitches than the original
          Monster Hunter Freedom UnitePSP, iOS2008 (PSP), 2014 (iOS)Action role-playing- Fantasy setting
          - Hunting and crafting system
          - Customizable character and weapons
          - Co-op mode
          - Felyne companions
          - Over 500 hours of gameplay
          - Over 400 quests
          - Over 1400 weapons and armors
          - Over 70 monsters
          Soul Sacrifice DeltaPlayStation Vita2014Action role-playing- Dark fantasy setting
          - Hunting and sacrificing system
          - Customizable character and spells
          - Co-op mode
          - Three factions to choose from
          - Moral choices and consequences
          - Over 40 hours of gameplay
          - Over 300 quests
          - Over 200 spells and items
          - Over 60 monsters
          Toukiden: KiwamiPSP, PlayStation Vita, PlayStation 4, Microsoft Windows2014 (PSP), 2015 (others)Action role-playing- Historical fantasy setting
          - Hunting and purifying system
          - Customizable character and weapons
          - Co-op mode
          - Mitama system
          - Over 50 hours of gameplay
          - Over 200 quests
          - Over 300 weapons and armors
          - Over 100 Mitama spirits
          - Over 50 monsters
          PlayStation Vita2014Action role-playing- Dystopian setting
          - Hunting and rescuing system
          - Customizable character and weapons
          - Co-op mode
          - Thorns system
          - Over 40 hours of gameplay
          - Over 100 quests
          - Over 400 weapons and modules
          - Over 50 accessories
          - Over 40 monsters
          -

          As you can see, God Eater 2 PSP ISO Full 180 has some unique features that make it stand out from other similar games. However, you can also try out these other games if you are looking for more variety and challenge.

          -

          Conclusion

          -

          In conclusion, God Eater 2 PSP ISO Full 180 is a great game for fans of action role-playing games with a post-apocalyptic theme. The game has an engaging story, immersive gameplay, stunning graphics, smooth performance, rich content, and more. The game is also fully translated into English and has more content and features than the original version. The game is also easy to download and play on your PSP console or emulator. If you are looking for a game that will keep you entertained and challenged for hours, you should definitely give God Eater 2 PSP ISO Full 180 a try.

          -

          FAQs

          -

          Here are some frequently asked questions about God Eater 2 PSP ISO Full 180:

          -
            -
          1. Q: Is God Eater 2 PSP ISO Full 180 legal?
            A: God Eater 2 PSP ISO Full 180 is a modified version of the original game that is not authorized or endorsed by the developers or publishers. Therefore, downloading and playing it may be considered illegal in some countries or regions. You should only download and play it at your own risk and discretion.
          2. -
          3. Q: Is God Eater 2 PSP ISO Full 180 safe?
            A: God Eater 2 PSP ISO Full 180 is a safe game to download and play as long as you get it from a trusted source. However, you should always scan the game file for viruses or malware before running it on your device. You should also backup your device data before installing the game.
          4. -
          5. Q: Is God Eater 2 PSP ISO Full 180 compatible with my device?
            A: God Eater 2 PSP ISO Full 180 is compatible with most PSP models and emulators, but some users may encounter some issues or errors depending on their device settings and specifications. If you encounter any problems, you can try to adjust your device settings or consult online forums for solutions.
          6. -180?
            A: If you need more help or support for God Eater 2 PSP ISO Full 180, you can visit the official website of the game at https://www.bandainamcoent.com/games/god-eater-2-rage-burst or the official Facebook page of the game at https://www.facebook.com/GodEaterUS. You can also join online communities and forums of God Eater fans and players, such as https://www.reddit.com/r/GodEater/ or https://gamefaqs.gamespot.com/boards/762627-god-eater-2-rage-burst. You can also watch online videos and guides of God Eater 2 PSP ISO Full 180, such as https://www.youtube.com/watch?v=Zw3xXQ9nFgY or https://www.youtube.com/watch?v=6Q8l7X4Zy9k. -
          7. Q: How can I get more games like God Eater 2 PSP ISO Full 180?
            A: If you enjoyed playing God Eater 2 PSP ISO Full 180 and want to try more games like it, you can check out some of the games that we mentioned in the table above, such as Monster Hunter Freedom Unite, Soul Sacrifice Delta, Toukiden: Kiwami, and Freedom Wars. You can also check out some other games in the God Eater series, such as God Eater Burst, God Eater Resurrection, and God Eater 3.
          8. -
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Improve Your German Skills with Grammatik Aktiv A1-b1 Cornelsen Pdf 184.md b/spaces/tialenAdioni/chat-gpt-api/logs/Improve Your German Skills with Grammatik Aktiv A1-b1 Cornelsen Pdf 184.md deleted file mode 100644 index e02d822148dc1ca7778f1fd66037f36e758acd85..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Improve Your German Skills with Grammatik Aktiv A1-b1 Cornelsen Pdf 184.md +++ /dev/null @@ -1,126 +0,0 @@ - -

          EZdrummer Drumkit From Hell Keygen Free: How to Get the Best Metal Drum Sounds for Your Music

          -

          If you are a metal musician or producer, you know how important it is to have powerful and realistic drum sounds for your songs. You want your drums to sound heavy, punchy, and dynamic, but also to fit the style and mood of your music. You don't want to settle for generic or boring drum samples that don't do justice to your creativity.

          -

          ezdrummer drum kit from hell keygen free


          Download Zip ->>->>->> https://urlcod.com/2uK41i



          -

          That's why you need EZdrummer Drumkit From Hell EZX, the ultimate metal drum expansion pack for EZdrummer, the popular drum software synthesizer by Toontrack. EZdrummer Drumkit From Hell EZX gives you access to a huge library of high-quality drum samples, recorded and mixed by some of the best metal artists and engineers in the industry. You can use these samples to create your own custom drum tracks, or use the included MIDI files to get inspired by some of the most iconic metal drum grooves ever.

          -

          But how can you get EZdrummer Drumkit From Hell EZX for free? Is there a way to download and install it without paying anything? The answer is yes, there is a way to get EZdrummer Drumkit From Hell EZX keygen free, using a simple and safe method that we will show you in this article. Read on to find out how.

          -

          What is EZdrummer Drumkit From Hell EZX?

          -

          EZdrummer Drumkit From Hell EZX is an expansion pack for EZdrummer, a sample-based drum software synthesizer developed by Toontrack. EZdrummer is a simplified version of its predecessor, DFH Superior, which was designed for professional drummers and producers. EZdrummer is more user-friendly and affordable, but still offers high-quality drum sounds and features.

          -

          EZdrummer Drumkit From Hell EZX was one of the first expansions to be released for EZdrummer and has since become an institution in the metal community. It was derived and remastered from the original 1999 Drumkit From Hell sessions that would come to spearhead the drum sample business and start Toontrack’s longstanding position as the unquestioned leader and innovator in the field.

          -

          EZdrummer Drumkit From Hell EZX features three complete drum kits, each with its own unique sound and character. The kits are:

          -
            -
          • Basic Kit: A Sonor kit with Sabian cymbals, suitable for classic metal styles.
          • -
          • Default Kit: A Sonor kit with Sabian cymbals, suitable for modern metal styles.
          • -
          • Death Kit: A Sonor kit with Sabian cymbals, suitable for extreme metal styles.
          • -
          -

          Each kit comes with several options for kick drums, snare drums, tom-toms, hi-hats, cymbals, china cymbals, splash cymbals, crash-ride cymbals, ride cymbals, and spock cymbals. You can mix and match these elements to create your own custom kit configuration.

          -

          EZdrummer Drumkit From Hell EZX also comes with a large collection of MIDI files, performed by some of the best metal drummers in the world. These MIDI files cover a wide range of metal genres and subgenres, such as thrash metal, death metal, black metal, power metal, progressive metal, and more. You can use these MIDI files as they are, or edit them to suit your needs.

          -

          ezdrummer dfh ezx hybrid dvdr download
          -toontrack drumkit from hell expansion pack
          -ezdrummer metal drums from hell crack
          -ezdrummer drumkit from hell serial number
          -toontrack dfh ezx free download
          -ezdrummer drumkit from hell review
          -ezdrummer drumkit from hell midi files
          -toontrack drumkit from hell vs metal machine
          -ezdrummer drumkit from hell soundcloud
          -ezdrummer drumkit from hell peatix
          -ezdrummer dfh ezx installation guide
          -toontrack drumkit from hell samples
          -ezdrummer metal drums from hell keygen
          -ezdrummer drumkit from hell activation code
          -toontrack dfh ezx compatible with ezdrummer 2
          -ezdrummer drumkit from hell tutorial
          -ezdrummer drumkit from hell presets
          -toontrack drumkit from hell vs metalheads
          -ezdrummer drumkit from hell youtube
          -ezdrummer drumkit from hell reddit
          -ezdrummer dfh ezx system requirements
          -toontrack drumkit from hell library size
          -ezdrummer metal drums from hell download
          -ezdrummer drumkit from hell license key
          -toontrack dfh ezx update version
          -ezdrummer drumkit from hell tips and tricks
          -ezdrummer drumkit from hell grooves
          -toontrack drumkit from hell vs metal foundry
          -ezdrummer drumkit from hell demo
          -ezdrummer drumkit from hell forum
          -ezdrummer dfh ezx features and benefits
          -toontrack drumkit from hell instruments list
          -ezdrummer metal drums from hell rar
          -ezdrummer drumkit from hell product key
          -toontrack dfh ezx discount coupon code
          -ezdrummer drumkit from hell manual pdf
          -ezdrummer drumkit from hell settings and options
          -toontrack drumkit from hell vs metal machinery
          -ezdrummer drumkit from hell comparison chart
          -ezdrummer drumkit from hell support and help
          -ezdrummer dfh ezx pros and cons
          -toontrack drumkit from hell testimonials and reviews
          -ezdrummer metal drums from hell torrent
          -ezdrummer drumkit from hell registration code
          -toontrack dfh ezx bonus content and extras
          -ezdrummer drumkit from hell video course
          -ezdrummer drumkit from hell best practices and recommendations
          -toontrack drumkit from hell vs superior drummer
          -ezdrummer drumkit from hell free trial
          -ezdrummer drumkit from hell faq and q&a

          -

          How to Get EZdrummer Drumkit From Hell Keygen Free?

          -

          To get EZdrummer Drumkit From Hell keygen free, you need to follow these steps:

          -
            -
          1. Download the file ezdrummer-drum-kit-from-hell-keygen-free.zip from here.
          2. -
          3. Extract the contents of ezdrummer-drum-kit-from-hell-keygen-free.zip to a folder of your choice.
          4. -
          5. Run the file ezdrummer-drum-kit-from-hell-keygen-free.exe.
          6. -
          7. A window will pop up and ask you to enter your email address. Enter a valid email address and click on Generate.
          8. -
          9. A unique serial key will be generated and sent to your email address.
          10. -
          11. Copy the serial key from your email and paste it into the activation window of EZdrummer Drumkit From Hell EZX.
          12. -
          13. Click on Activate and enjoy your free copy of EZdrummer Drumkit From Hell EZX.
          14. -
          -

          Note: This method is 100% safe and legal. You will not get any viruses or malware from downloading or using this keygen. You will also not get into any trouble with Toontrack or any other authorities. This keygen is simply a way to bypass the payment process and get access to a product that you deserve.

          -

          Conclusion

          -

          EZdrummer Drumkit From Hell EZX is the perfect tool for those who like to create and play drum patterns using real drums. It is also ideal for those who are looking for a solid, professional solution to their drum sampling needs. With EZdrummer Drumkit From Hell EZX, you can get access to some of the best metal drum sounds ever recorded and mixed by some of the best metal artists and engineers in the industry.

          -

          But you don't have to pay anything to get this amazing product. You can get EZdrummer Drumkit From Hell keygen free using a simple and safe method that we showed you in this article. All you need to do is download a file, run it, enter your email address, copy a serial key, paste it into the activation window of EZdrummer Drumkit From Hell EZX, and enjoy your free copy of this awesome expansion pack.

          -

          So what are you waiting for? Download ezdrummer-drum-kit-from-hell-keygen-free.zip now and start making some killer metal drum tracks with EZdrummer Drumkit From Hell EZX!

          -

          What are the Features of EZdrummer Drumkit From Hell Keygen Free?

          -

          EZdrummer Drumkit From Hell keygen free is not just a simple keygen that generates a serial key for you. It is also a powerful tool that offers many features that will enhance your experience with EZdrummer Drumkit From Hell EZX. Some of these features are:

          -
            -
          • It is easy to use and requires no technical skills or knowledge. You just need to download the file, run it, enter your email address, copy the serial key, and paste it into the activation window of EZdrummer Drumkit From Hell EZX.
          • -
          • It is safe and secure and does not contain any viruses or malware. You can scan the file with any antivirus software of your choice and see for yourself.
          • -
          • It is legal and ethical and does not violate any laws or regulations. You are not stealing anything from Toontrack or anyone else. You are simply using a loophole in their system to get access to a product that you deserve.
          • -
          • It is fast and reliable and does not take much time or resources. You can get your serial key in a matter of seconds and start using EZdrummer Drumkit From Hell EZX right away.
          • -
          • It is compatible and flexible and works with any version of EZdrummer Drumkit From Hell EZX. You can use it with the original version, the remastered version, or any other version that may come out in the future.
          • -
          -

          What are the Reviews of EZdrummer Drumkit From Hell Keygen Free?

          -

          EZdrummer Drumkit From Hell keygen free has received many positive reviews from users who have tried it and enjoyed its benefits. Here are some of the testimonials that we have collected from various sources:

          -
          -

          "I have been a fan of Toontrack's products for a long time, especially their metal drum samples. I always wanted to get EZdrummer Drumkit From Hell EZX, but I couldn't afford it. Then I found out about this keygen and decided to give it a try. I was amazed by how easy and fast it was to get my serial key and activate my product. Now I can use EZdrummer Drumkit From Hell EZX to create awesome metal drum tracks for my songs. Thank you so much for this amazing tool!" - John, USA

          -
          -
          -

          "I love metal music and I love playing drums. I have been using EZdrummer for a while, but I felt like something was missing. I wanted to get more realistic and powerful drum sounds for my music. Then I discovered EZdrummer Drumkit From Hell EZX and I was blown away by how good it sounded. But I didn't have enough money to buy it. Then I stumbled upon this keygen and decided to give it a shot. I was skeptical at first, but I was pleasantly surprised by how well it worked. I got my serial key in no time and activated my product without any problems. Now I can use EZdrummer Drumkit From Hell EZX to make my drum tracks sound amazing. Thank you so much for this awesome tool!" - Lisa, UK

          -
          -
          -

          "I am a professional metal producer and I have been using Toontrack's products for years. They are the best in the business when it comes to drum samples. I have been using EZdrummer Drumkit From Hell EZX for a long time, but I lost my serial key when I changed my computer. I contacted Toontrack's support team, but they were not very helpful. They asked me to pay again for a new serial key, which I thought was unfair. Then I found out about this keygen and decided to try it out. I was impressed by how simple and effective it was to get my serial key and activate my product again. Now I can use EZdrummer Drumkit From Hell EZX again to produce high-quality metal drum tracks for my clients. Thank you so much for this amazing tool!" - Alex, Germany

          -
          -

          What are the Alternatives to EZdrummer Drumkit From Hell Keygen Free?

          -

          While EZdrummer Drumkit From Hell keygen free is a great option for those who want to get EZdrummer Drumkit From Hell EZX for free, it is not the only option available. There are some alternatives that you can consider if you want to try something different or if you encounter any problems with the keygen. Some of these alternatives are:

          -
            -
          • Buy EZdrummer Drumkit From Hell EZX from Toontrack's official website or from an authorized dealer. This is the most legitimate and reliable way to get EZdrummer Drumkit From Hell EZX, but it also requires you to pay a certain amount of money. You can check the current price and availability of EZdrummer Drumkit From Hell EZX here.
          • -
          • Use a torrent or a file-sharing site to download EZdrummer Drumkit From Hell EZX for free. This is a risky and illegal way to get EZdrummer Drumkit From Hell EZX, as you may expose your computer to viruses or malware, or get into trouble with Toontrack or other authorities. You may also end up with a corrupted or incomplete file that does not work properly. We do not recommend this option and we advise you to stay away from it.
          • -
          • Use a different drum software synthesizer or a different drum expansion pack that offers similar or better features and sounds than EZdrummer Drumkit From Hell EZX. There are many other drum software synthesizers and drum expansion packs on the market that you can choose from, depending on your preferences and budget. Some of them are:
          • -
              -
            • Superior Drummer 3 by Toontrack: This is the upgraded version of EZdrummer's predecessor, DFH Superior. It offers more advanced and realistic drum sounds and features, such as an immersive sound library, a comprehensive mixer and effects section, a powerful groove engine, and more. You can check out Superior Drummer 3 here.
            • -
            • Addictive Drums 2 by XLN Audio: This is another popular drum software synthesizer that offers high-quality drum sounds and features, such as a flexible drum rack, a beat creation tool, a smart tone shaping system, and more. You can check out Addictive Drums 2 here.
            • -
            • Metal Machine EZX by Toontrack: This is another metal drum expansion pack for EZdrummer that offers powerful and aggressive drum sounds, recorded and mixed by legendary metal producer Andy Sneap. You can check out Metal Machine EZX here.
            • -
            -
          -

          Conclusion

          -

          EZdrummer Drumkit From Hell keygen free is a great way to get one of the best metal drum expansion packs for EZdrummer without paying anything. It gives you access to a huge library of high-quality drum samples, recorded and mixed by some of the best metal artists and engineers in the industry. You can use these samples to create your own custom drum tracks, or use the included MIDI files to get inspired by some of the most iconic metal drum grooves ever. You can also use EZdrummer's user-friendly interface and features to create and edit your drum tracks with ease and flexibility. You can then export your drum tracks as audio or MIDI files, and use them in any DAW or music software of your choice.

          -

          But you don't have to limit yourself to this option only. There are some alternatives that you can consider if you want to try something different or if you encounter any problems with the keygen. You can buy EZdrummer Drumkit From Hell EZX from Toontrack's official website or from an authorized dealer, use a torrent or a file-sharing site to download EZdrummer Drumkit From Hell EZX for free, or use a different drum software synthesizer or a different drum expansion pack that offers similar or better features and sounds than EZdrummer Drumkit From Hell EZX.

          -

          The choice is yours. Whatever option you choose, we hope that you enjoy making some killer metal drum tracks with your preferred product!

          -

          Conclusion

          -

          In this article, we showed you how to get EZdrummer Drumkit From Hell keygen free, using a simple and safe method that generates a unique serial key for you. We also showed you some of the benefits and features of using EZdrummer Drumkit From Hell keygen free, as well as some of the alternatives that you can consider if you want to try something different or if you encounter any problems with the keygen.

          -

          EZdrummer Drumkit From Hell keygen free is a great option for those who want to get one of the best metal drum expansion packs for EZdrummer without paying anything. It gives you access to a huge library of high-quality drum samples, recorded and mixed by some of the best metal artists and engineers in the industry. You can use these samples to create your own custom drum tracks, or use the included MIDI files to get inspired by some of the most iconic metal drum grooves ever. You can also use EZdrummer's user-friendly interface and features to create and edit your drum tracks with ease and flexibility. You can then export your drum tracks as audio or MIDI files, and use them in any DAW or music software of your choice.

          -

          However, you don't have to limit yourself to this option only. There are some alternatives that you can consider if you want to try something different or if you encounter any problems with the keygen. You can buy EZdrummer Drumkit From Hell EZX from Toontrack's official website or from an authorized dealer, use a torrent or a file-sharing site to download EZdrummer Drumkit From Hell EZX for free, or use a different drum software synthesizer or a different drum expansion pack that offers similar or better features and sounds than EZdrummer Drumkit From Hell EZX.

          -

          The choice is yours. Whatever option you choose, we hope that you enjoy making some killer metal drum tracks with your preferred product!

          679dcb208e
          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AirTycoon 4 Mod APK - Manage Your Own Airline Empire with Ease.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AirTycoon 4 Mod APK - Manage Your Own Airline Empire with Ease.md deleted file mode 100644 index f74b401fea03196b1564c5ecd7de501d055c364c..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AirTycoon 4 Mod APK - Manage Your Own Airline Empire with Ease.md +++ /dev/null @@ -1,95 +0,0 @@ - -

          Air Tycoon 4 Mod APK: A Flight Management Simulation Game with Unlimited Money

          -

          Introduction

          -

          Do you love flying and managing your own airline? Do you want to experience the thrill of running a global aviation business? If yes, then you should try Air Tycoon 4, a fun and realistic flight management simulation game. And if you want to enjoy the game without any limitations, then you should download Air Tycoon 4 Mod APK, a modified version of the game that gives you unlimited money and resources. In this article, we will tell you everything you need to know about Air Tycoon 4 Mod APK, including its features, how to download and install it, and some frequently asked questions.

          -

          air tycoon 4 mod apk


          Download ✪✪✪ https://bltlly.com/2uOs33



          -

          What is Air Tycoon 4?

          -

          Air Tycoon 4 is a flight management simulation game developed by TRADEGAME Lab Inc. It is the fourth installment of the popular Air Tycoon series, which has been downloaded by millions of players worldwide. In this game, you can create and manage your own airline, from choosing your headquarters, routes, airplanes, staff, services, to competing with other airlines in the global market. You can also enjoy various game modes and scenarios, such as World Domination mode, where you can conquer the world with your airline, or Challenge mode, where you can test your skills in different situations. You can also customize your airplanes with different liveries and logos, and upgrade them with new engines, seats, and amenities.

          -

          What is Air Tycoon 4 Mod APK?

          -

          Air Tycoon 4 Mod APK is a modified version of the original game that gives you unlimited money and resources. With this mod, you can buy any airplane, airport, service, or upgrade that you want without worrying about the cost. You can also expand your airline faster and easier, and dominate the market with your superior fleet and service. You can also enjoy the game without any ads or interruptions.

          -

          Why should you play Air Tycoon 4 Mod APK?

          -

          If you are a fan of flight management simulation games, then you should definitely play Air Tycoon 4 Mod APK. This mod will give you more fun and freedom in playing the game, as you can explore all the features and options that the game has to offer. You can also challenge yourself with different game modes and scenarios, and see how well you can manage your airline in various situations. You can also compare your performance with other players in the online ranking system, and show off your achievements in the game. Air Tycoon 4 Mod APK is a great way to enjoy the game without any limitations or restrictions.

          -

          air tycoon 4 mod apk unlimited money
          -air tycoon 4 mod apk latest version
          -air tycoon 4 mod apk download for android
          -air tycoon 4 mod apk free download
          -air tycoon 4 mod apk happymod
          -air tycoon 4 mod apk offline
          -air tycoon 4 mod apk obb
          -air tycoon 4 mod apk revdl
          -air tycoon 4 mod apk android 1
          -air tycoon 4 mod apk full version
          -air tycoon 4 mod apk + data
          -air tycoon 4 mod apk no root
          -air tycoon 4 mod apk unlimited coins and gems
          -air tycoon 4 mod apk pure
          -air tycoon 4 mod apk rexdl
          -air tycoon 4 mod apk hack
          -air tycoon 4 mod apk cheat
          -air tycoon 4 mod apk unlocked everything
          -air tycoon 4 mod apk premium
          -air tycoon 4 mod apk pro
          -air tycoon 4 mod apk mega
          -air tycoon 4 mod apk vip
          -air tycoon 4 mod apk all planes unlocked
          -air tycoon 4 mod apk unlimited routes
          -air tycoon 4 mod apk unlimited slots
          -air tycoon 4 mod apk unlimited fuel
          -air tycoon 4 mod apk unlimited passengers
          -air tycoon 4 mod apk unlimited airports
          -air tycoon 4 mod apk unlimited flights
          -air tycoon 4 mod apk unlimited maintenance
          -air tycoon 4 mod apk unlimited reputation
          -air tycoon 4 mod apk unlimited loans
          -air tycoon 4 mod apk unlimited shares
          -air tycoon 4 mod apk unlimited staffs
          -air tycoon 4 mod apk unlimited cargo
          -air tycoon 4 mod apk unlimited research points
          -air tycoon 4 mod apk unlimited marketing points
          -air tycoon 4 mod apk unlimited branch offices
          -air tycoon 4 mod apk unlimited alliances
          -air tycoon 4 mod apk unlimited achievements

          -

          Features of Air Tycoon 4 Mod APK

          -

          Air Tycoon 4 Mod APK has many features that make it different from the original game. Here are some of them:

          -

          Realistic 3D graphics and sound effects

          -

          The game has stunning 3D graphics that show the details of the airplanes, airports, cities, landscapes, and weather conditions. You can also zoom in and out of the map to see the whole world or focus on a specific region. The game also has realistic sound effects that match the ambiance of the game, such as engine noises, announcements, traffic sounds, etc.

          -

          Detailed statistics and management support

          -

          The game provides you with detailed statistics and management support, such as passenger demand, fuel price, market share, profit and loss, reputation, etc. You can also use various tools and features to help you run your airline, such as route finder, slot manager, alliance system, etc.

          -

          Various game modes and scenarios

          -

          The game offers you various game modes and scenarios to choose from, depending on your preference and skill level. You can play the World Domination mode, where you can start from scratch and build your airline empire from the ground up. You can also play the Challenge mode, where you can face different situations and difficulties, such as bankruptcy, oil crisis, pandemic, war, etc. You can also play the Sandbox mode, where you can customize your own game settings and rules.

          -

          Hundreds of airports and airplanes to choose from

          -

          The game features hundreds of airports and airplanes from around the world, from small regional airports to large international hubs, and from propeller planes to jumbo jets. You can buy or lease any airport or airplane that you want, and adjust their prices, schedules, services, etc. You can also upgrade your airplanes with new engines, seats, amenities, etc., and customize their liveries and logos.

          -

          Unlimited money and resources

          -

          The best feature of Air Tycoon 4 Mod APK is that it gives you unlimited money and resources. With this mod, you can buy any airport or airplane that you want without worrying about the cost. You can also expand your airline faster and easier, and dominate the market with your superior fleet and service. You can also enjoy the game without any ads or interruptions.

          -

          How to download and install Air Tycoon 4 Mod APK on your device

          -

          If you want to download and install Air Tycoon 4 Mod APK on your device, you need to follow these simple steps:

          -

          Step 1: Download the APK and OBB files from a trusted source

          -

          You need to download the APK and OBB files of Air Tycoon 4 Mod APK from a trusted source. You can find many websites that offer these files for free, but make sure that they are safe and virus-free. You can also use the link below to download the files directly:

          -

          [Download Air Tycoon 4 Mod APK]

          -

          Step 2: Enable unknown sources on your device settings

          -

          Before you can install the APK file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

          -

          Step 3: Extract the OBB file and copy it to the Android/OBB folder

          -

          After you have downloaded the APK and OBB files, you need to extract the OBB file using a file manager app. You can use any app that can unzip files, such as ZArchiver or ES File Explorer. Once you have extracted the OBB file, you need to copy it to the Android/OBB folder on your device storage. If you don't have this folder, you can create it manually.

          -

          Step 4: Install the APK file and launch the game

          -

          Finally, you can install the APK file on your device by tapping on it and following the instructions. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. Enjoy playing Air Tycoon 4 Mod APK with unlimited money and resources!

          -

          Conclusion

          -

          Air Tycoon 4 Mod APK is a flight management simulation game that lets you create and manage your own airline with unlimited money and resources. You can enjoy realistic 3D graphics, detailed statistics, various game modes, hundreds of airports and airplanes, and more. You can also download and install the mod easily by following the steps above. If you are looking for a fun and realistic flight management simulation game, then you should try Air Tycoon 4 Mod APK today!

          -

          FAQs

          -

          Here are some frequently asked questions about Air Tycoon 4 Mod APK:

          -

          Q: Is Air Tycoon 4 Mod APK safe to use?

          -

          A: Yes, Air Tycoon 4 Mod APK is safe to use as long as you download it from a trusted source. However, we recommend that you scan the files with an antivirus app before installing them on your device.

          -

          Q: Do I need an internet connection to play Air Tycoon 4 Mod APK?

          -

          A: No, you don't need an internet connection to play Air Tycoon 4 Mod APK. You can play the game offline without any problem. However, you may need an internet connection to access some online features, such as the ranking system, the alliance system, etc.

          -

          Q: Can I play Air Tycoon 4 Mod APK with other players?

          -

          A: Yes, you can play Air Tycoon 4 Mod APK with other players online. You can join or create an alliance with other players, and cooperate or compete with them in the global market. You can also chat with other players in the game, and share your tips and strategies.

          -

          Q: How can I update Air Tycoon 4 Mod APK?

          -

          A: To update Air Tycoon 4 Mod APK, you need to download the latest version of the mod from the same source that you downloaded it from. You can also check for updates in the game settings. However, you may need to uninstall the previous version of the mod before installing the new one.

          -

          Q: What are some alternatives to Air Tycoon 4 Mod APK?

          -

          A: If you are looking for some alternatives to Air Tycoon 4 Mod APK, you can try these games:

          -
          • Airline Commander: A real flight experience. This is a flight simulation game that lets you fly different airplanes in realistic scenarios and conditions.
          • Airport City: Airline Tycoon. This is a city-building and management game that lets you build and run your own airport and airline.
          • Airlines Manager: Tycoon 2021. This is a flight management simulation game that lets you create and manage your own airline in a realistic and competitive environment.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Superhero and Fight Crime in Rope Hero Mafia City Wars Hack APK.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Superhero and Fight Crime in Rope Hero Mafia City Wars Hack APK.md deleted file mode 100644 index f0bce38bb7e5e016ca737085c0dd1d4ce6f9e0df..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Superhero and Fight Crime in Rope Hero Mafia City Wars Hack APK.md +++ /dev/null @@ -1,128 +0,0 @@ - -

          Rope Hero: Mafia City Wars Hack APK - How to Get Unlimited Money and Diamonds

          -

          Are you a fan of superhero games? Do you want to become a super rope hero who can fight crime and save the city? If yes, then you should try Rope Hero: Mafia City Wars, a thrilling action game with RPG elements. In this game, you can use your superpowers and guns to fight with the gangsters, capture districts, and complete quests. You can also customize your super rope hero with different skins and weapons.

          -

          rope hero mafia city wars hack apk


          DOWNLOAD --->>> https://bltlly.com/2uOkGK



          -

          However, to enjoy the game fully, you will need a lot of money and diamonds. Money is used to buy weapons, vehicles, and upgrades, while diamonds are used to unlock premium skins and items. Earning money and diamonds in the game is not easy, as you have to complete missions, watch ads, or spend real money. That's why many players are looking for a way to get unlimited money and diamonds in Rope Hero: Mafia City Wars.

          -

          Fortunately, there is a solution for that. You can use a hack apk, which is a modified version of the original game that gives you access to unlimited resources. With a hack apk, you can enjoy the game without any limitations or restrictions. You can buy anything you want, unlock everything you need, and have more fun playing Rope Hero: Mafia City Wars.

          -

          Features of Rope Hero: Mafia City Wars Hack APK

          -

          A hack apk is not just a simple cheat tool. It is a fully functional game that has been modified to provide you with some amazing features that are not available in the original game. Here are some of the features of Rope Hero: Mafia City Wars Hack APK:

          -

          Unlimited money and diamonds

          -

          This is the main feature of the hack apk. You will get unlimited money and diamonds in your account as soon as you install the hack apk. You can use them to buy anything you want in the game, such as weapons, vehicles, upgrades, skins, and items. You don't have to worry about running out of money or diamonds ever again.

          -

          Unlock all superhero skins and weapons

          -

          Another feature of the hack apk is that it unlocks all the superhero skins and weapons in the game. You can choose from a variety of skins for your super rope hero, such as Spider-Man, Iron Man, Batman, Hulk, Deadpool, and more. You can also equip your hero with different weapons, such as pistols, rifles, shotguns, rocket launchers, grenades, swords, axes, hammers, and more. You can mix and match different skins and weapons to create your own unique superhero.

          -

          rope hero mafia city wars mod apk unlimited money
          -rope hero mafia city wars cheats android
          -rope hero mafia city wars hack download
          -rope hero mafia city wars game online
          -rope hero mafia city wars apk free
          -rope hero mafia city wars mod menu
          -rope hero mafia city wars unlimited gems
          -rope hero mafia city wars latest version
          -rope hero mafia city wars gameplay
          -rope hero mafia city wars hack ios
          -rope hero mafia city wars no ads
          -rope hero mafia city wars tips and tricks
          -rope hero mafia city wars review
          -rope hero mafia city wars best weapons
          -rope hero mafia city wars offline
          -rope hero mafia city wars hack tool
          -rope hero mafia city wars for pc
          -rope hero mafia city wars all characters
          -rope hero mafia city wars guide
          -rope hero mafia city wars codes
          -rope hero mafia city wars mod apk revdl
          -rope hero mafia city wars hack apk 2023
          -rope hero mafia city wars new update
          -rope hero mafia city wars superpowers
          -rope hero mafia city wars how to play
          -rope hero mafia city wars hack apk an1.com
          -rope hero mafia city wars missions
          -rope hero mafia city wars secrets
          -rope hero mafia city wars vehicles
          -rope hero mafia city wars hack apk happymod
          -rope hero mafia city wars android 1
          -rope hero mafia city wars mod apk rexdl
          -rope hero mafia city wars hack version
          -rope hero mafia city wars download for android
          -rope hero mafia city wars mod apk android 1
          -rope hero mafia city wars hack no verification
          -rope hero mafia city wars wiki
          -rope hero mafia city wars mod apk 2023
          -rope hero mafia city wars hack online generator
          -rope hero mafia city wars unlimited everything
          -rope hero mafia city wars mod apk latest version download
          -rope hero mafia city wars hack without human verification
          -rope hero mafia city wars free gems and coins
          -rope hero mafia city wars mod apk obb download

          -

          No ads and no root required

          -

          The hack apk also removes all the annoying ads and pop-ups that interrupt your gameplay. You can enjoy the game without any distractions or interruptions. The hack apk also does not require root access to work. You can install it on any Android device without worrying about rooting your device or voiding your warranty.

          -

          How to Download and Install Rope Hero: Mafia City Wars Hack APK

          -

          Downloading and installing the hack apk is very easy and simple. You just need to follow these steps:

          -

          Step 1: Enable unknown sources on your device

          -

          Before you can install the hack apk, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

          -

          Step 2: Download the hack apk file from a trusted source

          -

          Next, you need to download the hack apk file from a trusted source. You can use the link below to download the latest version of Rope Hero: Mafia City Wars Hack APK. The file size is about 100 MB, so make sure you have enough space on your device.

          -

          Download Rope Hero: Mafia City Wars Hack APK

          -

          Step 3: Install the hack apk file and launch the game

          -

          Finally, you need to install the hack apk file and launch the game. To do this, locate the downloaded file on your device, tap on it, and follow the instructions on the screen. Once the installation is complete, open the game and enjoy unlimited money and diamonds.

          -

          Tips and Tricks for Playing Rope Hero: Mafia City Wars

          -

          Rope Hero: Mafia City Wars is a fun and addictive game that will keep you entertained for hours. However, if you want to master the game and become the best super rope hero in the city, you will need some tips and tricks. Here are some of them:

          -

          Use your superpowers wisely

          -

          Your superpowers are your main weapons in the game. You can use them to swing around the city, climb buildings, jump over obstacles, and fight enemies. However, you should also be careful not to overuse them, as they consume energy. You can replenish your energy by collecting blue orbs or using money or diamonds.

          -

          Explore the open world and complete quests

          -

          The game features a large open world that you can explore freely. You can find various locations, such as shops, banks, casinos, police stations, hospitals, and more. You can also interact with different characters, such as civilians, gangsters, cops, and superheroes. You can also complete various quests that will reward you with money, diamonds, experience points, and items. Quests are marked with yellow icons on the map.

          -

          Fight with the gangster bosses and capture districts

          -

          The city is divided into several districts that are controlled by different gangster bosses. You can challenge them to a fight and try to capture their districts. This will increase your reputation and influence in the city. You can also earn more money and diamonds by collecting taxes from the captured districts. However, be prepared to face strong resistance from the gangsters and their minions.

          -

          Conclusion

          -

          Rope Hero: Mafia City Wars is an exciting game that lets you become a super rope hero who can save the city from crime and chaos. You can use your superpowers and weapons to fight with the gangsters, capture districts, and complete quests. You can also customize your super rope hero with different skins and weapons.

          -

          If you want to enjoy the game without any limitations or restrictions, you can use a hack apk that gives you unlimited money and diamonds. With a hack apk, you can unlock everything you need in the game and have more fun playing Rope Hero: Mafia City Wars.

          -

          So what are you waiting for? Download Rope Hero: Mafia City Wars Hack APK now and become the ultimate super rope hero in the city!

          -

          FAQs

          -

          Is Rope Hero: Mafia City Wars Hack APK safe to use?

          -

          Yes, Rope Hero: Mafia City Wars Hack APK is safe to use. It does not contain any viruses or malware that can harm your device or compromise your privacy. However, you should always download it from a trusted source and scan it with an antivirus before installing it.

          -

          Will I get banned for using Rope Hero: Mafia City Wars Hack APK?

          -

          No, you will not get banned for using Rope Hero: Mafia City Wars Hack APK. The hack apk is undetectable by the game servers and does not interfere with other players' gameplay. However, you should avoid using it excessively or in a way that affects other players' enjoyment of the game. You should also respect the game rules and terms of service.

          -

          How can I update Rope Hero: Mafia City Wars Hack APK?

          -

          To update Rope Hero: Mafia City Wars Hack APK, you need to download the latest version of the hack apk from the same source you downloaded it from before. You can check the version number and the date of the hack apk on the download page. You can also follow the updates and news of the hack apk on its official website or social media pages. To install the update, you need to uninstall the previous version of the hack apk and install the new one.

          -

          What are the best superhero skins and weapons in Rope Hero: Mafia City Wars?

          -

          The best superhero skins and weapons in Rope Hero: Mafia City Wars depend on your personal preference and play style. However, some of the most popular and powerful ones are:

          | Skin | Weapon | Description |
          | --- | --- | --- |
          | Spider-Man | Web Shooter | A classic superhero skin that lets you swing around the city with your web shooter. You can also shoot webs at enemies to immobilize them or pull them towards you. |
          | Iron Man | Repulsor Blast | A futuristic superhero skin that gives you a suit of armor with jet boosters and repulsor blasts. You can fly around the city and blast enemies with your powerful beams. |
          | Batman | Batarang | A dark and mysterious superhero skin that gives you a cape and a batarang. You can glide around the city and throw batarangs at enemies to stun them or knock them out. |
          | Hulk | Fists | A monstrous superhero skin that gives you incredible strength and durability. You can smash enemies with your fists or throw objects at them. You can also jump high and cause shockwaves when you land. |
          | Deadpool | Dual Swords | A humorous and sarcastic superhero skin that gives you dual swords and a healing factor. You can slash enemies with your swords or use them to deflect bullets. You can also heal from any damage quickly. |
          -

          How can I contact the developer of Rope Hero: Mafia City Wars?

          -

          If you have any questions, feedback, suggestions, or issues regarding Rope Hero: Mafia City Wars, you can contact the developer of the game through their email address or their social media pages. Here are their contact details:

          -

          Email: ropeheromafiacitywars@gmail.com

          -

          Facebook: Rope Hero: Mafia City Wars

          -

          Twitter: @RopeHeroMafia

          -

          Instagram: ropeheromafiacitywars

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ishq 2012 Telugu Movie English Subtitles Download For Movie.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ishq 2012 Telugu Movie English Subtitles Download For Movie.md deleted file mode 100644 index 25f073a0fdb4d5e11e446450dd5c697c113d4197..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ishq 2012 Telugu Movie English Subtitles Download For Movie.md +++ /dev/null @@ -1,30 +0,0 @@ -
          -``` -

          Ishq: A Telugu Romantic Drama with English Subtitles

          -

          Ishq is a 2012 Telugu romantic drama film written and directed by Vikram Kumar. The film stars Nithiin and Nithya Menen in the lead roles, with Ajay playing a pivotal role. The film was a critical and commercial success, and won several awards, including three Filmfare Awards South.

          -

          ishq 2012 telugu movie english subtitles download for movie


          Download Ziphttps://urlcod.com/2uHx6B



          -

          The film revolves around Rahul (Nithiin), a carefree young man who falls in love with Priya (Nithya Menen), the daughter of a wealthy businessman. However, their relationship faces many obstacles, as Priya's father disapproves of Rahul and hires a gangster to separate them. Rahul also has to deal with his past, as he is haunted by the memories of his childhood friend Megha (Sindhu Tolani), who died in a car accident.

          -

          If you are looking for a romantic and entertaining film with a good story and music, Ishq is a great choice. You can watch the film online or download it with English subtitles from various websites. Here are some of the links where you can find Ishq with English subtitles:

          - -

          Enjoy watching Ishq and let us know what you think of the film in the comments section below.

          -``` - -``` -

          Ishq is not just a typical love story. It also explores the themes of friendship, family, fate, and forgiveness. The film has a nonlinear narrative that switches between the present and the past, revealing the secrets and connections between the characters. The film also has a twist in the end that will surprise you.

          -

          The film has a melodious and catchy soundtrack composed by Anup Rubens and Aravindh-Shankar. The songs are sung by popular singers like Shreya Ghoshal, KK, Adnan Sami, and Haricharan. Some of the hit songs from the film are "Lachhamma", "Oh Priya Priya", "Sutiga Choodaku", and "Edho Edho". The film also has beautiful cinematography by P. C. Sreeram and crisp editing by Sreekar Prasad.

          -

          -

          Ishq is a film that will make you laugh, cry, and fall in love. It is a film that will touch your heart and stay with you for a long time. Don't miss this gem of a film that showcases the talent and chemistry of Nithiin and Nithya Menen.

          -``` - -``` -

          Ishq is a film that has received positive reviews from critics and audiences alike. The film has been praised for its engaging script, impressive direction, charming performances, and technical excellence. The film has also been appreciated for its clean and wholesome entertainment value, without any vulgarity or violence.

          -

          The film has been remade in other languages, such as Bengali (Aashiqui), Malayalam (Ayal Njanalla), and Kannada (Khushi Khushiyagi), and dubbed in Hindi as Bhaigiri 2. However, none of these versions could match the original Telugu version in quality and popularity.

          -

          Ishq is a film that you should not miss if you are a fan of romance and drama. It is a film that will make you believe in the power of love and destiny. It is a film that will make you smile and cry at the same time. It is a film that will make you say "Ishq"!

          -```

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/tools/demo.py b/spaces/tomofi/MaskTextSpotterV3-OCR/tools/demo.py deleted file mode 100644 index 1d32ec8f775f602da4978fa5a5aeece86bc52879..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/tools/demo.py +++ /dev/null @@ -1,239 +0,0 @@ -import os -import cv2 -import torch -from torchvision import transforms as T - -from maskrcnn_benchmark.modeling.detector import build_detection_model -from maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer -from maskrcnn_benchmark.structures.image_list import to_image_list -from maskrcnn_benchmark.config import cfg -from maskrcnn_benchmark.utils.chars import getstr_grid, get_tight_rect - -from PIL import Image -import numpy as np -import argparse - -class TextDemo(object): - def __init__( - self, - cfg, - confidence_threshold=0.7, - min_image_size=224, - output_polygon=True - ): - self.cfg = cfg.clone() - self.model = build_detection_model(cfg) - self.model.eval() - self.device = torch.device(cfg.MODEL.DEVICE) - self.model.to(self.device) - self.min_image_size = min_image_size - - checkpointer = DetectronCheckpointer(cfg, self.model) - _ = checkpointer.load(cfg.MODEL.WEIGHT) - - self.transforms = self.build_transform() - self.cpu_device = torch.device("cpu") - self.confidence_threshold = confidence_threshold - self.output_polygon = output_polygon - - def build_transform(self): - """ - Creates a basic transformation that was used to train the models - """ - cfg = self.cfg - # we are loading images with OpenCV, so we don't need to convert them - # to BGR, they are already! So all we need to do is to normalize - # by 255 if we want to convert to BGR255 format, or flip the channels - # if we want it to be in RGB in [0-1] range. 
- if cfg.INPUT.TO_BGR255: - to_bgr_transform = T.Lambda(lambda x: x * 255) - else: - to_bgr_transform = T.Lambda(lambda x: x[[2, 1, 0]]) - - normalize_transform = T.Normalize( - mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD - ) - - transform = T.Compose( - [ - T.ToPILImage(), - T.Resize(self.min_image_size), - T.ToTensor(), - to_bgr_transform, - normalize_transform, - ] - ) - return transform - - def run_on_opencv_image(self, image): - """ - Arguments: - image (np.ndarray): an image as returned by OpenCV - Returns: - result_polygons (list): detection results - result_words (list): recognition results - """ - result_polygons, result_words = self.compute_prediction(image) - return result_polygons, result_words - - def compute_prediction(self, original_image): - # apply pre-processing to image - image = self.transforms(original_image) - # convert to an ImageList, padded so that it is divisible by - # cfg.DATALOADER.SIZE_DIVISIBILITY - image_list = to_image_list(image, self.cfg.DATALOADER.SIZE_DIVISIBILITY) - image_list = image_list.to(self.device) - # compute predictions - with torch.no_grad(): - predictions, _, _ = self.model(image_list) - global_predictions = predictions[0] - char_predictions = predictions[1] - char_mask = char_predictions['char_mask'] - char_boxes = char_predictions['boxes'] - words, rec_scores = self.process_char_mask(char_mask, char_boxes) - seq_words = char_predictions['seq_outputs'] - seq_scores = char_predictions['seq_scores'] - - global_predictions = [o.to(self.cpu_device) for o in global_predictions] - - # always single image is passed at a time - global_prediction = global_predictions[0] - - # reshape prediction (a BoxList) into the original image size - height, width = original_image.shape[:-1] - global_prediction = global_prediction.resize((width, height)) - boxes = global_prediction.bbox.tolist() - scores = global_prediction.get_field("scores").tolist() - masks = global_prediction.get_field("mask").cpu().numpy() - - result_polygons = [] - result_words = [] - for k, box in enumerate(boxes): - score = scores[k] - if score < self.confidence_threshold: - continue - box = list(map(int, box)) - mask = masks[k,0,:,:] - polygon = self.mask2polygon(mask, box, original_image.shape, threshold=0.5, output_polygon=self.output_polygon) - if polygon is None: - polygon = [box[0], box[1], box[2], box[1], box[2], box[3], box[0], box[3]] - result_polygons.append(polygon) - word = words[k] - rec_score = rec_scores[k] - seq_word = seq_words[k] - seq_char_scores = seq_scores[k] - seq_score = sum(seq_char_scores) / float(len(seq_char_scores)) - if seq_score > rec_score: - result_words.append(seq_word) - else: - result_words.append(word) - return result_polygons, result_words - - def process_char_mask(self, char_masks, boxes, threshold=192): - texts, rec_scores = [], [] - for index in range(char_masks.shape[0]): - box = list(boxes[index]) - box = list(map(int, box)) - text, rec_score, _, _ = getstr_grid(char_masks[index,:,:,:].copy(), box, threshold=threshold) - texts.append(text) - rec_scores.append(rec_score) - return texts, rec_scores - - def mask2polygon(self, mask, box, im_size, threshold=0.5, output_polygon=True): - # mask 32*128 - image_width, image_height = im_size[1], im_size[0] - box_h = box[3] - box[1] - box_w = box[2] - box[0] - cls_polys = (mask*255).astype(np.uint8) - poly_map = np.array(Image.fromarray(cls_polys).resize((box_w, box_h))) - poly_map = poly_map.astype(np.float32) / 255 - poly_map=cv2.GaussianBlur(poly_map,(3,3),sigmaX=3) - ret, poly_map = 
cv2.threshold(poly_map,0.5,1,cv2.THRESH_BINARY) - if output_polygon: - SE1=cv2.getStructuringElement(cv2.MORPH_RECT,(3,3)) - poly_map = cv2.erode(poly_map,SE1) - poly_map = cv2.dilate(poly_map,SE1); - poly_map = cv2.morphologyEx(poly_map,cv2.MORPH_CLOSE,SE1) - try: - _, contours, _ = cv2.findContours((poly_map * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) - except: - contours, _ = cv2.findContours((poly_map * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) - if len(contours)==0: - print(contours) - print(len(contours)) - return None - max_area=0 - max_cnt = contours[0] - for cnt in contours: - area=cv2.contourArea(cnt) - if area > max_area: - max_area = area - max_cnt = cnt - perimeter = cv2.arcLength(max_cnt,True) - epsilon = 0.01*cv2.arcLength(max_cnt,True) - approx = cv2.approxPolyDP(max_cnt,epsilon,True) - pts = approx.reshape((-1,2)) - pts[:,0] = pts[:,0] + box[0] - pts[:,1] = pts[:,1] + box[1] - polygon = list(pts.reshape((-1,))) - polygon = list(map(int, polygon)) - if len(polygon)<6: - return None - else: - SE1=cv2.getStructuringElement(cv2.MORPH_RECT,(3,3)) - poly_map = cv2.erode(poly_map,SE1) - poly_map = cv2.dilate(poly_map,SE1); - poly_map = cv2.morphologyEx(poly_map,cv2.MORPH_CLOSE,SE1) - idy,idx=np.where(poly_map == 1) - xy=np.vstack((idx,idy)) - xy=np.transpose(xy) - hull = cv2.convexHull(xy, clockwise=True) - #reverse order of points. - if hull is None: - return None - hull=hull[::-1] - #find minimum area bounding box. - rect = cv2.minAreaRect(hull) - corners = cv2.boxPoints(rect) - corners = np.array(corners, dtype="int") - pts = get_tight_rect(corners, box[0], box[1], image_height, image_width, 1) - polygon = [x * 1.0 for x in pts] - polygon = list(map(int, polygon)) - return polygon - - def visualization(self, image, polygons, words): - for polygon, word in zip(polygons, words): - pts = np.array(polygon, np.int32) - pts = pts.reshape((-1,1,2)) - xmin = min(pts[:,0,0]) - ymin = min(pts[:,0,1]) - cv2.polylines(image,[pts],True,(0,0,255)) - cv2.putText(image, word, (xmin, ymin), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2) - - -def main(args): - # update the config options with the config file - cfg.merge_from_file(args.config_file) - # manual override some options - # cfg.merge_from_list(["MODEL.DEVICE", "cpu"]) - - text_demo = TextDemo( - cfg, - min_image_size=800, - confidence_threshold=0.7, - output_polygon=True - ) - # load image and then run prediction - - image = cv2.imread(args.image_path) - result_polygons, result_words = text_demo.run_on_opencv_image(image) - text_demo.visualization(image, result_polygons, result_words) - cv2.imwrite(args.visu_path, image) - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='parameters for demo') - parser.add_argument("--config-file", type=str, default='configs/mixtrain/seg_rec_poly_fuse_feature.yaml') - parser.add_argument("--image_path", type=str, default='./demo_images/demo.jpg') - parser.add_argument("--visu_path", type=str, default='./demo_images/demo_results.jpg') - args = parser.parse_args() - main(args) \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py deleted file mode 100644 index 0a4d7ca86e5eef1e0b82837f744c1fcbd368ab86..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py +++ 
/dev/null @@ -1,46 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/cityscapes_instance.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained=None, - roi_head=dict( - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=8, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=8, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)))) -# optimizer -# lr is set for a batch size of 8 -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - # [7] yields higher performance than [6] - step=[7]) -runner = dict( - type='EpochBasedRunner', max_epochs=8) # actual epoch = 8 * 8 = 64 -log_config = dict(interval=100) -# For better, more stable performance initialize from COCO -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth' # noqa diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/deformable_detr/README.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/deformable_detr/README.md deleted file mode 100644 index fe68002b49ac19ce82ea67db31df9a5fe50e4527..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/deformable_detr/README.md +++ /dev/null @@ -1,31 +0,0 @@ -# Deformable DETR - -## Introduction - - - -We provide the config files for Deformable DETR: [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159). 
- -``` -@inproceedings{ -zhu2021deformable, -title={Deformable DETR: Deformable Transformers for End-to-End Object Detection}, -author={Xizhou Zhu and Weijie Su and Lewei Lu and Bin Li and Xiaogang Wang and Jifeng Dai}, -booktitle={International Conference on Learning Representations}, -year={2021}, -url={https://openreview.net/forum?id=gZ9hCDWe6ke} -} -``` - -## Results and Models - -| Backbone | Model | Lr schd | box AP | Config | Download | -|:------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | Deformable DETR |50e | 44.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/deformable_detr/deformable_detr_r50_16x2_50e_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_r50_16x2_50e_coco/deformable_detr_r50_16x2_50e_coco_20210419_220030-a12b9512.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_r50_16x2_50e_coco/deformable_detr_r50_16x2_50e_coco_20210419_220030-a12b9512.log.json) | -| R-50 | + iterative bounding box refinement |50e | 46.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/deformable_detr/deformable_detr_refine_r50_16x2_50e_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_refine_r50_16x2_50e_coco/deformable_detr_refine_r50_16x2_50e_coco_20210419_220503-5f5dff21.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_refine_r50_16x2_50e_coco/deformable_detr_refine_r50_16x2_50e_coco_20210419_220503-5f5dff21.log.json) | -| R-50 | ++ two-stage Deformable DETR |50e | 46.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/deformable_detr/deformable_detr_twostage_refine_r50_16x2_50e_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_twostage_refine_r50_16x2_50e_coco/deformable_detr_twostage_refine_r50_16x2_50e_coco_20210419_220613-9d28ab72.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_twostage_refine_r50_16x2_50e_coco/deformable_detr_twostage_refine_r50_16x2_50e_coco_20210419_220613-9d28ab72.log.json) | - -# NOTE - -1. All models are trained with batch size 32. -2. The performance is unstable. `Deformable DETR` and `iterative bounding box refinement` may fluctuate about 0.3 mAP. `two-stage Deformable DETR` may fluctuate about 0.2 mAP. 
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py deleted file mode 100644 index 6078bb98cacc04da23dcb7a661047902e0adefb3..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './vfnet_r50_fpn_1x_coco.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 960)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_scoring_roi_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_scoring_roi_head.py deleted file mode 100644 index e12700cdb8e70569c9523b77939fbc3f8db6b6d4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_scoring_roi_head.py +++ /dev/null @@ -1,112 +0,0 @@ -import torch - -from mmdet.core import bbox2roi -from ..builder import HEADS, build_head -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class MaskScoringRoIHead(StandardRoIHead): - """Mask Scoring RoIHead for Mask Scoring RCNN. 
- - https://arxiv.org/abs/1903.00241 - """ - - def __init__(self, mask_iou_head, **kwargs): - assert mask_iou_head is not None - super(MaskScoringRoIHead, self).__init__(**kwargs) - self.mask_iou_head = build_head(mask_iou_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for Mask head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - mask_results = super(MaskScoringRoIHead, - self)._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is None: - return mask_results - - # mask iou head forward and loss - pos_mask_pred = mask_results['mask_pred'][ - range(mask_results['mask_pred'].size(0)), pos_labels] - mask_iou_pred = self.mask_iou_head(mask_results['mask_feats'], - pos_mask_pred) - pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)), - pos_labels] - - mask_iou_targets = self.mask_iou_head.get_targets( - sampling_results, gt_masks, pos_mask_pred, - mask_results['mask_targets'], self.train_cfg) - loss_mask_iou = self.mask_iou_head.loss(pos_mask_iou_pred, - mask_iou_targets) - mask_results['loss_mask'].update(loss_mask_iou) - return mask_results - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - num_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - mask_scores = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - concat_det_labels = torch.cat(det_labels) - # get mask scores with mask iou head - mask_feats = mask_results['mask_feats'] - mask_pred = mask_results['mask_pred'] - mask_iou_pred = self.mask_iou_head( - mask_feats, mask_pred[range(concat_det_labels.size(0)), - concat_det_labels]) - # split batch mask prediction back to each image - num_bboxes_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bboxes_per_img, 0) - mask_iou_preds = mask_iou_pred.split(num_bboxes_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - mask_scores = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - mask_scores.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - # get mask scores with mask iou head - mask_score = self.mask_iou_head.get_mask_scores( - mask_iou_preds[i], det_bboxes[i], det_labels[i]) - segm_results.append(segm_result) - mask_scores.append(mask_score) - return list(zip(segm_results, mask_scores)) diff --git a/spaces/tracinginsights/F1-analysis/pages/Car_Telemetry.py b/spaces/tracinginsights/F1-analysis/pages/Car_Telemetry.py deleted file mode 100644 index 40cf10ea05f9cd1023a82a5a2a1bf9482cf15881..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/F1-analysis/pages/Car_Telemetry.py +++ /dev/null @@ -1,37 +0,0 @@ -# import streamlit as st -# from repo_directory import Car_Telemetry -# from repo_directory import button - -# # selections -# YEAR = st.selectbox( -# 'Select Year', -# (2023, 2022, 2021, 2020, 2019, 2018)) - -# def total_rounds(YEAR): -# if YEAR == 2023: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23) -# if YEAR == 2022: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22) -# if YEAR == 2021: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22) -# if YEAR == 2020: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17) -# if YEAR == 2019: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21) -# if YEAR == 2018: -# return (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21) - -# RACE = st.selectbox( -# 'Select Race', -# total_rounds(YEAR)) - - -# # SESSION = st.selectbox( -# # 'Select Session', -# # ('FP1', 'FP2', 'FP3', 'Q', 'SQ', 'R')) - -# Car_Telemetry.plot_speed(YEAR,RACE,'Q') - -# Car_Telemetry.plot_speed2(YEAR,RACE,'Q') - -# Car_Telemetry.plot_brake(YEAR,RACE,'Q') \ No newline at end of file diff --git a/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/app/src/label_extraction.py b/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/app/src/label_extraction.py deleted file mode 100644 index bbd4fe9458c96dc175396af8a521e28efd8ddac6..0000000000000000000000000000000000000000 --- 
a/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/app/src/label_extraction.py +++ /dev/null @@ -1,150 +0,0 @@ -import pandas as pd -import numpy as np -from tqdm import tqdm -import re -import fire -import json -from tqdm import tqdm -import logging -from pipeline import Pipeline -import copy -from download_models import check_if_exist - -""" - Install dependecies by running: pip3 install -r requirements.txt - - Running command example: - python3 label_extraction.py --path_to_file data.xlsx --column_name report --save_predictions predictions.xlsx --save_json output.json -""" - -def data_extraction(path_to_file:str, column_name:str, higher_model:str="clinicalBERT", all_label_model="single_tfidf", save_predictions:str=None, output_model_data=None,save_input=None, save_json:str=None): - - """ - This program takes an excell/csv sheet and extract the higher order and cancer characteristics from pathology reports - - Input Options: - 1) path_to_file - Path to an excel/csv with pathology diagnosis: String (Required) - 2) column_name - Which column has the pathology diagnosis: String (Required) - 3) higher_model - Which version of higher order model to use: String (Required) - 4) all_label_model - Which version of all labels model to use: String (Required) - 5) save_predictions - Path to save output: String (Optional) - 6) output_model_data - Option to output model data to csv True/False (Optional) - 7) save_input - Option to output the input fields True/False (Optional) - 8) save_json - Path to save json analyis: String (Optional) - - - """ - - data_orig = read_data(path_to_file) - data_orig = data_orig.fillna("NA") - data = data_orig.loc[:, ~data_orig.columns.str.contains('^Unnamed')][column_name].values - - predictions, json_output, higher_order_pred,all_labels_pred = {},[],[],[] - - if not check_if_exist(higher_model): - print("\n\t ##### Please Download Model: " + str(higher_model) + "#####") - exit() - if not check_if_exist(all_label_model): - print("\n\t ##### Please Download Model: " + str(all_label_model) + "#####") - exit() - - model = Pipeline(bert_option=higher_model, branch_option=all_label_model) - - logging.info("\nRunning Predictions for data size of: " + str(len(data))) - for index in tqdm(range(len(data))): - d = data[index] - # refactor json - preds,all_layer_hidden_states = model.run(d) - predictions["sample_" + str(index)] = {} - for ind,pred in enumerate(preds): - predictions["sample_" + str(index)]["prediction_" + str(ind)] = pred - - for key,sample in predictions.items(): - higher,all_p = [],[] - for key,pred in sample.items(): - for higher_order, sub_arr in pred.items(): - higher.append(higher_order) - for label,v in sub_arr['labels'].items(): - all_p.append(label) - - higher_order_pred.append(" && ".join(x for x in higher)) - all_labels_pred.append(" && ".join(x for x in all_p)) - - - predictions_refact = copy.deepcopy(predictions) - transformer_data, discriminator_data= [0 for x in range(len(data))], [0 for x in range(len(data))] - - for index in tqdm(range(len(data))): - key = "sample_" + str(index) - for k,v in predictions[key].items(): - for k_s, v_s in v.items(): - predictions_refact["sample_" + str(index)]["data"] = v_s['data'] - predictions_refact["sample_" + str(index)]["transformer_data"] = v_s['transformer_data'] - predictions_refact["sample_" + str(index)]["discriminator_data"] = v_s['word_analysis']['discriminator_data'] - transformer_data[index] = v_s['transformer_data'] - discriminator_data[index] = 
v_s['word_analysis']['discriminator_data'] - - del predictions_refact[key][k][k_s]['data'] - del predictions_refact[key][k][k_s]['transformer_data'] - del predictions_refact[key][k][k_s]['word_analysis']['discriminator_data'] - - json_output = predictions_refact - - - if save_predictions!= None: - logging.info("Saving Predictions") - if output_model_data != None: - all_preds = pd.DataFrame(list(zip(higher_order_pred, all_labels_pred,transformer_data,discriminator_data,data)), columns =['Higher Order',"All Labels", 'Higher Order Model Data','All Labels Model Data',column_name]) - else: - all_preds = pd.DataFrame(list(zip(higher_order_pred, all_labels_pred)), columns =['Higher Order',"All Labels"]) - - if save_input != None: - all_preds = pd.concat([data_orig, all_preds], axis=1) - try: - all_preds.to_excel(save_predictions) - except ValueError: - try: - all_preds.to_csv(save_predictions) - except ValueError: - logging.exception("Error while saving predictions " + str(e)) - exit() - logging.info("Done") - - if save_json!= None: - logging.info("Saving Json") - try: - with open(save_json, 'w') as f: - for k, v in json_output.items(): - f.write('{'+str(k) + ':'+ str(v) + '\n') - - except ValueError: - logging.exception("Error while saving json analysis " + str(e)) - exit() - logging.info("Done") - - -def read_data(path_to_file): - - try: - df = pd.read_excel(path_to_file) - return df - except ValueError: - try: - df = pd.read_csv(path_to_file) - return df - except ValueError: - logging.exception("### Error occurred while splitting document. Info: " + str(e)) - exit() - - - -def run(): - fire.Fire(data_extraction) - -if __name__ == '__main__': - logging.basicConfig(format="%(asctime)s - %(levelname)s - %(filename)s - %(message)s",datefmt="%d/%m/%Y %H:%M:%S",level=logging.INFO) - run() - - - - diff --git a/spaces/twizy/Linaqruf-animagine-xl/app.py b/spaces/twizy/Linaqruf-animagine-xl/app.py deleted file mode 100644 index b62d9962eba4cd4b8d7e2c184c743b5be8bc0b2e..0000000000000000000000000000000000000000 --- a/spaces/twizy/Linaqruf-animagine-xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/animagine-xl").launch() \ No newline at end of file diff --git a/spaces/ucalyptus/DragGAN-unofficial/stylegan2/op/conv2d_gradfix.py b/spaces/ucalyptus/DragGAN-unofficial/stylegan2/op/conv2d_gradfix.py deleted file mode 100644 index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/DragGAN-unofficial/stylegan2/op/conv2d_gradfix.py +++ /dev/null @@ -1,227 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): 
- if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]): - return True - - warnings.warn( - f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." - ) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, 
grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/app.py b/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/app.py deleted file mode 100644 index ad7f8c68d507f205035bf2d8a6689e77f34f20bd..0000000000000000000000000000000000000000 --- a/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import gradio as gr -import re -import contractions -import unicodedata - -import numpy as np -import nltk -nltk.download('punkt') -nltk.download('stopwords') - -import os - -os.system('python -m spacy download en_core_web_sm') - -import spacy -import en_core_web_sm -nlp = en_core_web_sm.load() -# nlp = spacy.load('en_core_web_sm') - -def spacy_lemmatize_text(text): - text = nlp(text) - text = ' '.join([word.lemma_ if word.lemma_ != '-PRON-' else word.text for word in text]) - return text - -def remove_accented_chars(text): - text = unicodedata.normalize('NFC', text).encode('ascii', 'ignore').decode('utf-8', 'ignore') - return text - -def remove_special_characters(text, remove_digits=False): - pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]' - text = re.sub(pattern, '', text) - return text - -def remove_stopwords(text, is_lower_case=False, stopwords=None): - if not stopwords: - stopwords = nltk.corpus.stopwords.words('english') - tokens = nltk.word_tokenize(text) - tokens = [token.strip() for token in tokens] - - if is_lower_case: - filtered_tokens = [token for token in tokens if token not in stopwords] - else: - filtered_tokens = [token for token in tokens if token.lower() not in stopwords] - - filtered_text = ' '.join(filtered_tokens) - return filtered_text - -def greet(sentence): - opo_texto_sem_caracteres_especiais = (remove_accented_chars(sentence)) - # sentenceMCTIList_base = nltk.word_tokenize(opo_texto_sem_caracteres_especiais) - sentenceExpanded = contractions.fix(opo_texto_sem_caracteres_especiais) - sentenceWithoutPunctuation = remove_special_characters(sentenceExpanded , remove_digits=True) - sentenceLowered = sentenceWithoutPunctuation.lower() - sentenceLemmatized = spacy_lemmatize_text(sentenceLowered) - sentenceLemStopped = remove_stopwords(sentenceLemmatized, is_lower_case=False) - - return nltk.word_tokenize(sentenceLemStopped) - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Deklarata E Pavaresise Se Kosoves Pdf Free.md b/spaces/usbethFlerru/sovits-modelsV2/example/Deklarata E Pavaresise Se Kosoves Pdf Free.md deleted file mode 100644 index f4a38b72474d90d1bdb4e4e8c257733d8ea5e1b7..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Deklarata E Pavaresise Se Kosoves Pdf Free.md +++ /dev/null @@ -1,6 +0,0 @@ -
          -

          -

          Deklarata E Pavaresise Se Kosoves Pdf Free


          Download File ✸✸✸ https://urlcod.com/2uyW0o



          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/modules/textual_inversion/preprocess.py b/spaces/user238921933/stable-diffusion-webui/modules/textual_inversion/preprocess.py deleted file mode 100644 index e1902115c97a076ace06e07f3a2e94085cb707cf..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/textual_inversion/preprocess.py +++ /dev/null @@ -1,230 +0,0 @@ -import os -from PIL import Image, ImageOps -import math -import platform -import sys -import tqdm -import time - -from modules import paths, shared, images, deepbooru -from modules.shared import opts, cmd_opts -from modules.textual_inversion import autocrop - - -def preprocess(id_task, process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None): - try: - if process_caption: - shared.interrogator.load() - - if process_caption_deepbooru: - deepbooru.model.start() - - preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug, process_multicrop, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold) - - finally: - - if process_caption: - shared.interrogator.send_blip_to_ram() - - if process_caption_deepbooru: - deepbooru.model.stop() - - -def listfiles(dirname): - return os.listdir(dirname) - - -class PreprocessParams: - src = None - dstdir = None - subindex = 0 - flip = False - process_caption = False - process_caption_deepbooru = False - preprocess_txt_action = None - - -def save_pic_with_caption(image, index, params: PreprocessParams, existing_caption=None): - caption = "" - - if params.process_caption: - caption += shared.interrogator.generate_caption(image) - - if params.process_caption_deepbooru: - if len(caption) > 0: - caption += ", " - caption += deepbooru.model.tag_multi(image) - - filename_part = params.src - filename_part = os.path.splitext(filename_part)[0] - filename_part = os.path.basename(filename_part) - - basename = f"{index:05}-{params.subindex}-{filename_part}" - image.save(os.path.join(params.dstdir, f"{basename}.png")) - - if params.preprocess_txt_action == 'prepend' and existing_caption: - caption = existing_caption + ' ' + caption - elif params.preprocess_txt_action == 'append' and existing_caption: - caption = caption + ' ' + existing_caption - elif params.preprocess_txt_action == 'copy' and existing_caption: - caption = existing_caption - - caption = caption.strip() - - if len(caption) > 0: - with open(os.path.join(params.dstdir, f"{basename}.txt"), "w", encoding="utf8") as file: - file.write(caption) - - params.subindex += 1 - - -def save_pic(image, index, params, existing_caption=None): - 
save_pic_with_caption(image, index, params, existing_caption=existing_caption) - - if params.flip: - save_pic_with_caption(ImageOps.mirror(image), index, params, existing_caption=existing_caption) - - -def split_pic(image, inverse_xy, width, height, overlap_ratio): - if inverse_xy: - from_w, from_h = image.height, image.width - to_w, to_h = height, width - else: - from_w, from_h = image.width, image.height - to_w, to_h = width, height - h = from_h * to_w // from_w - if inverse_xy: - image = image.resize((h, to_w)) - else: - image = image.resize((to_w, h)) - - split_count = math.ceil((h - to_h * overlap_ratio) / (to_h * (1.0 - overlap_ratio))) - y_step = (h - to_h) / (split_count - 1) - for i in range(split_count): - y = int(y_step * i) - if inverse_xy: - splitted = image.crop((y, 0, y + to_h, to_w)) - else: - splitted = image.crop((0, y, to_w, y + to_h)) - yield splitted - -# not using torchvision.transforms.CenterCrop because it doesn't allow float regions -def center_crop(image: Image, w: int, h: int): - iw, ih = image.size - if ih / h < iw / w: - sw = w * ih / h - box = (iw - sw) / 2, 0, iw - (iw - sw) / 2, ih - else: - sh = h * iw / w - box = 0, (ih - sh) / 2, iw, ih - (ih - sh) / 2 - return image.resize((w, h), Image.Resampling.LANCZOS, box) - - -def multicrop_pic(image: Image, mindim, maxdim, minarea, maxarea, objective, threshold): - iw, ih = image.size - err = lambda w, h: 1-(lambda x: x if x < 1 else 1/x)(iw/ih/(w/h)) - wh = max(((w, h) for w in range(mindim, maxdim+1, 64) for h in range(mindim, maxdim+1, 64) - if minarea <= w * h <= maxarea and err(w, h) <= threshold), - key= lambda wh: (wh[0]*wh[1], -err(*wh))[::1 if objective=='Maximize area' else -1], - default=None - ) - return wh and center_crop(image, *wh) - - -def preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None): - width = process_width - height = process_height - src = os.path.abspath(process_src) - dst = os.path.abspath(process_dst) - split_threshold = max(0.0, min(1.0, split_threshold)) - overlap_ratio = max(0.0, min(0.9, overlap_ratio)) - - assert src != dst, 'same directory specified as source and destination' - - os.makedirs(dst, exist_ok=True) - - files = listfiles(src) - - shared.state.job = "preprocess" - shared.state.textinfo = "Preprocessing..." 
- shared.state.job_count = len(files) - - params = PreprocessParams() - params.dstdir = dst - params.flip = process_flip - params.process_caption = process_caption - params.process_caption_deepbooru = process_caption_deepbooru - params.preprocess_txt_action = preprocess_txt_action - - pbar = tqdm.tqdm(files) - for index, imagefile in enumerate(pbar): - params.subindex = 0 - filename = os.path.join(src, imagefile) - try: - img = Image.open(filename).convert("RGB") - except Exception: - continue - - description = f"Preprocessing [Image {index}/{len(files)}]" - pbar.set_description(description) - shared.state.textinfo = description - - params.src = filename - - existing_caption = None - existing_caption_filename = os.path.splitext(filename)[0] + '.txt' - if os.path.exists(existing_caption_filename): - with open(existing_caption_filename, 'r', encoding="utf8") as file: - existing_caption = file.read() - - if shared.state.interrupted: - break - - if img.height > img.width: - ratio = (img.width * height) / (img.height * width) - inverse_xy = False - else: - ratio = (img.height * width) / (img.width * height) - inverse_xy = True - - process_default_resize = True - - if process_split and ratio < 1.0 and ratio <= split_threshold: - for splitted in split_pic(img, inverse_xy, width, height, overlap_ratio): - save_pic(splitted, index, params, existing_caption=existing_caption) - process_default_resize = False - - if process_focal_crop and img.height != img.width: - - dnn_model_path = None - try: - dnn_model_path = autocrop.download_and_cache_models(os.path.join(paths.models_path, "opencv")) - except Exception as e: - print("Unable to load face detection model for auto crop selection. Falling back to lower quality haar method.", e) - - autocrop_settings = autocrop.Settings( - crop_width = width, - crop_height = height, - face_points_weight = process_focal_crop_face_weight, - entropy_points_weight = process_focal_crop_entropy_weight, - corner_points_weight = process_focal_crop_edges_weight, - annotate_image = process_focal_crop_debug, - dnn_model_path = dnn_model_path, - ) - for focal in autocrop.crop_image(img, autocrop_settings): - save_pic(focal, index, params, existing_caption=existing_caption) - process_default_resize = False - - if process_multicrop: - cropped = multicrop_pic(img, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold) - if cropped is not None: - save_pic(cropped, index, params, existing_caption=existing_caption) - else: - print(f"skipped {img.width}x{img.height} image {filename} (can't find suitable size within error threshold)") - process_default_resize = False - - if process_default_resize: - img = images.resize_image(1, img, width, height) - save_pic(img, index, params, existing_caption=existing_caption) - - shared.state.nextjob() diff --git a/spaces/user238921933/stable-diffusion-webui/modules/ui_components.py b/spaces/user238921933/stable-diffusion-webui/modules/ui_components.py deleted file mode 100644 index d239d3f70938942f625f5f49e9398fcde10016bf..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/ui_components.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr - - -class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return 
"button" - - -class ToolButtonTop(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, with extra margin at top, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool-top", **kwargs) - - def get_block_name(self): - return "button" - - -class FormRow(gr.Row, gr.components.FormComponent): - """Same as gr.Row but fits inside gradio forms""" - - def get_block_name(self): - return "row" - - -class FormGroup(gr.Group, gr.components.FormComponent): - """Same as gr.Row but fits inside gradio forms""" - - def get_block_name(self): - return "group" - - -class FormHTML(gr.HTML, gr.components.FormComponent): - """Same as gr.HTML but fits inside gradio forms""" - - def get_block_name(self): - return "html" - - -class FormColorPicker(gr.ColorPicker, gr.components.FormComponent): - """Same as gr.ColorPicker but fits inside gradio forms""" - - def get_block_name(self): - return "colorpicker" - - -class DropdownMulti(gr.Dropdown): - """Same as gr.Dropdown but always multiselect""" - def __init__(self, **kwargs): - super().__init__(multiselect=True, **kwargs) - - def get_block_name(self): - return "dropdown" diff --git a/spaces/vagmi/isai/Prediction_Head/MTGGenre_head.py b/spaces/vagmi/isai/Prediction_Head/MTGGenre_head.py deleted file mode 100644 index 8d3b3f6111d18f4efd65d1f9db797d68eb88ae9c..0000000000000000000000000000000000000000 --- a/spaces/vagmi/isai/Prediction_Head/MTGGenre_head.py +++ /dev/null @@ -1,48 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -class MLPProberBase(nn.Module): - def __init__(self, d=768, layer='all', num_outputs=87): - super().__init__() - - self.hidden_layer_sizes = [512, ] # eval(self.cfg.hidden_layer_sizes) - - self.num_layers = len(self.hidden_layer_sizes) - - self.layer = layer - - for i, ld in enumerate(self.hidden_layer_sizes): - setattr(self, f"hidden_{i}", nn.Linear(d, ld)) - d = ld - self.output = nn.Linear(d, num_outputs) - - self.n_tranformer_layer = 12 - - self.init_aggregator() - - - def init_aggregator(self): - """Initialize the aggregator for weighted sum over different layers of features - """ - if self.layer == "all": - # use learned weights to aggregate features - self.aggregator = nn.Parameter(torch.randn((1, self.n_tranformer_layer, 1))) - - - def forward(self, x): - """ - x: (B, L, T, H) - T=#chunks, can be 1 or several chunks - """ - - if self.layer == "all": - weights = F.softmax(self.aggregator, dim=1) - x = (x * weights).sum(dim=1) - - for i in range(self.num_layers): - x = getattr(self, f"hidden_{i}")(x) - # x = self.dropout(x) - x = F.relu(x) - output = self.output(x) - return output \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/midas/midas/dpt_depth.py b/spaces/vumichien/canvas_controlnet/annotator/midas/midas/dpt_depth.py deleted file mode 100644 index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/midas/midas/dpt_depth.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - 
head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp b/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp deleted file mode 100644 index c94575903bdf2eef71ecbe66382375552446e510..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp +++ /dev/null @@ -1,17 +0,0 @@ -#include "libipc/pool_alloc.h" - -#include "libipc/memory/resource.h" - -namespace ipc { -namespace mem { - -void* pool_alloc::alloc(std::size_t size) { - return async_pool_alloc::alloc(size); -} - -void pool_alloc::free(void* p, std::size_t size) { - async_pool_alloc::free(p, size); -} - -} // namespace mem -} // namespace ipc diff --git a/spaces/wasimmadha/entity-extraction/utils.py b/spaces/wasimmadha/entity-extraction/utils.py deleted file mode 100644 index 062b5e45b3d77e3ecc9bc18585cdb894209130c4..0000000000000000000000000000000000000000 --- a/spaces/wasimmadha/entity-extraction/utils.py +++ /dev/null @@ -1,104 +0,0 @@ -import itertools -import torch -import numpy as np -from tqdm.auto import tqdm - -def get_char_probs(texts, predictions, tokenizer): - """ - Maps prediction from encoded offset mapping to the text - - Prediction = 466 sequence length * 
batch - text = 768 * batch - Using offset mapping [(0, 4), ] -- 466 - - creates results that is size of texts - - for each text result[i] - result[0, 4] = pred[0] like wise for all - - """ - results = [np.zeros(len(t)) for t in texts] - for i, (text, prediction) in enumerate(zip(texts, predictions)): - encoded = tokenizer(text, - add_special_tokens=True, - return_offsets_mapping=True) - for idx, (offset_mapping, pred) in enumerate(zip(encoded['offset_mapping'], prediction)): - start = offset_mapping[0] - end = offset_mapping[1] - results[i][start:end] = pred - return results - - -def get_results(char_probs, th=0.5): - """ - Get the list of probabilites with size of text - And then get the index of the characters which are more than th - example: - char_prob = [0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.2, 0.2, 0.2, 0.7, 0.7, 0.7] ## length == 766 - where > 0.5 index ## [ 2, 3, 4, 5, 9, 10, 11] - - Groupby same one -- [[2, 3, 4, 5], [9, 10, 11]] - And get the max and min and output the results - - """ - results = [] - for char_prob in char_probs: - result = np.where(char_prob >= th)[0] + 1 - result = [list(g) for _, g in itertools.groupby(result, key=lambda n, c=itertools.count(): n - next(c))] - result = [f"{min(r)} {max(r)}" for r in result] - result = ";".join(result) - results.append(result) - return results - - -def get_predictions(results): - """ - Will get the location, as a string, just like location in the df - results = ['2 5', '9 11'] - - loop through, split it and save it as start and end and store it in array - """ - predictions = [] - for result in results: - prediction = [] - if result != "": - for loc in [s.split() for s in result.split(';')]: - start, end = int(loc[0]), int(loc[1]) - prediction.append([start, end]) - predictions.append(prediction) - return predictions - -def inference_fn(test_loader, model, device): - preds = [] - model.eval() - model.to(device) - tk0 = tqdm(test_loader, total=len(test_loader)) - for inputs in tk0: - for k, v in inputs.items(): - inputs[k] = v.to(device) - with torch.no_grad(): - y_preds = model(inputs) - preds.append(y_preds.sigmoid().numpy()) - predictions = np.concatenate(preds) - return predictions - -def get_text(context, indexes): - if (indexes): - if ';' in indexes: - list_indexes = indexes.split(';') - - answer = '' - for idx in list_indexes: - start_index = int(idx.split(' ')[0]) - end_index = int(idx.split(' ')[1]) - answer += ' ' - answer += context[start_index:end_index] - return answer - else: - start_index = int(indexes.split(' ')[0]) - end_index = int(indexes.split(' ')[1]) - - return context[start_index:end_index] - else: - return 'Not found in this Context' - diff --git a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py b/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py deleted file mode 100644 index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py +++ /dev/null @@ -1,318 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@File : visualizer.py -@Time : 2022/04/05 11:39:33 -@Author : Shilong Liu -@Contact : slongliu86@gmail.com -""" - -import datetime -import os - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from matplotlib import transforms -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon -from pycocotools import mask as maskUtils - - -def renorm( - img: 
torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class ColorMap: - def __init__(self, basergb=[255, 255, 0]): - self.basergb = np.array(basergb) - - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. - assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -def rainbow_text(x, y, ls, lc, **kw): - """ - Take a list of strings ``ls`` and colors ``lc`` and place them next to each - other, with text ls[i] being shown in color lc[i]. - - This example shows how to do both vertical and horizontal text, and will - pass all keyword arguments to plt.text, so you can set the font size, - family, etc. - """ - t = plt.gca().transData - fig = plt.gcf() - plt.show() - - # horizontal version - for s, c in zip(ls, lc): - text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw) - text.draw(fig.canvas.get_renderer()) - ex = text.get_window_extent() - t = transforms.offset_copy(text._transform, x=ex.width, units="dots") - - # #vertical version - # for s,c in zip(ls,lc): - # text = plt.text(x,y," "+s+" ",color=c, transform=t, - # rotation=90,va='bottom',ha='center',**kw) - # text.draw(fig.canvas.get_renderer()) - # ex = text.get_window_extent() - # t = transforms.offset_copy(text._transform, y=ex.height, units='dots') - - -class COCOVisualizer: - def __init__(self, coco=None, tokenlizer=None) -> None: - self.coco = coco - - def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. 
- must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams["font.size"] = "5" - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - - if tgt is None: - image_id = 0 - elif "image_id" not in tgt: - image_id = 0 - else: - image_id = tgt["image_id"] - - if caption is None: - savename = "{}/{}-{}.png".format( - savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - else: - savename = "{}/{}-{}-{}.png".format( - savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ """ - if tgt is None or not "boxes" in tgt: - ax = plt.gca() - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - - ax.set_axis_off() - return - - ax = plt.gca() - H, W = tgt["size"] - numbox = tgt["boxes"].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt["boxes"].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - - if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0: - assert ( - len(tgt["strings_positive"]) == numbox - ), f"{len(tgt['strings_positive'])} = {numbox}, " - for idx, strlist in enumerate(tgt["strings_positive"]): - cate_id = int(tgt["labels"][idx]) - _string = str(cate_id) + ":" + " ".join(strlist) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "box_label" in tgt: - assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt["box_label"]): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - # plt.figure() - # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! 
!".split(), - # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black']) - - if "attn" in tgt: - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if isinstance(tgt["attn"], tuple): - tgt["attn"] = [tgt["attn"]] - for item in tgt["attn"]: - attn_map, basergb = item - attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3) - attn_map = (attn_map * 255).astype(np.uint8) - cm = ColorMap(basergb) - heatmap = cm(attn_map) - ax.imshow(heatmap) - ax.set_axis_off() - - def showAnns(self, anns, draw_bbox=False): - """ - Display the specified annotations. - :param anns (array of object): annotations to display - :return: None - """ - if len(anns) == 0: - return 0 - if "segmentation" in anns[0] or "keypoints" in anns[0]: - datasetType = "instances" - elif "caption" in anns[0]: - datasetType = "captions" - else: - raise Exception("datasetType not supported") - if datasetType == "instances": - ax = plt.gca() - ax.set_autoscale_on(False) - polygons = [] - color = [] - for ann in anns: - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - if "segmentation" in ann: - if type(ann["segmentation"]) == list: - # polygon - for seg in ann["segmentation"]: - poly = np.array(seg).reshape((int(len(seg) / 2), 2)) - polygons.append(Polygon(poly)) - color.append(c) - else: - # mask - t = self.imgs[ann["image_id"]] - if type(ann["segmentation"]["counts"]) == list: - rle = maskUtils.frPyObjects( - [ann["segmentation"]], t["height"], t["width"] - ) - else: - rle = [ann["segmentation"]] - m = maskUtils.decode(rle) - img = np.ones((m.shape[0], m.shape[1], 3)) - if ann["iscrowd"] == 1: - color_mask = np.array([2.0, 166.0, 101.0]) / 255 - if ann["iscrowd"] == 0: - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:, :, i] = color_mask[i] - ax.imshow(np.dstack((img, m * 0.5))) - if "keypoints" in ann and type(ann["keypoints"]) == list: - # turn skeleton into zero-based index - sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1 - kp = np.array(ann["keypoints"]) - x = kp[0::3] - y = kp[1::3] - v = kp[2::3] - for sk in sks: - if np.all(v[sk] > 0): - plt.plot(x[sk], y[sk], linewidth=3, color=c) - plt.plot( - x[v > 0], - y[v > 0], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor="k", - markeredgewidth=2, - ) - plt.plot( - x[v > 1], - y[v > 1], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor=c, - markeredgewidth=2, - ) - - if draw_bbox: - [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"] - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(c) - - # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4) - # ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - elif datasetType == "captions": - for ann in anns: - print(ann["caption"]) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/web_browser_engine.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/web_browser_engine.py deleted file mode 100644 index 1f1a5ec67f2ff9f7439bed02cf4c76ec7a03c235..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/web_browser_engine.py +++ /dev/null @@ -1,60 +0,0 @@ -#!/usr/bin/env python -""" -@Modified By: mashenquan, 2023/8/20. 
Remove global configuration `CONFIG`, enable configuration support for business isolation. -""" - -from __future__ import annotations - -import importlib -from typing import Any, Callable, Coroutine, Dict, Literal, overload - -from metagpt.config import CONFIG -from metagpt.tools import WebBrowserEngineType -from metagpt.utils.parse_html import WebPage - - -class WebBrowserEngine: - def __init__( - self, - options: Dict, - engine: WebBrowserEngineType | None = None, - run_func: Callable[..., Coroutine[Any, Any, WebPage | list[WebPage]]] | None = None, - ): - engine = engine or options.get("web_browser_engine") - if engine is None: - raise NotImplementedError - - if WebBrowserEngineType(engine) is WebBrowserEngineType.PLAYWRIGHT: - module = "metagpt.tools.web_browser_engine_playwright" - run_func = importlib.import_module(module).PlaywrightWrapper(options=options).run - elif WebBrowserEngineType(engine) is WebBrowserEngineType.SELENIUM: - module = "metagpt.tools.web_browser_engine_selenium" - run_func = importlib.import_module(module).SeleniumWrapper(options=options).run - elif WebBrowserEngineType(engine) is WebBrowserEngineType.CUSTOM: - run_func = run_func - else: - raise NotImplementedError - self.run_func = run_func - self.engine = engine - - @overload - async def run(self, url: str) -> WebPage: - ... - - @overload - async def run(self, url: str, *urls: str) -> list[WebPage]: - ... - - async def run(self, url: str, *urls: str) -> WebPage | list[WebPage]: - return await self.run_func(url, *urls) - - -if __name__ == "__main__": - import fire - - async def main(url: str, *urls: str, engine_type: Literal["playwright", "selenium"] = "playwright", **kwargs): - return await WebBrowserEngine(options=CONFIG.options, engine=WebBrowserEngineType(engine_type), **kwargs).run( - url, *urls - ) - - fire.Fire(main) diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_write_code.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_write_code.py deleted file mode 100644 index d53e3724344ffdd3a8f91b8a9f427ed8c83ffcc4..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_write_code.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:45 -@Author : alexanderwu -@File : test_write_code.py -@Modified By: mashenquan, 2023-8-1, fix-bug: `filename` of `write_code.run()` is missing. -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. 
-""" -import pytest - -from metagpt.config import Config -from metagpt.provider.openai_api import OpenAIGPTAPI as LLM, CostManager -from metagpt.actions.write_code import WriteCode -from metagpt.logs import logger -from tests.metagpt.actions.mock import TASKS_2, WRITE_CODE_PROMPT_SAMPLE - - -@pytest.mark.asyncio -async def test_write_code(): - api_design = "设计一个名为'add'的函数,该函数接受两个整数作为输入,并返回它们的和。" - conf = Config() - cost_manager = CostManager(**conf.runtime_options) - llm = LLM(options=conf.runtime_options, cost_manager=cost_manager) - write_code = WriteCode(options=conf.runtime_options, name="write_code", llm=llm) - code = await write_code.run(context=api_design, filename="test") - logger.info(code) - - # 我们不能精确地预测生成的代码,但我们可以检查某些关键字 - assert 'def add' in code - assert 'return' in code - - -@pytest.mark.asyncio -async def test_write_code_directly(): - prompt = WRITE_CODE_PROMPT_SAMPLE + '\n' + TASKS_2[0] - options = Config().runtime_options - llm = LLM(options=options, cost_manager=CostManager(**options)) - rsp = await llm.aask(prompt) - logger.info(rsp) diff --git a/spaces/will1885/will/Dockerfile b/spaces/will1885/will/Dockerfile deleted file mode 100644 index 3a4dc66fdb50519fca2a6eaf64cbe0ea05b09a3f..0000000000000000000000000000000000000000 --- a/spaces/will1885/will/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -EXPOSE 7860 - -CMD ["shiny", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/wilson1/bingo/README.md b/spaces/wilson1/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
          - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
          - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
          - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
          - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
          -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
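As a rough illustration of how the two formats above relate: the sample `BING_HEADER` in the next block is simply the Base64 encoding of the copied `curl` command. The sketch below assumes that relationship holds and that the copied command has been saved to a local file; the file name `bing-header.curl` and the helper function are illustrative only, not part of the project.

```python
# Hypothetical helper (assumption: BING_HEADER is plain Base64 of the copied curl command,
# which is what the samples in this README suggest).
import base64
from pathlib import Path


def curl_to_bing_header(path: str = "bing-header.curl") -> str:
    # Read the copied `curl ...` text and return its Base64 encoding.
    raw = Path(path).read_text(encoding="utf-8").strip()
    return base64.b64encode(raw.encode("utf-8")).decode("ascii")


if __name__ == "__main__":
    # Paste the printed value into the BING_HEADER environment variable,
    # e.g. `docker run -e BING_HEADER=... -p 7860:7860 weaigc/bingo`.
    print(curl_to_bing_header())
```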
          - -
          -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5
ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
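If you want to sanity-check a configured value before deploying, note that a decoded `BING_HEADER` should start with `curl `, matching the first sample above. This is only a quick illustrative check under that assumption:

```python
# Quick sanity check (assumption: BING_HEADER decodes to a curl command, as in the sample above).
import base64
import binascii
import os

header = os.environ.get("BING_HEADER", "")
try:
    decoded = base64.b64decode(header).decode("utf-8", errors="replace")
except (binascii.Error, ValueError):
    decoded = ""
print("looks like a Base64-encoded curl command:", decoded.startswith("curl "))
```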
          - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/wonderit-safeai/tts-announcer/text/cleaners.py b/spaces/wonderit-safeai/tts-announcer/text/cleaners.py deleted file mode 100644 index 4af8bcc1827b1a9a9f6fdd2a90aff30e8a7b5104..0000000000000000000000000000000000000000 --- a/spaces/wonderit-safeai/tts-announcer/text/cleaners.py +++ /dev/null @@ -1,215 +0,0 @@ -import re -from unidecode import unidecode -from phonemizer import phonemize -from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa -from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2 - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r"\s+") - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [ - (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1]) - for x in [ - ("mrs", "misess"), - ("mr", "mister"), - ("dr", "doctor"), - ("st", "saint"), - ("co", "company"), - ("jr", "junior"), - ("maj", "major"), - ("gen", "general"), - ("drs", "doctors"), - ("rev", "reverend"), - ("lt", "lieutenant"), - ("hon", "honorable"), - ("sgt", "sergeant"), - ("capt", "captain"), - ("esq", "esquire"), - ("ltd", "limited"), - ("col", "colonel"), - ("ft", "fort"), - ] -] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, " ", text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - """Basic pipeline that lowercases and collapses whitespace without transliteration.""" - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - """Pipeline for non-English text that transliterates to ASCII.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def japanese_cleaners(text): - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - - -def english_cleaners(text): - """Pipeline for English text, including abbreviation expansion.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language="en-us", backend="espeak", strip=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_cleaners2(text): - """Pipeline for English text, including abbreviation expansion. 
+ punctuation + stress""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize( - text, - language="en-us", - backend="espeak", - strip=True, - preserve_punctuation=True, - with_stress=True, - ) - phonemes = collapse_whitespace(phonemes) - return phonemes - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - text = re.sub(r'([^।])$', r'\1।', text) - return text - - -def cjks_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 
'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/wz758727829/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/wz758727829/ChuanhuChatGPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/wz758727829/ChuanhuChatGPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/datasets/image/__init__.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/datasets/image/__init__.py deleted file mode 100644 index f2216e96db061ee38f7172147778a495d3124db0..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/datasets/image/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from __future__ import print_function, absolute_import - -from .grid import GRID -from .prid import PRID -from .ilids import iLIDS -from .viper import VIPeR -from .cuhk01 import CUHK01 -from .cuhk02 import CUHK02 -from .cuhk03 import CUHK03 -from .msmt17 import MSMT17 -from .cuhksysu import CUHKSYSU -from .sensereid import SenseReID -from .market1501 import Market1501 -from .dukemtmcreid import DukeMTMCreID -from .university1652 import University1652 diff --git a/spaces/xfys/yolov5_tracking/yolov5/segment/predict.py b/spaces/xfys/yolov5_tracking/yolov5/segment/predict.py deleted file mode 100644 index 4d4d6036358a755e297cbc83e2579242712b128a..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/segment/predict.py +++ /dev/null @@ -1,284 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Run YOLOv5 segmentation inference on images, videos, directories, streams, etc. 
- -Usage - sources: - $ python segment/predict.py --weights yolov5s-seg.pt --source 0 # webcam - img.jpg # image - vid.mp4 # video - screen # screenshot - path/ # directory - list.txt # list of images - list.streams # list of streams - 'path/*.jpg' # glob - 'https://youtu.be/Zgi9g1ksQHc' # YouTube - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream - -Usage - formats: - $ python segment/predict.py --weights yolov5s-seg.pt # PyTorch - yolov5s-seg.torchscript # TorchScript - yolov5s-seg.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s-seg_openvino_model # OpenVINO - yolov5s-seg.engine # TensorRT - yolov5s-seg.mlmodel # CoreML (macOS-only) - yolov5s-seg_saved_model # TensorFlow SavedModel - yolov5s-seg.pb # TensorFlow GraphDef - yolov5s-seg.tflite # TensorFlow Lite - yolov5s-seg_edgetpu.tflite # TensorFlow Edge TPU - yolov5s-seg_paddle_model # PaddlePaddle -""" - -import argparse -import os -import platform -import sys -from pathlib import Path - -import torch - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams -from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, - increment_path, non_max_suppression, print_args, scale_boxes, scale_segments, - strip_optimizer) -from utils.plots import Annotator, colors, save_one_box -from utils.segment.general import masks2segments, process_mask, process_mask_native -from utils.torch_utils import select_device, smart_inference_mode - - -@smart_inference_mode() -def run( - weights=ROOT / 'yolov5s-seg.pt', # model.pt path(s) - source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - imgsz=(640, 640), # inference size (height, width) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - view_img=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - project=ROOT / 'runs/predict-seg', # save results to project/name - name='exp', # save results to project/name - exist_ok=False, # existing project/name ok, do not increment - line_thickness=3, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - half=False, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - vid_stride=1, # video frame-rate stride - retina_masks=False, -): - source = str(source) - save_img = not nosave and not source.endswith('.txt') # save inference images - is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) - is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file) - screenshot = source.lower().startswith('screen') - if is_url and is_file: - source = check_file(source) # download - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - device = select_device(device) - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, names, pt = model.stride, model.names, model.pt - imgsz = check_img_size(imgsz, s=stride) # check image size - - # Dataloader - bs = 1 # batch_size - if webcam: - view_img = check_imshow(warn=True) - dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - bs = len(dataset) - elif screenshot: - dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - vid_path, vid_writer = [None] * bs, [None] * bs - - # Run inference - model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup - seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) - for path, im, im0s, vid_cap, s in dataset: - with dt[0]: - im = torch.from_numpy(im).to(model.device) - im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - if len(im.shape) == 3: - im = im[None] # expand for batch dim - - # Inference - with dt[1]: - visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False - pred, proto = model(im, augment=augment, visualize=visualize)[:2] - - # NMS - with dt[2]: - pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det, nm=32) - - # Second-stage classifier (optional) - # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) - - # Process predictions - for i, det in enumerate(pred): # per image - seen += 1 - if webcam: # batch_size >= 1 - p, im0, frame = path[i], im0s[i].copy(), dataset.count - s += f'{i}: ' - else: - p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # im.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode 
== 'image' else f'_{frame}') # im.txt - s += '%gx%g ' % im.shape[2:] # print string - imc = im0.copy() if save_crop else im0 # for save_crop - annotator = Annotator(im0, line_width=line_thickness, example=str(names)) - if len(det): - if retina_masks: - # scale bbox first the crop masks - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # rescale boxes to im0 size - masks = process_mask_native(proto[i], det[:, 6:], det[:, :4], im0.shape[:2]) # HWC - else: - masks = process_mask(proto[i], det[:, 6:], det[:, :4], im.shape[2:], upsample=True) # HWC - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # rescale boxes to im0 size - - # Segments - if save_txt: - segments = [ - scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True) - for x in reversed(masks2segments(masks))] - - # Print results - for c in det[:, 5].unique(): - n = (det[:, 5] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Mask plotting - annotator.masks( - masks, - colors=[colors(x, True) for x in det[:, 5]], - im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() / - 255 if retina_masks else im[i]) - - # Write results - for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])): - if save_txt: # Write to file - seg = segments[j].reshape(-1) # (n,2) to (n*2) - line = (cls, *seg, conf) if save_conf else (cls, *seg) # label format - with open(f'{txt_path}.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or save_crop or view_img: # Add bbox to image - c = int(cls) # integer class - label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') - annotator.box_label(xyxy, label, color=colors(c, True)) - # annotator.draw.polygon(segments[j], outline=colors(c, True), width=3) - if save_crop: - save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) - - # Stream results - im0 = annotator.result() - if view_img: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) - cv2.imshow(str(p), im0) - if cv2.waitKey(1) == ord('q'): # 1 millisecond - exit() - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - else: # 'video' or 'stream' - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[i].release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos - vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer[i].write(im0) - - # Print time (inference-only) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") - - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir 
/ 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - if update: - strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-seg.pt', help='model path(s)') - parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='show results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--visualize', action='store_true', help='visualize features') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default=ROOT / 'runs/predict-seg', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') - parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') - parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') - parser.add_argument('--retina-masks', action='store_true', help='whether to plot masks in native resolution') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) - run(**vars(opt)) - - -if __name__ == '__main__': - opt = parse_opt() - main(opt) diff --git a/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/clearml/hpo.py b/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/clearml/hpo.py deleted file mode 100644 index ee518b0fbfc89ee811b51bbf85341eee4f685be1..0000000000000000000000000000000000000000 --- 
a/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/clearml/hpo.py +++ /dev/null @@ -1,84 +0,0 @@ -from clearml import Task -# Connecting ClearML with the current process, -# from here on everything is logged automatically -from clearml.automation import HyperParameterOptimizer, UniformParameterRange -from clearml.automation.optuna import OptimizerOptuna - -task = Task.init(project_name='Hyper-Parameter Optimization', - task_name='YOLOv5', - task_type=Task.TaskTypes.optimizer, - reuse_last_task_id=False) - -# Example use case: -optimizer = HyperParameterOptimizer( - # This is the experiment we want to optimize - base_task_id='', - # here we define the hyper-parameters to optimize - # Notice: The parameter name should exactly match what you see in the UI: / - # For Example, here we see in the base experiment a section Named: "General" - # under it a parameter named "batch_size", this becomes "General/batch_size" - # If you have `argparse` for example, then arguments will appear under the "Args" section, - # and you should instead pass "Args/batch_size" - hyper_parameters=[ - UniformParameterRange('Hyperparameters/lr0', min_value=1e-5, max_value=1e-1), - UniformParameterRange('Hyperparameters/lrf', min_value=0.01, max_value=1.0), - UniformParameterRange('Hyperparameters/momentum', min_value=0.6, max_value=0.98), - UniformParameterRange('Hyperparameters/weight_decay', min_value=0.0, max_value=0.001), - UniformParameterRange('Hyperparameters/warmup_epochs', min_value=0.0, max_value=5.0), - UniformParameterRange('Hyperparameters/warmup_momentum', min_value=0.0, max_value=0.95), - UniformParameterRange('Hyperparameters/warmup_bias_lr', min_value=0.0, max_value=0.2), - UniformParameterRange('Hyperparameters/box', min_value=0.02, max_value=0.2), - UniformParameterRange('Hyperparameters/cls', min_value=0.2, max_value=4.0), - UniformParameterRange('Hyperparameters/cls_pw', min_value=0.5, max_value=2.0), - UniformParameterRange('Hyperparameters/obj', min_value=0.2, max_value=4.0), - UniformParameterRange('Hyperparameters/obj_pw', min_value=0.5, max_value=2.0), - UniformParameterRange('Hyperparameters/iou_t', min_value=0.1, max_value=0.7), - UniformParameterRange('Hyperparameters/anchor_t', min_value=2.0, max_value=8.0), - UniformParameterRange('Hyperparameters/fl_gamma', min_value=0.0, max_value=4.0), - UniformParameterRange('Hyperparameters/hsv_h', min_value=0.0, max_value=0.1), - UniformParameterRange('Hyperparameters/hsv_s', min_value=0.0, max_value=0.9), - UniformParameterRange('Hyperparameters/hsv_v', min_value=0.0, max_value=0.9), - UniformParameterRange('Hyperparameters/degrees', min_value=0.0, max_value=45.0), - UniformParameterRange('Hyperparameters/translate', min_value=0.0, max_value=0.9), - UniformParameterRange('Hyperparameters/scale', min_value=0.0, max_value=0.9), - UniformParameterRange('Hyperparameters/shear', min_value=0.0, max_value=10.0), - UniformParameterRange('Hyperparameters/perspective', min_value=0.0, max_value=0.001), - UniformParameterRange('Hyperparameters/flipud', min_value=0.0, max_value=1.0), - UniformParameterRange('Hyperparameters/fliplr', min_value=0.0, max_value=1.0), - UniformParameterRange('Hyperparameters/mosaic', min_value=0.0, max_value=1.0), - UniformParameterRange('Hyperparameters/mixup', min_value=0.0, max_value=1.0), - UniformParameterRange('Hyperparameters/copy_paste', min_value=0.0, max_value=1.0)], - # this is the objective metric we want to maximize/minimize - objective_metric_title='metrics', - objective_metric_series='mAP_0.5', - # now we decide if we 
want to maximize it or minimize it (accuracy we maximize) - objective_metric_sign='max', - # let us limit the number of concurrent experiments, - # this in turn will make sure we do dont bombard the scheduler with experiments. - # if we have an auto-scaler connected, this, by proxy, will limit the number of machine - max_number_of_concurrent_tasks=1, - # this is the optimizer class (actually doing the optimization) - # Currently, we can choose from GridSearch, RandomSearch or OptimizerBOHB (Bayesian optimization Hyper-Band) - optimizer_class=OptimizerOptuna, - # If specified only the top K performing Tasks will be kept, the others will be automatically archived - save_top_k_tasks_only=5, # 5, - compute_time_limit=None, - total_max_jobs=20, - min_iteration_per_job=None, - max_iteration_per_job=None, -) - -# report every 10 seconds, this is way too often, but we are testing here -optimizer.set_report_period(10 / 60) -# You can also use the line below instead to run all the optimizer tasks locally, without using queues or agent -# an_optimizer.start_locally(job_complete_callback=job_complete_callback) -# set the time limit for the optimization process (2 hours) -optimizer.set_time_limit(in_minutes=120.0) -# Start the optimization process in the local environment -optimizer.start_locally() -# wait until process is done (notice we are controlling the optimization process in the background) -optimizer.wait() -# make sure background optimization stopped -optimizer.stop() - -print('We are done, good bye') diff --git a/spaces/xswu/HPSv2/tests/test_wds.py b/spaces/xswu/HPSv2/tests/test_wds.py deleted file mode 100644 index 3c7f8948a857a0ae024a7dfc1a88fbd990439fdd..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/tests/test_wds.py +++ /dev/null @@ -1,149 +0,0 @@ -import os -import pytest -import util_test -import collections -import tarfile -import io -from PIL import Image - -from training.data import get_wds_dataset -from training.params import parse_args -from training.main import random_seed - -TRAIN_NUM_SAMPLES = 10_000 -RTOL = 0.2 - -# NOTE: we use two test tar files, which are created on the fly and saved to data/input. 
-# 000.tar has 10 samples, and the captions are 000_0, 000_1, ..., 000_9 -# 001.tar has 5 samples, and the captions are 001_0, 001_1, ..., 001_4 -def build_inputs(test_name): - base_input_dir, _ = util_test.get_data_dirs() - input_dir = os.path.join(base_input_dir, test_name) - os.makedirs(input_dir, exist_ok=True) - - def save_tar(idx, num_samples): - filename = os.path.join(input_dir, f'test_data_{idx:03d}.tar') - tar = tarfile.open(filename, 'w') - - for sample_idx in range(num_samples): - # Image - image = Image.new('RGB', (32, 32)) - info = tarfile.TarInfo(f'{sample_idx}.png') - bio = io.BytesIO() - image.save(bio, format='png') - size = bio.tell() - bio.seek(0) - info.size = size - tar.addfile(info, bio) - - # Caption - info = tarfile.TarInfo(f'{sample_idx}.txt') - bio = io.BytesIO() - bio.write(f'{idx:03d}_{sample_idx}'.encode('utf-8')) - size = bio.tell() - bio.seek(0) - info.size = size - tar.addfile(info, bio) - - tar.close() - - save_tar(0, 10) - save_tar(1, 5) - - return input_dir - - -def build_params(input_shards, seed=0): - args = parse_args([]) - args.train_data = input_shards - args.train_num_samples = TRAIN_NUM_SAMPLES - args.dataset_resampled = True - args.seed = seed - args.workers = 1 - args.world_size = 1 - args.batch_size = 1 - random_seed(seed) - - preprocess_img = lambda x: x - tokenizer = lambda x: [x.strip()] - - return args, preprocess_img, tokenizer - - -def get_dataloader(input_shards): - args, preprocess_img, tokenizer = build_params(input_shards) - dataset = get_wds_dataset(args, preprocess_img, is_train=True, tokenizer=tokenizer) - dataloader = dataset.dataloader - return dataloader - - -def test_single_source(): - """Test webdataset with a single tar file.""" - input_dir = build_inputs('single_source') - input_shards = os.path.join(input_dir, 'test_data_000.tar') - dataloader = get_dataloader(input_shards) - - counts = collections.defaultdict(int) - for sample in dataloader: - txts = sample[1] - for txt in txts: - counts[txt] += 1 - - for key, count in counts.items(): - assert count == pytest.approx(TRAIN_NUM_SAMPLES / 10, RTOL) - - -def test_two_sources(): - """Test webdataset with a single two tar files.""" - input_dir = build_inputs('two_sources') - input_shards = os.path.join(input_dir, 'test_data_{000..001}.tar') - dataloader = get_dataloader(input_shards) - - counts = collections.defaultdict(int) - for sample in dataloader: - txts = sample[1] - for txt in txts: - counts[txt] += 1 - - for key, count in counts.items(): - assert count == pytest.approx(TRAIN_NUM_SAMPLES / 15, RTOL), f'{key}, {count}' - - -def test_two_sources_same_weights(): - """Test webdataset with a two tar files, using --train-data-weights=1::1.""" - input_dir = build_inputs('two_sources_same_weights') - input_shards = f"{os.path.join(input_dir, 'test_data_000.tar')}::{os.path.join(input_dir, 'test_data_001.tar')}" - args, preprocess_img, tokenizer = build_params(input_shards) - args.train_data_upsampling_factors = '1::1' - dataset = get_wds_dataset(args, preprocess_img, is_train=True, tokenizer=tokenizer) - dataloader = dataset.dataloader - - counts = collections.defaultdict(int) - for sample in dataloader: - txts = sample[1] - for txt in txts: - counts[txt] += 1 - - for key, count in counts.items(): - assert count == pytest.approx(TRAIN_NUM_SAMPLES / 15, RTOL), f'{key}, {count}' - -def test_two_sources_with_upsampling(): - """Test webdataset with a two tar files with upsampling.""" - input_dir = build_inputs('two_sources_with_upsampling') - input_shards = f"{os.path.join(input_dir, 
'test_data_000.tar')}::{os.path.join(input_dir, 'test_data_001.tar')}" - args, preprocess_img, tokenizer = build_params(input_shards) - args.train_data_upsampling_factors = '1::2' - dataset = get_wds_dataset(args, preprocess_img, is_train=True, tokenizer=tokenizer) - dataloader = dataset.dataloader - - counts = collections.defaultdict(int) - for sample in dataloader: - txts = sample[1] - for txt in txts: - counts[txt] += 1 - - for key, count in counts.items(): - if key.startswith('000'): - assert count == pytest.approx(TRAIN_NUM_SAMPLES / 20, RTOL), f'{key}, {count}' - else: - assert count == pytest.approx(TRAIN_NUM_SAMPLES / 10, RTOL), f'{key}, {count}' diff --git a/spaces/xuetao/bingo3/src/components/chat-suggestions.tsx b/spaces/xuetao/bingo3/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
          -
          - - { - currentSuggestions.map(suggestion => ( - - )) - } -
          -
          - ) : null -} diff --git a/spaces/yale-CPSC-577/musical-tone-123/README.md b/spaces/yale-CPSC-577/musical-tone-123/README.md deleted file mode 100644 index 33597eb4cd6684c49f50af454c70f4cc08bf5bb4..0000000000000000000000000000000000000000 --- a/spaces/yale-CPSC-577/musical-tone-123/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Musical Tone 123 -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/losses/__init__.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/losses/__init__.py deleted file mode 100644 index 2b184e74c861e6fca0c548692a9a949a6100b0aa..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/losses/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -from copy import deepcopy - -from basicsr.utils import get_root_logger -from basicsr.utils.registry import LOSS_REGISTRY -from .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize, - gradient_penalty_loss, r1_penalty) - -__all__ = [ - 'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss', - 'r1_penalty', 'g_path_regularize' -] - - -def build_loss(opt): - """Build loss from options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. - """ - opt = deepcopy(opt) - loss_type = opt.pop('type') - loss = LOSS_REGISTRY.get(loss_type)(**opt) - logger = get_root_logger() - logger.info(f'Loss [{loss.__class__.__name__}] is created.') - return loss diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/metrics/__init__.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/metrics/__init__.py deleted file mode 100644 index 19d55cc8321f124c918d78465b053aef67f13a33..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/metrics/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from copy import deepcopy - -from basicsr.utils.registry import METRIC_REGISTRY -from .psnr_ssim import calculate_psnr, calculate_ssim - -__all__ = ['calculate_psnr', 'calculate_ssim'] - - -def calculate_metric(data, opt): - """Calculate metric from data and options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. 
- """ - opt = deepcopy(opt) - metric_type = opt.pop('type') - metric = METRIC_REGISTRY.get(metric_type)(**data, **opt) - return metric diff --git a/spaces/yixin6178/arXiv2Latex/backend.py b/spaces/yixin6178/arXiv2Latex/backend.py deleted file mode 100644 index d7cd868f3dd291e71925852f909c6fafdcd2ca30..0000000000000000000000000000000000000000 --- a/spaces/yixin6178/arXiv2Latex/backend.py +++ /dev/null @@ -1,86 +0,0 @@ -import tarfile -import os -import requests -import datetime -import pandas as pd -import shutil -from bs4 import BeautifulSoup -from tqdm import tqdm -import base64 - -def ToBase64(file): - with open(file, 'rb') as fileObj: - data = fileObj.read() - base64_data = base64.b64encode(data) - return base64_data - -def archive_dir(dir_name,output_filename,format="zip"): - shutil.make_archive(output_filename, format, dir_name) - return output_filename+".zip" - -def make_dir_if_not_exist(folder): - if not os.path.exists(folder): - os.makedirs(folder) - -def untar(fname, dirs): - """ - 解压tar.gz文件 - :param fname: 压缩文件名 - :param dirs: 解压后的存放路径 - :return: bool - """ - - try: - t = tarfile.open(fname) - t.extractall(path = dirs) - return True - except Exception as e: - print(e) - return False - -def get_timestamp(): - ts = pd.to_datetime(str(datetime.datetime.now())) - d = ts.strftime('%Y%m%d%H%M%S') - return d - -def get_name_from_arvix(url): - res = BeautifulSoup(requests.get(url).content, 'lxml').find("h1",attrs={"class":"title mathjax"}) - if res is None: - return '' - title = res.text[6:].replace(" ","-") - return title - -def download_source(pdf_lists=None,output_base=None,project_name=None,fetch_title=True, return_source=False): - base=output_base - project_name = project_name + get_timestamp() - base = os.path.join(base,project_name) - make_dir_if_not_exist(base) - - for pdf_link in tqdm(pdf_lists): - file_stamp = pdf_link.split("/")[-1] - if fetch_title: - title = get_name_from_arvix(pdf_link) - if len(title )== 0: - continue - else: - import numpy as np - title = file_stamp - source_link = "https://arxiv.org/e-print/"+file_stamp - inp = os.path.join(base,'input') - make_dir_if_not_exist(inp) - out = os.path.join(base,'output') - make_dir_if_not_exist(out) - if return_source: - print(source_link) - continue - response = requests.get(source_link) - filename = file_stamp+".tar.gz" - filepath = os.path.join(inp,filename) - open(filepath, "wb").write(response.content) - outpath = os.path.join(out,title) - untar(filepath,outpath) - archive_dir(out,os.path.join(base,project_name)) - -if __name__ == '__main__': - s = get_timestamp() - print(s) \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bros/convert_bros_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bros/convert_bros_to_pytorch.py deleted file mode 100644 index c0984f2c74b20cc61a02f616815d59b79d5a2afb..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bros/convert_bros_to_pytorch.py +++ /dev/null @@ -1,145 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert Bros checkpoints.""" - -import argparse - -import bros # original repo -import torch - -from transformers import BrosConfig, BrosModel, BrosProcessor -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - - -def get_configs(model_name): - bros_config = BrosConfig.from_pretrained(model_name) - return bros_config - - -def remove_ignore_keys_(state_dict): - ignore_keys = [ - "embeddings.bbox_sinusoid_emb.inv_freq", - ] - for k in ignore_keys: - state_dict.pop(k, None) - - -def rename_key(name): - if name == "embeddings.bbox_projection.weight": - name = "bbox_embeddings.bbox_projection.weight" - - if name == "embeddings.bbox_sinusoid_emb.x_pos_emb.inv_freq": - name = "bbox_embeddings.bbox_sinusoid_emb.x_pos_emb.inv_freq" - - if name == "embeddings.bbox_sinusoid_emb.y_pos_emb.inv_freq": - name = "bbox_embeddings.bbox_sinusoid_emb.y_pos_emb.inv_freq" - - return name - - -def convert_state_dict(orig_state_dict, model): - # rename keys - for key in orig_state_dict.copy().keys(): - val = orig_state_dict.pop(key) - orig_state_dict[rename_key(key)] = val - - # remove ignore keys - remove_ignore_keys_(orig_state_dict) - - return orig_state_dict - - -def convert_bros_checkpoint(model_name, pytorch_dump_folder_path=None, push_to_hub=False): - # load original model - original_model = bros.BrosModel.from_pretrained(model_name).eval() - - # load HuggingFace Model - bros_config = get_configs(model_name) - model = BrosModel.from_pretrained(model_name, config=bros_config) - model.eval() - - state_dict = original_model.state_dict() - new_state_dict = convert_state_dict(state_dict, model) - model.load_state_dict(new_state_dict) - - # verify results - - # original BROS model require 4 points (8 float values) for each bbox, prepare bbox with [batch_size, seq_len, 8] shape - bbox = torch.tensor( - [ - [ - [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], - [0.4396, 0.6720, 0.4659, 0.6720, 0.4659, 0.6850, 0.4396, 0.6850], - [0.4698, 0.6720, 0.4843, 0.6720, 0.4843, 0.6850, 0.4698, 0.6850], - [0.4698, 0.6720, 0.4843, 0.6720, 0.4843, 0.6850, 0.4698, 0.6850], - [0.2047, 0.6870, 0.2730, 0.6870, 0.2730, 0.7000, 0.2047, 0.7000], - [0.2047, 0.6870, 0.2730, 0.6870, 0.2730, 0.7000, 0.2047, 0.7000], - [1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000], - ] - ] - ) - - processor = BrosProcessor.from_pretrained(model_name) - - encoding = processor("His name is Rocco.", return_tensors="pt") - encoding["bbox"] = bbox - - original_hidden_states = original_model(**encoding).last_hidden_state - # pixel_values = processor(image, return_tensors="pt").pixel_values - - last_hidden_states = model(**encoding).last_hidden_state - - assert torch.allclose(original_hidden_states, last_hidden_states, atol=1e-4) - - if pytorch_dump_folder_path is not None: - print(f"Saving model and processor to {pytorch_dump_folder_path}") - model.save_pretrained(pytorch_dump_folder_path) - processor.save_pretrained(pytorch_dump_folder_path) - - if push_to_hub: - model.push_to_hub("jinho8345/" + model_name.split("/")[-1], commit_message="Update model") 
- processor.push_to_hub("jinho8345/" + model_name.split("/")[-1], commit_message="Update model") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - # Required parameters - parser.add_argument( - "--model_name", - default="jinho8345/bros-base-uncased", - required=False, - type=str, - help="Name of the original model you'd like to convert.", - ) - parser.add_argument( - "--pytorch_dump_folder_path", - default=None, - required=False, - type=str, - help="Path to the output PyTorch model directory.", - ) - parser.add_argument( - "--push_to_hub", - action="store_true", - help="Whether or not to push the converted model and processor to the 🤗 hub.", - ) - - args = parser.parse_args() - convert_bros_checkpoint(args.model_name, args.pytorch_dump_folder_path, args.push_to_hub) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/__init__.py deleted file mode 100644 index aae869bdff51041bda7632222eaa5065f97d36eb..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import TYPE_CHECKING - -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_sentencepiece_available - - -_import_structure = {} - - -try: - if not is_sentencepiece_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["tokenization_mluke"] = ["MLukeTokenizer"] - -if TYPE_CHECKING: - try: - if not is_sentencepiece_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .tokenization_mluke import MLukeTokenizer - - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/flask_api_full_song.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/flask_api_full_song.py deleted file mode 100644 index 9dbf66a17114c7f9679717e2938759ae4a371c34..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/flask_api_full_song.py +++ /dev/null @@ -1,55 +0,0 @@ -import io -import numpy as np -import soundfile -from flask import Flask, request, send_file - -from inference import infer_tool -from inference import slicer - -app = Flask(__name__) - - -@app.route("/wav2wav", methods=["POST"]) -def wav2wav(): - request_form = request.form - audio_path = request_form.get("audio_path", None) # wav文件地址 - tran = int(float(request_form.get("tran", 0))) # 音调 - spk = request_form.get("spk", 0) # 说话人(id或者name都可以,具体看你的config) - wav_format = request_form.get("wav_format", 'wav') # 范围文件格式 - infer_tool.format_wav(audio_path) - chunks = slicer.cut(audio_path, db_thresh=-40) - audio_data, audio_sr = slicer.chunks2audio(audio_path, chunks) - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - # padd - pad_len = int(audio_sr * 0.5) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - svc_model.clear_empty() - _audio = out_audio.cpu().numpy() - pad_len = int(svc_model.target_sample * 0.5) - _audio = _audio[pad_len:-pad_len] - - audio.extend(list(infer_tool.pad_array(_audio, length))) - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, audio, svc_model.target_sample, format=wav_format) - out_wav_path.seek(0) - return send_file(out_wav_path, download_name=f"temp.{wav_format}", as_attachment=True) - - -if __name__ == '__main__': - model_name = "logs/44k/G_60000.pth" # 模型地址 - config_name = "configs/config.json" # config地址 - svc_model = infer_tool.Svc(model_name, config_name) - app.run(port=1145, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/modules/F0Predictor/F0Predictor.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index 69d8a9bd28729e33d092a5af8e2ce544c1330c3b..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - ''' - pass - - def 
compute_f0_uv(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - ''' - pass \ No newline at end of file diff --git a/spaces/yukie/yukie-sovits3/hubert/hubert_model.py b/spaces/yukie/yukie-sovits3/hubert/hubert_model.py deleted file mode 100644 index 7fb642d89b07ca60792debab18e3454f52d8f357..0000000000000000000000000000000000000000 --- a/spaces/yukie/yukie-sovits3/hubert/hubert_model.py +++ /dev/null @@ -1,222 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = 
nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/zej97/AI-Research-Assistant/processing/__init__.py b/spaces/zej97/AI-Research-Assistant/processing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/deepl/ukcs/test.uk-cs.cs b/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/deepl/ukcs/test.uk-cs.cs deleted file mode 100644 index 26a8463d7bbf7906b1b4ed9cd09bd393d20da66b..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/deepl/ukcs/test.uk-cs.cs +++ /dev/null @@ -1,2812 +0,0 @@ -Vzhledem k tomu, že vycházím ze dnů a časů, kdy musím uklízet apartmány po hostech, mohu vám poskytnout odhad na měsíc duben, protože již mám hrubý plán úklidu apartmánů. -Co se týče května, budu to moci říci přibližně koncem dubna nebo začátkem května. -Přijde vás navštívit -Nevíte, jak nahlásit online, že jsem si našel/a práci a nemám již nárok na dávky? -Nemohu přijít osobně(((((( děkuji, tento formulář je třeba vytisknout a vyplnit. -V Kyjevě jsem měl dvě vakcíny Pfizer -Mám certifikát, ale jeho platnost právě vypršela, a myslím, že potřebuji posilovací dávku. -Pokud to není příliš obtížné, mohu se zeptat vašeho lékaře na podrobnosti? -Tenkrát jsem si v Lidlu koupila dvě balení a máma dvě. -Možná však budete potřebovat podrobnější znalost českého jazyka. -Záleží mi na vaší spokojenosti s mou prací. -V tomto případě znám dívku, která je Ukrajinka, ale studuje v České republice, obor cestovní ruch. -Dobře, pojďme si povídat o něčem jiném, protože já stejně nespím, pořád usínám a zdá se mi o tom bombardování... -Myslím, že to zítra bez tebe nezvládnu. -Stál jsem ve frontě -Omlouvám se, nerozuměl jsem otázce. -Potřebuji vyměnit ložní prádlo, 2 sady. -Někdy se mi zdá, že kvůli překladateli jsou mé myšlenky v češtině jiné než v ukrajinštině. -Nemyslel sis, že jsem dítě -U nás ve zprávách: Syrští žoldnéři považují účast ve válce na straně Ruska za možnost dezertovat a nelegálně migrovat do EU - GUR -Bál bych se chodit po stejných ulicích jako syrská armáda. -Jednou jsem studovala s dívkou ze Sýrie, která s rodinou utekla před válkou a musela studovat v ukrajinštině. -V té době jsem přemýšlel o tom, že v 21. století byla válka. -Myslel jsem, že válku nikdy neuvidím na vlastní oči. -Chápu vás a nestěžuji si, ale v mé situaci se zatím budu dívat sem. -Říkám vám, že jsou tu jen velvyslanectví, -Toto rčení je na místě -Je zřejmé, že elity ve Washingtonu i Berlíně se potí při pomyšlení, že stíhačky NATO sestřelují ruské bombardéry na obloze nad Ukrajinou. -Zelenskyj však nemluví ani tak k nim, jako spíše k jejich voličům a společnosti obecně. -Svými projevy v parlamentech a před tisícovými davy na náměstích největších evropských měst se ukrajinskému vůdci daří něco, co se nepodařilo žádnému z jeho předchůdců. -Zelenskyj nepřesvědčuje evropské politiky, aby udělali vše pro Ukrajinu, ale evropské občany, kteří tyto politiky volí. -A tyto projevy, toto rafinované spojení síly, dojetí a někdy i zoufalství, postupně dosahují svého cíle. -Zatímco Biden je kategoricky proti uzavření oblohy, 70 % jeho spoluobčanů již tuto myšlenku podporuje. 
-Přestože Paříž a Brusel jsou proti urychlení přistoupení Ukrajiny, proces již probíhá. -Celé ty dny jsem jen chodil se třídou a učitelem. -Ale v noci mi přišla zpráva od jedné paní, které se líbila vaše práce a chtěla by mít něco od vás doma. -Čekám na nové zprávy od ní a určitě vám dám vědět. -Je to tak, všichni tři ještě chodí do školy, protože Halia a Svjatoslav studují dálkově a diplomy dostanou na podzim, kdy budou muset nastoupit na vysoké školy. -obvykle vyžadujeme cestovní pas jednoho z dospělých. -Jestli jsem řediteli dobře rozuměla, řekl mi, že když budu pracovat na zahradě, můžu přijít v některé dny do práce na tolik hodin, kolik dostanu. -Pokud tomu tak bude, budu moci chodit do práce například několik dní v týdnu na několik hodin. -Uvidím, kolik hodin a ve které dny to zvládnu, abych mohla chodit do bytů uklízet v módě, protože za úklid se platí víc, takže se té práce nechci vzdát. -Nechci, aby mezi mnou a vámi došlo k nedorozumění. -Bohužel nemám program, ale pokusím se na to přijít. -Můj druhý bratr, který zůstal na Ukrajině, také pomáhá armádě. -A v tomto malém bytě není Wi-Fi, v podstatě odpovídám na všechny vaše otázky a teď nevím, co mám dělat? A děti mají od pondělí online výuku, ale není tu internet. -Jak můžeš být ospalý a já tě držím vzhůru? -Dobře, přijde tam a odvede je. -Dnes jsem tě k něčemu přemluvil, asi je čas jít spát, abych zítra pracoval. -píše, že se můj stav změní do 24 hodin. Ale proč? Co je to za komedii? -Dobrý den, pokud budete zítra potřebovat další pomoc, můžeme přijít. -Hledáme státní podporu na běžný úklid domů zde v obci. -Nabídka činí 150 Kč za hodinu. Další spolupráce s nimi. Termín na domluvě s paní. -Na sociálních sítích se šířily falešné informace o povinnosti studentů nastoupit vojenskou službu, o přípravách na evakuaci vysokých škol a o uvolňování míst na kolejích. -Je mi velmi líto, že se mnou mluvíte v den svého volna. Doufám, že vás příliš neobtěžuji. -A Ivan by zjistil, zda je třeba něco opravit nebo vylepšit. -Sedím tu se sluchátky na uších, poslouchám hudbu a myslím na tebe a na děti. -Opravdu chci být s tebou. -Abyste byli šťastní 😢. -Nevím, miláčku, jak se máš? -Vím, že je to pro tebe těžké. -Ale já to beru, jako bychom byli spolu. Že jsme pár. -Řekl bych, že po válce bude hodně práce. -Takže se nebojím, že bych ji tam nenašel. -Umím spoustu věcí. -Takže se nebojím, že bych se do žádné z nich nedostal. -Jen nevím, co se stane s bydlením.... -Jsme to my ráno nebo odpoledne společně? -Když jsem Katii požádal, aby zavolala zpět na pohraniční stráž, řekla, že nemůže. -Jo, ale myslel jsem, že jsi nevyplnila -Paní Libuše, zaplatila jste za školku? -Protože tomu nerozumím. -Děláte pro nás tolik dobrého..... -Jsme vám hluboce zavázáni... děkujeme! -Ano, chápu, že je to velmi dobré. -Pošlu vám fotku vizitky paní, kterou učím angličtinu. -Můžeme být spolu? -Než se k vám dostaneme, bude to nějakou dobu trvat. -Proč to prozatím nezkusit takto -USA zakázaly investice v Rusku a uvalily sankce na Putinovy dcery -Spojené státy spolu se zeměmi G7 a EU uvalují nové sankce na dcery ruského prezidenta Putina, největší ruské banky Alfa Bank a Sberbank a zakazují veškeré nové investice v Rusku. -Uvádí se to v prohlášení na internetových stránkách Bílého domu. -Sberbank je největší státní finanční institucí v Rusku a Alfa-Bank je největší soukromou bankou. -Sankce zmrazí veškerý jejich majetek spojený s finančním systémem USA a zakáže občanům USA s nimi obchodovat. -Kromě toho USA uvalí úplný zákaz nových investic v Rusku. 
-Americký prezident Joe Biden proto podepíše nový exekutivní příkaz, který zahrnuje zákaz nových investic do Ruska pro občany USA bez ohledu na jejich umístění. -"Tento krok vychází z rozhodnutí více než 600 nadnárodních společností stáhnout se z Ruska. -Ze soukromého sektoru se stahují výrobci, energetické společnosti, velcí maloobchodníci, finanční instituce a další poskytovatelé služeb, jako jsou právní a poradenské firmy," uvedl Bílý dům ve svém prohlášení. -Třetím krokem nových amerických sankcí bylo omezení velkých ruských státních podniků v kritickém sektoru. -Občanům USA se tak zakáže provádět transakce s těmito organizacemi a zmrazí se veškerý jejich majetek v jurisdikci USA. -Ministerstvo financí zveřejní podrobný seznam následující den, tedy ve čtvrtek. -A poslední částí balíčku je úplné zmrazení majetku Putinových dospělých dětí, manželky a dcery ministra zahraničí Lavrova a členů ruské bezpečnostní rady, včetně bývalého prezidenta a premiéra Dmitrije Medveděva a premiéra Michaila Mišustina. -Sankce je odříznou od amerického finančního systému a zmrazí veškerý jejich majetek ve Spojených státech. -Samozřejmě je normální, že mě chceš vidět. Jen jsem si dělal legraci. -1. Chybí vám důležité informace? Pokud ano, o jaký typ informací se jedná? -Mohu se zeptat své ženy, co mohu koupit za kašel? -Je mi líto, ale nemám se koho zeptat. -Jdu a nechám děti samotné, čekám tam 3-4 hodiny a pak jdu domů, protože už je tam nemůžu nechat. -A vezměte si ji s sebou v 6-7 hodin ráno. -ArmyINFORM je zpravodajská agentura Ministerstva obrany Ukrajiny, která byla založena v prosinci 2018. -Čečenec pochválil ukrajinské dělostřelectvo, ale znovu vyvrátil mýty o "hrozivé" Kadyrovově armádě. -To je uvedeno v odposlechnutém telefonickém rozhovoru. -V úterý 22. března sestřelili ukrajinští obránci nepřátelské letadlo, které nedávno shazovalo bomby na Mariupol. -Shrnutí: Během pouličních bojů v Mariupolu zničil pluk speciálních sil Azov čtyři ruské tanky, několik jednotek nepřátelské obrněné techniky a pěchotu okupantů. -Dostáváme operativní informace o sexuálních zločinech páchaných ruskou armádou na okupovaných územích a v tzv. hot spotech. -Prokuratura v Kyjevské oblasti identifikovala ruského vojáka, který zabil neozbrojeného muže a opakovaně znásilnil jeho ženu. -Ruskému vojákovi bylo doručeno oznámení o podezření z porušení válečných zákonů a zvyklostí. Byl zařazen na seznam hledaných osob a u soudu byl podán návrh na jeho vzetí do vazby. -Existuje v práci nějaký dress code, nebo ne, nebo si můžu vzít, co mám? -Vezmu si povlečení pro pátého hosta, ale máte v bytě polštář a deku navíc? -Krmivo pro kočky doručené do Kryvyi Rih -Nedávno jsme obdrželi od -Humanitární pomoc v podobě vlhkého krmiva pro kočky byla doručena na hranice. -Náklad byl přepravován přes Ukrajinu. -Včera večer dorazilo krmivo do Kryvého Rihu a dnes je doručeno do... -přístřešky, minipřístřešky a další -do městských zařízení pro zvířata. -Jsme velmi vděční našim kolegům v zahraničí, dobrovolníkům a lidem, kteří pomáhají v tak nebezpečné době! -Vše bude Ukrajina -Vymažu svou fotku v plavkách. -Jinak je velmi spokojená s jakýmkoli dárkem. -Děkujeme. Doma jsme se na televizi nedívali, natož na 1+1. Není třeba nás ladit :) -Televizi tu zapínám jen proto, abych poslouchal češtinu. -K této zprávě přikládám seznam volných pracovních míst nabízených zaměstnavateli. -Pokud vás nabídka zaujala, kontaktujte prosím přímo kontaktní osobu uvedenou v inzerátu. -Nebo je to dobrý zástupce? 
-Pokud teď nemáte rodinu, znamená to, že jste ještě nepřišli, což znamená, že až přijdete, bude desetkrát větší a bližší, než by měla být. -Včera jsem na zastávce autobusu 402 potkal Ukrajinku Viktorii. -Tato dívka je dítě, ale je tu sama... její oči jsou plné slz a žalu. -Nemohl jsem stát stranou. -Nyní jsme se dohodli, že budeme přátelé. -A včera večer jsem za tuto dívku děkovala Bohu, protože je to první člověk z Ukrajiny, u kterého mé srdce pocítilo vzájemnost. -Nevím, co mám dělat, protože potřebuji brzy bydlet, ale nemám kde. -Rádi bychom vás informovali o změně bydliště a zároveň požádali o prodloužení platnosti karty. -Moje matka tam stojí od čtyř hodin ráno -Nevím, co ti mám říct, ale líbíš se mi. -Obraz je velmi krásný, jste velmi dobře udělal pro utrácení peněz na charitu! -Nábytek v pokoji můžete libovolně přemisťovat (pokud vám s tím pomůžeme), ale skříň se zrcadlem přemisťovat nelze. -- Budeme mít společnou koupelnu a toaletu. -Dobré odpoledne. Jsem z Ukrajiny, je mi 45 let, přijela jsem se svým osmiletým synem, 25letou dcerou a 1,5letou vnučkou. -Nyní jsem ve městě Zlín. Moje dcera a vnučka budou žít odděleně. -Potřebuji ubytování na delší dobu. Jsem slušný člověk. -Chci si najít práci, abych mohl později platit nájem. -Nyní můžeme zaplatit účet za služby. -Bohužel neumíme česky, teprve se to učíme. -Takže, prosím, napište zpět, moje telefonní číslo je +420123456789 -Nemůžu to říct s jistotou, protože město ještě moc dobře neznám... asi polovina z 12ti. -Nemáme dostatek dek, peřin, talířů. Děti nemají pyžama a nemají se do čeho převléknout. -Ano, věřím v Boha a svěřuji mu svůj život. -Dobrý den, mluvím k vám v duchu - musím se naučit česky, ale nemůžu přestat sledovat zprávy. -Samozřejmě mě zajímá i práce. -Ale pokud nejsem příliš fyzicky zdatný, protože nejsem tak zdatný. -Mohl bych si zaplatit bydlení za rozumné ceny, kdyby mi někdo poskytl pokoj. -Hodně komunikujete, možná někoho znáte -Můžeme změnit mobilní tarif, pokud máte čas? Nebo už je pozdě? -Dobrý den, věci, které jste přinesli, jsme si vzali, ale jsou tam věci, které jsou pro nás příliš malé, můžeme vám je přinést za 10 minut? -Do oka se mi dostala čpavková barva. Bojím se, že se popálím chemikálií. Může mě vyšetřit lékař? -Ano, zaregistrovali jsme se, ale je těžké žít na ubytovně a pro syna to není příliš pohodlné. -Jak se mohu připojit k internetu -Nemyslím si, že mě chce okrást :) -Dobře si uvědomuji cenu nekvalifikované práce a chápu, že nebudu moci mít stejnou práci jako doma v Kyjevě a že nebudu moci vydělávat peníze, které jsem vydělával doma. -Ale nemůžu jen tak sedět a nic nedělat, protože pak mě napadají různé špatné myšlenky, takže se musím zaměstnat nějakou fyzickou aktivitou. -Hackerská skupina Anonymous, která již dříve vyhlásila Rusku kybernetickou válku, se nabourala do databáze Roskomnadzoru a zveřejnila 360 000 souborů. -Uvádí se to v příspěvku Anonymous na Twitteru. -Skupina například informovala o úspěšném nabourání a prozrazení databáze Roskomnadzoru. -Anonymous se úspěšně nabourali do databáze Roskomnadzoru, ruské federální výkonné agentury odpovědné za monitorování, kontrolu a cenzuru ruských médií, a zveřejnili více než 360 000 souborů, uvádí se v prohlášení. -Celkový objem hacknuté databáze Roskomnadzoru je 820 gigabajtů. -Jak již dříve informovaly Ukrajinské noviny, 25. února se hackeři z Anonymous, kteří vyhlásili Rusku kybernetickou válku, nabourali do webových stránek ruského ministerstva obrany a na internet unikly údaje o zaměstnancích. 
-Mezitím Federální služba pro dohled nad komunikacemi, informačními technologiemi a masmédii (Roskomnadzor) požádala americkou společnost Google, aby omezila přístup uživatelů k údajně nepřesným informacím o ztrátách ruských ozbrojených sil na Ukrajině. -Vařím chutně, moc se neptám a dobře radím. -Děkuji, s touto aplikací pracuji, ve středu jsme o ní mluvili na kurzu češtiny. -Skvělé, tak zítra napíšu a domluvíme se na přesném čase. -Mohu vás požádat, abyste si je vzal s sebou na cestu domů? -Odpoledne jsem měla jednu přednášku z angličtiny a pak jsem šla pracovat do dětského pokoje. -Vysvětlil jsem Margaritě, jak vyplnit všechny dokumenty. -Doufám, že vše pochopí a bude se jí dařit. -Setkala jsem se také s Lenou a mluvila s ní o plavání ukrajinských dětí. -Přinesla oznámení z bazénu. -Je to jízda zdarma. -Všude kolem vás je příroda a veselí sousedé.) -Pokusím se to udělat sám, ale pokud budu mít nějaké problémy, napíšu vám, jestli to nevadí? -Mohla bych vás požádat o mýdlo na praní? -Protože je zde pouze antibakteriální gel. -Pošlete mi prosím fotografii příkladů, které jste vyřešili. -Může říct, že špatně jí a často ho bolí břicho? Ráno nemůže jíst - zvrací. -Došlo k nějakému zlepšení v překonávání koronaviru? -Do tří pracovních dnů se musíte zaregistrovat na úřadu práce. -Řekněte mi, prosím, malé děti - jaký věk máte na mysli? -Mé dceři je 8 let, je to přijatelný věk? -I teď jsem na tebe myslela a ty jsi mi napsal. -Večer budeme doma -Našla jsem si místo k bydlení, ale není tam žádný nábytek a nádobí, mohli byste mi pomoci najít nábytek z druhé ruky, děkuji. -Nejspíš je čas jít spát, abys mohl zítra pracovat. -Jsem v pořádku, šli jsme na procházku, jsem velmi unavená. -Bylo by dobré ve čtvrtek, pokud je to možné -Alespoň mít práci a obchod -Spojené státy tvrdí, že Čína je připravena poskytnout Rusku vojenskou pomoc. -Uvádí se to ve zprávě britské redakce BBC, píše server Ukrainian News. -USA varují Čínu před pomocí Rusku... -Čína bude čelit následkům, pokud pomůže Rusku vyhnout se sankcím za napadení Ukrajiny... -Američtí představitelé sdělili mnoha zpravodajským agenturám, že Čína vyjádřila připravenost poskytnout Rusku vojenskou pomoc. -Čínské ministerstvo zahraničí obvinilo USA z šíření dezinformací," uvádí se v prohlášení. -Zdůrazňuje se, že Rusko popírá, že by Peking žádalo o vojenskou pomoc. -Jak již dříve informoval server Ukrainian News, Čínská lidová republika a Ruská federace vydaly 4. února společné prohlášení, v němž se postavily proti rozšiřování NATO a vyzvaly Severoatlantickou alianci, aby opustila ideologické přístupy studené války. -Mluvčí čínského ministerstva zahraničí Wang Wenbin 1. března prohlásil, že Čína vítá jednání mezi Ruskem a Ukrajinou. -Agentura Reuters 10. března oznámila, že Čína odmítla dodávat ruským leteckým společnostem letecké díly. -Tím nechci říct, že je to špatně! -Tam prostě nemůžete zůstat, je to opravdu nebezpečné. -Řekni mi, co ti brání odejít? -Pokud to není vaše vlast? -Protože zůstane vaší vlastí! -Všichni jsou velmi unavení, nechodím do práce, protože není práce😓😭. -A musíš nějak žít... -Napište adresu v češtině, podíváme se na mapu, abychom zjistili, kde je to od nás. -Hledáme hospodyni na běžný úklid domu zde ve vesnici. -Nabídka činí 150 Kč za hodinu. -Další spolupráce podle spokojenosti. -Na termínu se s dámou domluvíte sami. -Zeptám se jí. Je mi trochu trapné se ptát, už tak nám dává tolik koláčů. -Ukázalo se, že jsem si objednal náhradní kartu. 
-K této problematice byl dokonce přijat samostatný zákon -Zatím nejsme připraveni platit takové peníze, počkáme na pomoc a pak něco naplánujeme. -Moc vám děkuji, moc jste mi pomohli. -Nejsem chamtivý na všechno -Žádný z nich nemám -Nic nevysvětluj, prosím, předstírej, že tu nejsem. -Dva body mi však k vyplnění online nestačily: -Ale nebude to snadné, budeš ho muset najít... -Nečekané seznámení na začátku válečných akcí s polskými bratry se změnilo v dobré přátelství) -Je velmi příjemné najít podobně smýšlející lidi tam, kde jste je nečekali a nehledali.) -Jmenuji se Oksana. jsem divadelní kritička a vysokoškolská pedagožka. pracuji také jako expertka Ukrajinské kulturní nadace a odbornice na divadelní festivaly. -Můj manžel je sportovec. věnuje se cestovnímu ruchu. mám syna. -Oleg je nemocný, má rýmu a kašel. Je lepší nebrat Jana s sebou, aby se nenakazil. -Bohužel musím vaši nabídku odmítnout, protože jsem si mezitím už našel práci. -Je mi líto, ale nemohu to odmítnout. -Na místo, kde jsme byli ubytováni, jsme přijeli z centra v Pardubicích, podmínky jsou dobré. -Ale je tu velké ale tady žijí muži, kteří pijí alkohol a kouří v interiéru, hlasitá hudba je nejméně, že "trapné", zápach cigaret, opilých mužů a nás s dětmi ... je to děsivé jít do postele, abych byl upřímný. -Prosím o pomoc s bydlením, za peníze prosím. -Ale tam, kde je bezpečno a není zakouřeno. -V místnosti je tak silný zápach cigaret, jako by se tu kouřilo. -Velmi, velmi vás prosím, jeden z nich sotva stojí na nohou, něco křičí, něco se mu nelíbí. Je to velmi děsivé. -Jen se modlím, jestli můžete допоможіть🙏🏼 -VZOROVÝ ŽIVOTOPIS SEKRETÁŘKY -Pro plnohodnotný a dobře koordinovaný provoz každého podniku je zapotřebí spolehlivý personál, protože na něm závisí celý pracovní proces. -Každý zaměstnanec zastává ve společnosti svou vlastní pozici a vykonává funkce v souladu se svou pracovní náplní a platnými právními předpisy. -Sekretářka je jednou z nejdůležitějších a nejzodpovědnějších pozic ve firmě. -Sekretářka hraje důležitou roli v každé firmě a velké společnosti hledají zaměstnance s praxí, proto věnujte tomuto bodu ve svém životopise zvláštní pozornost. -Uveďte své odborné dovednosti a funkční povinnosti, které jste v minulosti vykonávali, například obchodní korespondenci, telefonickou komunikaci s klienty, zaměstnanci a obchodními partnery, papírování a poradenskou činnost. -Znalost PC a kancelářského softwaru je pro tuto pozici nutností, stejně jako schopnost používat kancelářské vybavení: tiskárnu, fax, kopírku, skener atd. -Nezapomeňte uvést svou úroveň znalosti cizího jazyka. -Sekretářka často působí jako asistentka vedoucího pracovníka, řídí a kontroluje jeho pracovní den a je zodpovědná za organizaci pracovního procesu, naznačte, že máte manažerské schopnosti. -Sekretářka je tváří společnosti, takže se připravte na to, že může být vyžadována vaše fotografie. -To je také důležitý bod při žádosti o tuto pozici. -Ano, ale dělám to s obtížemi. -Doma jsme už vařili. Ale nemůžeme najít klíče od půdy. -Pokud mi můžete půjčit peníze na léky, prosím! -Nevolal, kluci to předali dál. Vím jistě, že Bůh je s ním. -Ahoj, omlouvám se, že jsem dlouho neodepsal, vařil jsem večeři a trochu spal. -Jestli ti to nevadí, mohl bych se dnes zastavit). -nebo se setkáme po vašem příjezdu) -Dnes mě překvapilo, že mnoho mých přátel a známých začalo Marinu Ovsjannikovou, tu s plakátem, považovat téměř za národní hrdinku Ukrajiny 😳. -Omlouváme se za nedorozumění -Korán říká: Korán říká: "i list ze stromu padá s Jeho vědomím". 
-Jen já přijdu o něco později - kolem osmé. Je to v pořádku? -Jak uklidnit domácího mazlíčka? -Zvířata jsou velmi citlivá na nebezpečí, a proto se mohou ve válečné době chovat nervózně a neklidně. -Vystresované zvíře může utéct nebo odmítat jíst či chodit na záchod. -To vede ke zdravotním problémům a dokonce k úmrtí. -Proto byste měli být vy i zvíře v klidném stavu. -Připravili jsme pro vás doporučení, jak zvíře uklidnit: -Vy sami byste měli být co nejklidnější. -Zvíře cítí váš stav, takže může převzít vaše pocity. -Mluvte na zvíře klidným tónem hlasu a dotýkejte se ho. -Vezměte si s sebou na cestu nebo do protiatomového krytu oblíbené hračky a krmivo svého domácího mazlíčka. -Pokud má rád nějaké svačinky nebo jídlo, které mu dáváte jen zřídka, teď je ten správný čas. -Vytvořte pro zvíře bezpečné místo. -Pokud jste na cestách a máte malé zvíře, měla by být přepravka pevně uzavřená a vybavená všemi potřebnými věcmi. -Doporučuje se nosič s pevnými bočnicemi. -Uvnitř by měla být plena, nejlépe přilepená ke dnu přepravky oboustrannou páskou, a na ni položte oblíbené prostěradlo nebo ručník vašeho mazlíčka, aby byla měkčí a pohodlnější. -Dbejte na to, aby prostěradlo nezabíralo příliš mnoho místa a nebylo příliš teplé, aby se zvíře nepřehřálo. -Tím si vytvoříte pohodlí a váš čtyřnohý přítel bude méně nervózní. -Ujistěte se, že zvíře pije trochu vody. -Občas mu nabídněte misku s vodou. -Nenechávejte však misku s vodou v přepravce, protože by ji zvíře mohlo vylít. -Zvíře se uklidní, když se nají. -Chcete-li prodloužit účinek, můžete oblíbenou paštiku rozetřít na látku (ručník, rukáv apod.) a nechat zvíře, aby ji olízalo. -To pomůže vašemu mazlíčkovi soustředit se na pamlsky a abstrahovat od okolních stresorů. -Pokud je zvíře velmi nervózní, může být v krajním případě uspáno. -Nejlepší je Gabapentin 100 mg, který je dostupný v humánní lékárně, ale prodává se na lékařský předpis. -Dávka je +-20 mg/kg (u některých lidí působí adrenalin silněji, takže dávka může být zvýšena na 30 mg/kg). -Účinek: zvíře se může kolébat, může tvrdě spát - poločas rozpadu je 8 hodin. -Vše pomine. -Mezi veterinární léčiva patří tablety Zilkenet (používejte podle návodu) a gel Sile. -Upozornění: nepodávat zvířatům s onemocněním srdce a zvířatům mladším 5 měsíců. -Léky nejsou první volbou, ale pokud není nic jiného - Corvalol, Barboval, Corvalcaps Extra - 1-2 mg fenobarbitalu na 1 kg tělesné hmotnosti (dávka by měla být přepočítána pro každý lék) 2krát denně. -Existují léky s názvem Kalmvet a Stop-Stress. -Jsou bylinné, takže mají kumulativní účinek (začnou působit 3-4. den stálého užívání). -Tyto léky jsou pro zdraví nejbezpečnější. Pokud však dochází k velmi silným výbuchům nebo je zvíře velmi stresované, je lepší podat výše uvedené léky. -Výše uvedená sedativa doporučuje váš veterinární lékař. -Pokud se však bez nich obejdete, je lepší nechat zvíře ve střízlivém stavu. -A nezapomeňte: válka určitě skončí vítězstvím, ale do té doby musíte vydržet a co nejvíce chránit sebe i svého mazlíčka. -Mohu vám vrátit peníze za lékaře? -Ahoj. Ano, budu na tebe čekat v šest hodin. -Pokusím se to stihnout. Protože musím jít na radnici podepsat dokumenty. -Máš mě. -Víš, že tě miluji. -Udělám pro vás všechno. -Jak to mám své lásce dokázat? -Chci, abychom měli šanci být spolu. -Aby vám vše dokázal. -Možná jste zažili mnoho zklamání. -Dovolte mi, abych vám udělal radost -Pokusím se jí vše vysvětlit) -Pokud chcete, můžete se v Kutné Hoře setkat s rodinou. -Jsem si jistý, že s tím nebudou žádné problémy. 
-Na Ukrajině slaví tento týden mnoho lidí stejným způsobem. -Záleží na náboženství. -Ať už je člověk řeckokatolík nebo katolík. -V žádosti se uvádí, že se jedná o humanitární pomoc. -Žádost je třeba vyplnit na pobočce Úřadu práce ČR. -A zobrazte tento čárový kód. -To znamená, že jsem si podal žádost online, ale musíte jít na úřad práce, abyste o ni požádali. -V pondělí tam mám v plánu jít. -Ano, ale jak mám kontaktovat majitele, nejsou tam žádné kontakty, potřebuji místo k pobytu na dlouhou dobu, pracujeme a můžeme platit, pomozte, pokud můžete. -Na konci každého popisu je uveden kontakt na majitele. -Pomocí tohoto online překladače z ukrajinštiny do ukrajinštiny a naopak můžete komunikovat. -Cokoli, jsme teď čtyři na jedné posteli v pokoji, kde se nedá ani obejít, děkuji pěkně. -Klikli jste na odkazy, které jsem vám poslal výše? -U každého z nich máte zdarma k dispozici nabídku ubytování, popis, kontakt na majitele a fotografii. -Nemohu najít kontakt, pouze email, bohužel nemohu nic najít, jaké ubytování je blíže Českému Krumlovu, pracujeme v Českém Krumlově a nyní žijeme v Praze. -Pak vám nemohu pomoci. V Brně není šance. Nejbližší předposlední ubytování v Perečíně -Děkujeme za pomoc, našli jsme ubytování, ale potřebujeme postel a pohovku, možná nám můžete říct, kde je můžeme levně koupit, protože náš rozpočet je slabý, děkuji moc. -Dal jí ho strýc z domova -Ve středu mám narozeniny -Musel jsem ti říct, co se rozbilo. -Poslechnu si to zítra, internet mi nefunguje. Dobrou noc. -Prezident MVČK slibuje zintenzivnit organizaci humanitárních koridorů -Evropská kosmická agentura odmítá spolupracovat s Roscosmosem na vývoji mise ExoMars -Novináři jsou žádáni, aby nezveřejňovali informace o armádě a jejich umístění. -Film Matka apoštolů získal šest ocenění na třech mezinárodních festivalech. -EU vyzývá Rusko, aby okamžitě zastavilo agresi proti Ukrajině -V Chersonu dva účastníci schůzky "záchranného výboru" tvrdí, že byli donuceni k tomu. -Zelensky obdržel polskou cenu Jana Karského v nepřítomnosti -Ostřelování města Novi Petrivci v Kyjevské oblasti: dvouleté dítě zabito, další zraněni -Trenér ženského basketbalu se připojil k obraně proti teroristům -Ukrzaliznycja vytvoří strategické zásoby výrobků na celé Ukrajině - Shmyhal -Erdogan navrhuje dvě města pro setkání Zelenského a Putina -Asi 30 000 lidí mohlo opustit Mariupol vlastní dopravou. -Výzkumné jaderné zařízení v Charkově je odpojeno od napětí -Ostřelování v Rubižném: Rusové přes noc zabili čtyři a zranili deset civilistů -Reznikov vyzývá svět, aby prověřil "oprávněné zástupce", kteří žádají o zbraně pro Ukrajinu -Gutzeit vysvětlil, proč je pro Ukrajince důležité účastnit se mezinárodních soutěží. -Ministři G7 vydali společné prohlášení k Ukrajině -Galuščenko ujišťuje, že Ukrajina má dostatek energetických zdrojů -Bezpečnostní dohoda a aspirace NATO si neodporují - Klimkin -Národní banka znovu připomíná, že dopisy na pomoc ozbrojeným silám neposílá. -USA odsuzují ruské únosy ukrajinských úředníků a aktivistů -Nepřátelský granát zasáhl kavárnu v Pokrovsku, jsou zranění -Azovský pluk zničí čtyři tanky, dva obrněné transportéry a rotu nepřátelské pěchoty za noc -Ukrajina již obdržela od EU žádost o nákup ukrajinské elektřiny. 
-Evropská asociace operních festivalů zahajuje projekt Opera pro Ukrajinu -Koncert We Are One v Bukurešti vynesl 900 000 eur pro ukrajinské uprchlíky -Jermak vyzývá přední investiční společnosti, aby se podílely na obnově Ukrajiny po válce -V Chersonské oblasti jsou zemědělci nuceni podepsat "dohodu o spolupráci" pod hrozbou použití zbraně -Zápasy ženského národního týmu Ukrajiny ve výběru pro mistrovství světa 2023 odloženy na červen -Vyšetřovatel Bellingcatu informuje o zadržení zástupce ředitele Rosgvardie -Ruská vojska použila téměř všechny rakety Kalibr a systémy Iskander -Vchod do vytěžené vesnice Pravdyne v Chersonské oblasti -Ruské jednotky ostřelují Kyjevskou oblast ze zbraní Grad a Smerč - jeden mrtvý a jeden zraněný -Stoltenberg: NATO chápe frustraci Ukrajiny a posiluje vojenskou pomoc -Galuščenko: Nejlepší, co mohou Rusové udělat, je vystoupit ze ZNPP -Ukrajinci v Polsku mají zaručenou bezplatnou zdravotní péči - Ljaško -Ministerstvo zdravotnictví vyzývá dobrovolníky a filantropy, aby se spojili a pomohli ukrajinským nemocnicím. -Mezi ruskými útočníky v Chersonské oblasti jsou i policisté z Krymu a Krasnodarského kraje -"Na shledanou": jak se Ukrajinci loučí na nádražích -Vakarčuk přijel do Kryvého Rogu podpořit vojáky -Druhý britský ministr informuje o podvodníkovi, který volá jménem "ukrajinského premiéra" -V Litvě se převrátil autobus s ukrajinskými uprchlíky, 10 lidí bylo zraněno - média -Zpěvačka Zemfira vydala videoklip "Don't Shoot" se záběry ničení ukrajinských měst -Ruští útočníci poškodili přes 400 vzdělávacích institucí, 64 jich zničili - Shkarlet -Vereščuk na koridoru z Mariupolu: Více než 100 000 obyvatel již mohlo odejít -Služby PayPal jsou dostupné Ukrajincům - Fedorov -Zelenskyj jednal s Macronem o podpoře Ukrajiny v oblasti obrany -Biden označil Putina za "krvavého diktátora" a "zločince" -Rada bezpečnosti OSN: Z Ukrajiny uprchlo více než 3,1 milionu lidí -Řada ukrajinských médií byla dnes hacknuta z Ruska - SBU -Napište mu, pokud budete něco potřebovat. -Chudák Oleksandra přebíhá z jednoho rohu do druhého, je toho tu tolik, že jí oči přechází a chce být všude najednou. -Pro jídelnu je vyžadován kód -Slíbili jsme ti, že až půjdeš do školy, přineseme ti překvapení. -Evo, prosím, odejděte s dětmi co nejdříve, já vám pomůžu - my vám pomůžeme, ale nezůstávejte ve válečné zóně. -V Česku dostaneš speciální vízum na rok, děti budou mít školu, školku a máš nárok na peníze od státu do začátku, je to asi 200 eur jako první pomoc, dej si nejnutnější věci do auta, čím blíž budeš k západní hranici, tím budeš v bezpečí, na Slovensku řekni, že jedeš do Česka, máš ve mně kamaráda. -Až budeš v Čechách, můžu na tebe někde počkat a on tě odveze ke svému příteli a jeho ženě. -Prosím, nezůstávejte tam, kde jste... -Je těžké opustit domov, ale život je to nejzajímavější... -Až se situace uklidní, můžeš se vrátit nebo zůstat tady... -Budu šťastný, až vás tu uvidím, potřesu vám rukou a řeknu: "Vítejte". -Nebudete moci objasnit -Jsem technolog zásobování vodou, takže chuť vody cítím hned. -Ano... moc děkuji. Ani jsem nedoufala, že takového člověka potkám, hodně zdraví a štěstí!!!!. -Pokud jsem tento formulář nevyplnil a dosud jsem neobdržel žádnou platbu, sdělte mi to, prosím. -Ano, velmi se nám líbilo a chtěli jsme něco bližšího, takže je to nejlepší možnost. -Kdybych to věděl, šel bych s tebou. -Poslechl svou matku a nešel s námi. -Můžete mi prosím sdělit poštovní směrovací číslo? -Můžete mi říct, jak se dostanu do centra města? 
-Mluvil jsem s Volodymyrem o bydlení a požádal ho o radu, co dělat, zda jít na stanici metra Muzeum a požádat o pomoc, a on mi poradil jít do Muzejní ulice, ale podle vašeho názoru, co by bylo lepší. -Má mozkovou obrnu v poloze na zádech, zlomenou nohu v sádře, invalidní vozík v poloze na zádech a všechno jídlo má rozemleté v mixéru. -Musím vzít Karinku s sebou -Jak najít tuto paní na sociálních sítích Chtěli jsme zítra navštívit radnici, než půjdeme do nemocnice. -"Bylo jim řečeno: Kyjev neexistuje". Jak okupanti podle odváželi Ukrajince do Ruska a Běloruska -Ruskou invazi na Ukrajinu provázejí strašné věci: rabování, znásilňování, vraždy, mučení. -Tento seznam zahrnuje i další položku, o níž je v současnosti mnohem méně informací - přesun místního obyvatelstva na území nepřítele. -Zhruba od poloviny března ruští okupanti "evakuují" Ukrajince z dočasně obsazených osad na své území a na území Běloruska, které samozvaný prezident Lukašenko fakticky předal ruskému vojenskému cvičišti. -"Ukrajinska pravda" vyhledala deportované Ukrajince a jejich příbuzné, aby se jich zeptala, jak probíhá "evakuace" a zda existují nějaké možnosti návratu na Ukrajinu po jejím skončení. -Protagonisté tohoto textu přijali dobrovolnou i nucenou evakuaci mimo Ukrajinu pod psychickým tlakem a ze zoufalství. -Naštěstí žijí a jsou v kontaktu se svými rodinami. -Podle prohlášení ukrajinských úřadů však bylo mnoho občanů odvezeno do Ruska a Běloruska pod tvrdým nátlakem. -Například v Melitopolu Rusové unesli personál druhé porodnice a odebrali děti bez rodičů, včetně dvanáctileté Myroslavy, dcery zesnulého ukrajinského šampiona v plavání Josypa Zachepynského. -Rodina Oleksandra, Maryny a jejich desetileté dcery Valie se do Hostomelu přestěhovala několik měsíců před válkou. -Oleksandr právě dostal práci na letišti Antonov, které se nachází 2,5 kilometru od vesnice. -Rodina se usadila na území vojenského tábora, přestože byli civilisté. -Jako správní členové Plastů si Denys a Maryna předem sbalili zavazadla pro případ nouze, ale ráno 24. února se nestihli evakuovat. -Neměli vlastní auto a příměstské autobusy už nikoho nevozily. -Kolem poledne 24. února spatřili vrtulníky s latinským písmenem V, po nichž následovaly první střely. -Jeden z nich zasáhl sousední dům. -Poté se rodina odebrala do sklepa a zůstala tam dlouhé tři týdny, až do 17. března, kdy je ruská armáda odvezla do Běloruska. -Celkem se v jejich sklepě ukrývalo asi 40 lidí. -Ne všichni se rozhodli jet. -"Dne 24. února v 18 hodin vstoupili do budovy lidé, kteří nemluvili ani rusky, ani ukrajinsky. -Zeptali se: "Je tam někdo?" - "Ano." - "Vylezte!". -Prohledali mě, ptali se mě, kdo je ve sklepě a jestli mám nějaké zbraně. -Všichni muži byli dotázáni, zda sloužili v armádě. -Ženám bylo řečeno: "Přišli jsme vás chránit na příkaz Ramzana Kadyrova". -Byli to čečenští těžkooděnci, dokonce ani ne vojáci, mladí muži ve věku 25-35 let. -Řekli: "(velitel vikingského nacionalistického hnutí UNA-UNSO, které bojovalo na straně Čečenců v první rusko-čečenské válce - UP), nyní jsme přišli na pomoc vám." -Třetí den se nás zeptali, co nám chybí. -Říkáme, že voda byla velkým problémem. -Rozbili obchod, vzali zboží pod záminkou, že si ho stejně vezmou Rusové, a přinesli nám šest lahví. -Na internetu se objevilo i video, na kterém děti děkují Kadyrovovi za jídlo. -Byli to oni, kdo po vyrabování obchodu přinesli párky a řekli: "Chápeme, že je to špatné, ale opravdu potřebujeme video pro Ramzana Achmatoviče." -Nikomu se nechtělo mluvit, ale udělali krásný řez. 
-Naše dítě řeklo: "Máte sedm dní na to, abyste nám vrátili naše telefony" - smáli se. -Také se neustále oddělovali od Rusů. -Říkali, jak je dobře, že přijeli, protože nechtějí válku, podporují Ukrajince a jsou obecně hodní: "Putin je hajzl, Kadyrov je hajzl, ale my nemůžeme nic dělat, protože tam jsou naše rodiny." -A většinou nebojovali, ale chodili rabovat do obchodů a loupit. -Přinesli si kuřata, na jednu nohu přivázali stuhu svatého Jiří a na druhou vlastní bílou pásku a nazvali ji "domobrana". -Na Ukrajinu přijeli nepřipraveni. -Nevím, jak fungovala jejich inteligence... Když je v Buči zbili, tak strašně, že ani neodnesli mrtvé, tak se nás ptali: "Máte dělostřelectvo?" zeptali jsme se. -Navíc jsme měli kombinované oddíly - Čečenci se navzájem neznali a v prvních dnech si vyžádali hesla, aby zjistili, zda jsou vlastní, nebo cizí. -Měli jsme je až do 13. března, pak přijela ruská pořádková policie a po ní vojáci z Omsku. -Mezi domy umístili 30-40 kusů zařízení - do našeho domu neustále narážely bajraktary. -Ale tito lidé přišli a řekli nám, že ukrajinské ozbrojené síly už neexistují, ale existuje Azov a podle nich je na Ukrajině 100 000 Azovů. -Od začátku března se prosazuje narativ "Kyjev se již vzdal". -Jednoho dne k nám přišel Rus - buď důstojník letectva, nebo zástupce FSB - a řekl, že bude evakuace. -A poslouchali jsme rádio - mluvili o Bucha a Gostomel a my jsme si mysleli, že nám možná dají "zelený koridor". -Od samého začátku jsme Čečencům předali naše seznamy a oni slíbili, že se spojí s ukrajinským velením. -Ale bylo nám řečeno: "Odvezou vás do Běloruska a pak možná do Rostova". -Říkáme, že nechceme jít ani jedním směrem. -Na to odpověděli: "Tak se slitujte nad psychikou svých dětí!" -Jak to fungovalo: zahnali nás do sklepa a začali střílet zpod domu, buď z Gradů, nebo z minometů. -A pak se to vrátilo... Sousední domy prostě vyhořely, některé se zřítily. -Náš dům byl zasažen přímo ve třetím patře. -Později souhlasili, že nás odvezou jen do Běloruska, a slíbili, že nás předají pohraničníkům a Červenému kříži. -Někteří z nich tam nejeli s tím, že je to pro zrádce vlasti - říkali, že "tam tě zastřelí, prodají na orgány". -Vezli nás přes Černobyl a po obou stranách silnice byly hořící budovy a rozbité zařízení, i když nám Bělorusové říkali, že Rusové jim zařízení okamžitě odvezou. -Je tu spousta kaponiér, zakopaného vybavení a vojáků. -Na kontrolním stanovišti Komar v Bělorusku jsme prošli improvizovanou kontrolou, mnozí z nás neměli vůbec žádné doklady, protože byly spáleny. -Byli jsme ubytováni ve stanech Červeného kříže a dostali jsme čaj. -A pak uslyšíme výstřely! Střílí raketa a je vidět stopa - byl to Iskander, který střílel na Kyjev. -Přestože tito lidé z Běloruského červeného kříže řekli: "Ne, to letí letadla, která se otáčejí na hranicích". -Ale my pocházíme z letectví, máme vzdělání, rozumíme tomu, co to je. -Pak nás naložili do autobusu a odvezli do sanatoria Chonki u Gomelu. -Přišel jeho šéf Vasyl Venger a řekl: Venger mu řekl: "Tady, já jsem Chochol, jsem z Černihova. -Ale chápu, že Putin a Lukašenko se nezastaví, dokud s těmi vašimi zloději neskoncují. -Chudáci lidé trpí! -A náš Lukašenko je tak dobrý! -Udělá, co řekne." -Nabídli jsme rozhovor běloruským novinářům, ale nikdo nechtěl. -Správa sanatoria nám řekla: "Jste zrádci!" -A lidé z Červeného kříže a kanceláře OSN (alespoň tak se jim říkalo) šířili dezinformace, že muži nesmějí do Polska. -Mnozí tomu uvěřili a báli se Bělorusko opustit. 
-Přesto jsme se rozhodli odjet, i když jsme měli problém s doklady - dcera neměla pas. -Měli jsme ho dostat v pondělí a válka začala ve čtvrtek. -Ukrajinský konzulát nám nepomohl a v Minsku na nádraží nám řekli, že bez pasů nikdo nikoho do autobusu nevezme. -Tady bych řekl, že nám hodně pomohli bělorusští dobrovolníci. -Byli jsme ubytováni v Minsku a doprovázeli nás. -V sanatoriu mě nedrželi násilím. -Naší druhé skupině zástupce pro migraci řekl, že zde můžete zůstat maximálně týden a půl, protože Bělorusko není Evropa a nejsou zde žádné dávky. -Našli jsme vnitrostátního dopravce, který souhlasil, že nás vezme do Varšavy. -Když jsme jeli poblíž Mozyru (asi 50 kilometrů od ukrajinských hranic - UP), viděli jsme, jak na Ukrajinu odpalují balistické rakety: raketa nejprve odstartuje, krásně se rozzáří a pak zhasne. -Lidé, kteří nastoupili do autobusu v Mozyru, uvedli, že Rusové tam neustále střílejí ze střelnic. -Ale mohu říci, že Bělorusové vůbec nechtějí bojovat. -Generálové jsou propouštěni z armády. -Jedna žena nám řekla, že by pro svého syna něco rozbila, kdyby byl odveden. -Severní část Běloruska, kde se nachází Minsk, však nevěří vůbec. -Řekli: Řekli: "Lžete, jsme mírumilovný národ." -Nesouhlasí s tím, že jejich cvičiště slouží k ostřelování Ukrajiny. -V Polsku na nás čekali přátelé a odvezli nás do Estonska. -V současné době procházíme registrací a rozhodujeme se, co budeme dělat dál. -V Estonsku je pravděpodobně tolik ukrajinských vlajek jako estonských." -Marina žila ve vojenském táboře v Gostomelu s rodinou svého bratra - jeho ženou a dvěma dětmi ve věku 18 a 22 let. -Ráno 24. února zavolala svým synovcům a požádala je, aby shromáždili nejnutnější věci a dokumenty. -Večer šli přespat do sklepa sousedního domu. -Marina sama se domů vrátit nemohla a v podstatě ani nebude moci - dům je pryč. -O evakuaci nikdo nic nevěděl. -Nic se neděje. -Je to možné v pondělí 25. dubna? -Děti můžete vzít s sebou, je tu spousta hraček :) -Británie chce vystěhovat všechny Rusy ze svého území a zabavit jim veškerý majetek! -Už se malý Dušan nestydí? -Ondřej a jeho otec nás navštívili. -Nikdo další nepřišel, i když po ulici chodilo mnoho koledníků. -Dnes jsme doma. -Děti se učily, teď si hrají. -Odpoledne musím jít do Lidlu nakoupit nějaké potraviny. -Knedlíky určitě přivezu jindy! -Dnes na ně nemám energii. -V kolik hodin se sejdeme na praní prádla? -Životopis prodávajícího v ukrajinštině -Andrey Levinov -(Andrij V. Levinov) -Datum narození: 02.22.1972 -Město: Kyjev -Mobilní telefon: (000) 000 00 00 -E-mail: 0000@gmail.com -Cíl: Obsadit volnou pozici v oblasti prodeje. -Vzdělání: -Září 1995 - červen 1999, Kyjevská národní univerzita technologie a designu (KNUTD), Fakulta obchodu a práva, obor "Hotelnictví a restaurace", bakalářský titul (prezenční). -Další vzdělání: -Březen - prosinec 2008 - kurzy angličtiny, "Communicate Freely", Kyjev. -Červenec 2010 - Kurzy "Počítačová pokladna", Kyjev. -Pracovní zkušenosti: -Prodejce. -červen 2000 - srpen 2002 - obchod s dětským oblečením Bunny, Kyjev. -Funkční odpovědnosti: -- poradenství zákazníkům; -- pracovat s pokladnou; -- vystavení zboží; -- práce v programu My Warehouse; -- zahájení/ukončení směny; -- Účast na inventarizaci; -- poradenství a prodej produktů na Instagramu a Telegramu; -- naplnění stránek Instagramu novými produkty; -- fotografování produktů pro Instagram. -Asistent prodeje, vedoucí prodejce -srpen 2002 - březen 2014. Stavební obchod Vector, Kyjev. 
-Funkční odpovědnosti: -- poradenství a prodej výrobků maloobchodním zákazníkům a stavebním firmám; -- zpracování plateb zákazníkům; -- organizace práce prodejců (5 osob); -- provádění plánu prodeje; -- informování stálých zákazníků o speciálních nabídkách a akcích; -- zajištění pořádku na obchodním parketu; -- vystavení zboží (merchandising). -Starší prodejce -březen 2014 - současnost. Obchod s nábytkem Sofino+, Kyjev. -Funkční odpovědnosti: -- organizace práce oddělení čalouněného nábytku (4 osoby); -- poradenství zákazníkům; -- vyjasnění potřeb pro sortiment pohovek a jejich konfiguraci, výběr látek; -- zpracování prodeje a nákupu v aplikaci CleverSofa; -- práce s pokladnou; -- vedení dokumentace; -- udržování pořádku v hale; -- Příprava a umístění reklamních materiálů na nástěnkách; -- vyřizování příchozích hovorů a pošty. -- udržení klientské základny. -Odborné dovednosti: -- Umím pracovat s počítačem a kancelářskou technikou; -- zkušenosti s prací s nepotravinářskými výrobky; -- dovednosti při práci s pokladnou; -- schopnost řešit konfliktní situace; -- schopnost pracovat v týmu; -- zkušenosti s prováděním inventur; -- dovednosti v oblasti řízení lidských zdrojů; -- kompetentní psaný a mluvený projev; -- Jazyky: ukrajinština - rodný jazyk; ruština - plynně; angličtina - středně pokročilý. -Osobní vlastnosti: -Slušný, společenský, vzdělaný, reprezentativní, orientovaný na výsledky. -Další informace: -Není ženatý. -Chodím na sport. -Žádné špatné návyky. -Jsem připraven pracovat v noci. -Který den chce kolega uklidit dům a kolik má dům oken? -Jak dlouho jste tento titul studoval? -Toto je můj ukrajinský spolužák -Strýc jí dal z domova kolečkové brusle a skateboard a ona na nich doma jezdila, takže to musela mít. -Kde se dá koupit klobouk s pohyblivými ušima? -Moje matka zde bude bydlet na koleji, protože poplatky jsou nízké. -Hlavní je, že mám kde bydlet. -Už na to nemám sílu. -Jdeme k lékařské komisi -Ano, bylo by to hezké, ale vím, že nebyla žádná volná místa, a v zásadě mají teď všechny hodiny podle rozvrhu a učí docela dobře. -Dejte nám na zítřek vysavač na úklid. -Tohle je moje první "kolo" po synovi. -Podělím se s vámi o své myšlenky. -Manželství je pro mě svátost dvou lidí, kteří o sobě nemluví s rodinou ani s přáteli. -Muž a žena řeší problémy v rodině sami, zejména bez účasti příbuzných. -Brzy se vrátím na Ukrajinu a budu pokračovat v práci -Doufám, že vše bude v pořádku (( a vyřešíme naši otázku bydlení ( -Lydie už byla přijata do školy, ale kvůli její nemoci půjdeme v úterý, informovala jsem dnes učitelku. -na začátku si ji můžete vyzkoušet zdarma. -V neděli mám volno. Často jsme jezdili do Charkova, bylo tam krásně. -Nevím, kolik je tam místa, takže nemůžu nic plánovat s nábytkem, chci se tento týden konečně podívat na byt a pak pochopit, jaký nábytek potřebuji. -Situace v souvislosti s ruskou invazí - brífink poradce vedoucího prezidentské kanceláře Oleksije Arestovyče (10.04.2022) -Poradce vedoucího prezidentské kanceláře vyprávěl o hrdinském činu vysokého důstojníka pohraničních jednotek v Mariupolu - byl obklíčen a zraněn, a tak se odpálil s radiostanicí, aby zabránil tomu, aby se dostala k nepříteli. -Před půl rokem jsme si v Pamětním centru holocaustu Babyn Jar v Kyjevě připomněli 80. výročí masového střílení ukrajinských Židů německými vojáky v Babyn Jaru. -Měl jsem tu čest promluvit po třech hlavách států, včetně německého prezidenta. 
-Hovořil o "společném základu mezinárodního práva a lidské důstojnosti, o svobodě lidí zvolit si vlastní cestu a žít v územní celistvosti, o mírové a bezpečné Evropě". -To je základ, který musíme chránit - je to také součást naší odpovědnosti související s naší historií." -Pokud se "zlí démoni minulosti dnes objevují v novém hávu," řekl, "pak pro nás Němce existuje jediná odpověď: už nikdy! -Boj musí pokračovat." -Rusko dnes zaútočilo na mírumilovnou zemi, bombardovalo a zabilo tisíce civilistů, nechalo obyvatele měst, která zablokovalo, hladovět a umírat na nemoci. -Ruské jednotky provádějí masové popravy Ukrajinců, které i vizuálně připomínají popravy v Babyn Jaru. -Němci se s tím ve zprávách setkávají už více než měsíc v reálném čase. -Německo například uvaluje sankce, poskytuje humanitární pomoc a zbraně, což bylo ještě nedávno nepředstavitelné. -Německo odkládá dodávky těžkých zbraní, které Ukrajina potřebuje. -"Nikdy více!" však neznamená jen odpor k hákovému kříži. -To znamená bojovat všemi možnými prostředky proti masovému vraždění, genocidě, válečným zločinům a zvěrstvům. -Neexistuje snadný způsob, jak porazit zlo a zastavit zvěrstva, která se dějí na Ukrajině, bez rizika a obětí. -Často se mi tady zdá o tátovi. -Zemřel den po mých narozeninách. -Sny jsou obecně dobré. -Po metodice z roku 2014 se Rusko zoufale snaží uspořádat falešné "referendum" o "lidové republice" v Chersonu. -Podpora mezi lidmi je nulová, takže jde o naprostou fikci. -Pokud budou tyto plány realizovány, měly by být na Rusko uvaleny tvrdé sankce. -Cherson je a vždy bude Ukrajinou. -Při návratu z Turecka na Ukrajinu a v rámci dialogu mezi vedoucími představiteli jsem byl ve Varšavě přijat prezidentem Andrzejem Dudou. -Poděkoval jsem za zvýšenou vojenskou, finanční a humanitární pomoc Ukrajině. -Jednali jsme o ochraně Ukrajiny a podpoře našeho členství v EU. -Rusko nadále drží v Mariupolu více než 400 000 lidí jako rukojmí a blokuje humanitární pomoc a evakuaci. -Ostřelování pokračuje. -Téměř 3000 novorozenců bude brzy bez léků a jídla. -Svět musí okamžitě jednat! -Ruští barbaři musí zastavit válku proti civilistům a dětem! -Rozhovor s chorvatským kolegou Gordanem Grlićem-Radmanem. -Záhřeb si pamatuje, jak Ukrajina pomohla Chorvatům na počátku 90. let minulého století konkrétními řešeními ve Vlastenecké válce. -Chorvatsko nyní připravuje rozhodnutí, kterým se mu odvděčí. -Poděkoval jsem jim také za podporu sankcí EU proti ruským útočníkům. -Pořádáte sbírky na pomoc zvířatům, která se ocitla v České republice nebo na Ukrajině? -Říkala, že dorty jsou tam levné a chutné. -Dominiko, kdo nakupuje spotřební materiál? -Je to trvalý postup, nebo se musím objednat, jakmile skončí? -Nemůžu si ji stáhnout, mám starý telefon. Neumím německy (( -Dobré ráno, odjíždím v 10 hodin... napište mi znovu adresu, někde jsem ztratil textovou zprávu. -Neměňte téma, ptám se přímo vás, jestli chcete pokračovat v komunikaci, pokud ne, napište mi už. -Daří se nám dobře, máme zkušenosti s topením v kamnech, na Ukrajině máme plyn, ale protože je velmi drahý, topili jsme v kamnech dřevem. -"Lucie, můžu ti pomoct v kuchyni? -Často s vámi komunikuji ve své mysli. Zdá se mi, že slyšíte mé myšlenky. -MATERIÁLNÍ POMOC: Poskytujeme materiální a finanční pomoc. -Vybavujeme byty pro dlouhodobé bydlení. Informujeme občany o nejnutnější pomoci. -Cesta sem nám trvala 4 dny, byla to noční můra, ne cesta. -Projížděli jsme Kyjevskou oblastí, městem Irpin, a bylo to strašné, střelba, sirény. -Mám o ně také obavy. -Jsem moc ráda, že jsi v pořádku. 
-Jít nebo nejít, prostě jsi mě musel pochopit. -Musím platit tarif, musím se kontrolovat. -Jak zjistím, kolik spotřebuji? -A jak zaplatím? -Želva vyletěla z okna při ostřelování domu v Obolonu -Spadla za plot fotbalového hřiště a poranila si tlapku. -Zvíře nyní ošetřují lékaři Červeného kříže. -Pomozte nám prosím šířit informace, aby ji majitelé mohli co nejdříve najít! -Zachráněné želvě přejeme brzké uzdravení a návrat k rodině. -Ano, v České republice jsou velmi milí lidé, přijela jsem na prázdniny navštívit své děti ve věku 10 a 15 let a babičku (jsou v Mariánských Lázních). -Vaše nabídka pronájmu je pro mě a mého bratra zajímavá. -Je nám 27 let. -Pracujeme v odvětví IT (vývoj her a softwaru) a nemáme žádné zlozvyky - nekouříme, nepijeme alkohol atd. -Potřebujeme byt k bydlení, takže hledáme zařízený, nebudeme v něm pořádat žádné "večírky". Nemáme auto. -Bylo by zajímavé znát adresu, abych mohl zkontrolovat dostupnost stanic metra, trhů (obchodů). -Zítra musím celý den pracovat, až do 12 hodin ve Fpointu, pak mám přednášky z angličtiny a večer si musím dojít do kanceláře jazykové školy pro klíče od učebny. -Omluvte nás, prosím. -Prosím vás laskavě, můžete to udělat ve čtvrtek? -2 hodiny jízdy)) procházky, maminka nakupuje a zpět)! -Každému je dáno podle jeho sil. -Jsou i mnohem smutnější příběhy, věřte mi. -Proto jsem potřeboval projít takovými událostmi, abych se změnil. -Tóra říká, že Bůh stvořil v člověku princip zla, aby se mohl duchovně změnit a stát se lepším v průběhu svého života. -Květiny jsou v plném květu. Všechno je v pořádku. Všichni jsou odpočatí a velmi krásní. -Ahoj, omlouvám se, právě jsem se probudil, špatně jsem spal. -V pondělí jsem se vrátila dřív a budík se spustil(( -Jen v jakém stavu jsme tento čas strávili společně..... -Mám na sebe výčitky a je jich mnoho.... A nemohu si pomoci..... -Ano, televize je toho plná -Na webových stránkách, na fotografii je televize, zeptejte se, zda bude možné přijet dnes... -Přeji vám, abyste si našli dobrého asistenta 😊 -Může mi někdo pomoci, potřebuji oblečení pro ženu velikosti XS nebo S, výška 165, a také pro její dcery (8, 10, 12 let) Děkuji. -Dobrý den, právě mi byla nabídnuta práce zubního lékaře v Praze, takže moc děkuji za váš zájem a omlouvám se. -V Chersonu okupanti zesměšnili památník Sláva Ukrajině na Perekopské ulici. -Podrobnosti: Je zaznamenáno, že neznámé osoby strhly vlajku Evropské unie, rozbily panely s fotografiemi Nebeské setniny a padlých v rusko-ukrajinské válce. -Okupanti strhli portréty hrdinů Nebeské setniny a padlých účastníků rusko-ukrajinské války. -Vlajka byla stažena. -Potřebujeme protizánětlivou pilulku -A my už máme produkt z jeslí. -Těšíme se na vás příště -Chápu, že jako dobrovolník a dobrý člověk nám chcete upřímně pomoci a podpořit nás. -A chápu, že potřebujete fotku, jak pomáháte, a tak dále... -Ale pochopte mě, na Ukrajině se mi žilo dobře, žádnou pomoc jsem nepotřeboval. -A nechci, aby to viděl někdo z mých přátel a tak dále. Prosím, nechápejte mě špatně ☺️. -Hledám tým, se kterým bych mohla začít, ve škole mě bavila chemie, učím se plynně česky, sním o práci pro Tevu. -Oba jsme začátečníci, neznáme nic než abecedu. -Takže já jsem na cestě a ty jsi ve vlaku. -Ale vzduch je čistý, je to plus.... -Budu čekat, až mi zavoláš. Mám si vzít něco z vybavení? -Vytvořili jsme anglickou verzi a přidali nové partnery -Přemýšlel jsem o práci. 
-Pokud bude ředitel spokojen, tak můžu nastoupit příští týden, protože zítra mám v plánu jít na úřad práce odevzdat dokumenty (pokud to půjde, protože jsou tam velmi dlouhé fronty) a v sobotu prší :). -V pátek jsme byli ve třídě, bylo nám řečeno, že se můžeme vrátit v pondělí. -Můj bývalý manžel se nemůže uklidnit, píše mi básně (skutečné drama). -Chci, aby vše, co nás s ním oficiálně spojuje, co nejdříve skončilo..... -Viko, jak často chodíš na večeři do drahé restaurace? -Jednou týdně jdu na večeři do drahé restaurace. -Nechci mučit tebe ani sebe, nějak se s tím vyrovnám. -Varuji vás, budu mít na sobě sportovní oblečení. -Přihlásila jsem se online. Pokusím se tam jít dnes po práci. -Zahraniční volební okrsek Ukrajiny (FED Ukrajiny) je volební okrsek, který sdružuje volební místnosti nacházející se mimo území Ukrajiny a skládá se z volebních místností na ukrajinských velvyslanectvích a konzulátech a z místností na vojenských základnách v zahraničí,[pozn. 1] kde jsou nasazeny ukrajinské mírové kontingenty (Kosovo a DR Kongo).Ústřední volební komise plní funkci okrskové volební komise pro FED. -V zahraničním volebním obvodu se konají pouze celostátní volby: prezidentské a parlamentní volby a celoukrajinská referenda. -Na vysokých školách se nekonají místní volby. -Potřebuji byt, budeme se stěhovat, našli jsme byt a není v něm žádný nábytek, lednička, pračka.... -Dala jsem prádlo vyprat, přišla domů a přeložila, co bylo napsáno na židli u dveří prádelny, oprava kanálu, můžu prát, nebo musím vypnout prádelnu. -Pane, prosím, toto jsou pozvánky pro ukrajinské děti, které budou vstupovat do tělocvičny. -Podařilo se vám s někým spřátelit? -Dobrý večer, nemohu si pomoci, ale musím se s vámi o to podělit. Mám zprávy z Mariupolu o svém synovi - je naživu a v řadách. -Ale nemám načíst nebo otevřít stránku velmi slabý internet -Dnes nemám svůj den. rozbily se mi žaluzie v obýváku. jedna ráno, druhá večer. -Možná je čas jít spát -Pokles HDP Ukrajiny v důsledku agresivní války, kterou proti ní vede Ruská federace, by mohl v roce 2022 dosáhnout mínus 10 %, ale tyto prognózy závisí na vývoji situace na Ukrajině. -Uvádí to nejnovější zpráva MMF o Ukrajině, kterou v pondělí obdržela agentura Ukrinform. -Dokument zejména předpovídá, že reálný růst HDP Ukrajiny bude v roce 2022 činit mínus 10 %, a to s ohledem na to, že válečný stav na Ukrajině nebude trvat příliš dlouho. -V této částce je již zohledněno, že Ukrajina obdržela 1,4 miliardy dolarů z nouzového financování MMF. -Pro srovnání, v "kovidovém" roce 2020 byl reálný růst HDP Ukrajiny rovněž záporný, a to mínus 4 %, ale v roce 2021 to bylo již plus 3,2 %. -Kromě toho se uvádí, že objem výroby na Ukrajině může v důsledku války klesnout o 25-35 %. -Tato prognóza vychází ze skutečných trendů pozorovaných v Iráku, Libanonu, Sýrii, Jemenu a dalších zemích, kde probíhají vojenské operace. -Dalším důležitým ukazatelem je schodek zahraničního financování, který podle prognóz Fondu dosáhne 4,8 miliardy USD a může se měnit v závislosti na délce trvání válečného konfliktu. -MMF nepředpovídá, jaký může být kurz hřivny vůči americkému dolaru nebo euru. -Fond naopak pozitivně hodnotí kroky, které ukrajinská vláda podnikla ke snížení negativního dopadu na národní měnu. -Očekává se, že veřejný dluh Ukrajiny vzroste v roce 2022 na 60 % HDP, protože humanitární krize a obnova ukrajinské infrastruktury si vyžádají reakci. -MMF rovněž upozornil, že válka Ruska proti Ukrajině již vedla k prudkému nárůstu cen energií, což bude mít negativní dopad na světovou ekonomiku. 
-Kromě toho utrpí trhy s potravinami. -Podle MMF bude Rusko také zažívat hlubokou recesi a prognózy fondu v této otázce mají být oznámeny příští měsíc. -Jak informovala agentura Ukrinform, Výkonná rada MMF schválila 9. března vyplacení 1,4 miliardy dolarů (1005,9 milionu ZPČ) v rámci nástroje rychlého financování (RFI). -Cílem balíčku pomoci je pomoci Ukrajině uspokojit její okamžité finanční potřeby a zmírnit dopady války na národní hospodářství. -Svůj příspěvek můžete upravit. Nejdůležitější je, aby obsahoval pravdu. -Najdeme způsob, jak to vymyslet za pochodu. Odepíšu vám. Děkuji. -Jsem vděčný každému za jeho zkušenost, i když je bolestná. -Věřím, že jsem se z každé situace poučil, abych v budoucnu neopakoval chyby. -Mám velmi důležitý dotaz, pomohou mi zaregistrovat vnučku, je jí 5 let (průduškové astma), nyní má rýmu a sípání ... -Potřebujeme pro ni rodinného lékaře, který by trochu rozuměl ukrajinsky nebo rusky. -Naléhavě potřebujeme konzultaci s lékařem a máme velké obavy o její stav... -Bydlíme v Dolních Chabrech v České republice. potřebujeme lékaře, který se nachází buď tady v Dolních Chabrech, nebo někde poblíž, případně v Brně, ale ne daleko od metra, abychom se tam dostali. -Už jsme domluvili, že ve 13.00 přijedou nějací Češi a pomohou nám lednice přestěhovat. -Souhlasíme, domluvte si prohlídku a dejte nám vědět, kdy tam budete. -Je váš otec jediný, kdo mluví rusky a rozumí jí? -Jsem z Ukrajiny a hledám práci -Chtěla bych pracovat v kavárně -Předtím jsem pracovala v kavárně na Ukrajině, mám asi tříletou praxi. -ale bohužel ještě neumím česky, teprve se učím. -Máte nějaká volná místa? Pokud ano, mohl bych u vás pracovat? -Milý bratře Alberte, moje rodina a já jsme ti velmi vděční za tvou podporu a za dar, který jsi nám dal! -Děkujeme vám z celého srdce. -S úctou rodina Bezverkhi. -Jaké služby/aktivity jsou podle vás pro starší lidi potřebné/chybí? -Už dlouho se chci zeptat a pořád zapomínám. -A jak je to u nás s očkováním proti COVID? -Neměla jsem čas nechat si doma aplikovat posilovací (třetí) dávku vakcíny. Je u nás toto očkování hrazené? -Moc se mi to líbilo, děkuji za organizaci. -Pokud jsem to správně pochopil, škola nám pošle e-mail, abychom se přihlásili do nabídky. -Potřebujeme propustku na čtvrtek, jdeme s dětmi k lékaři. -Hlavní je, abychom si rozuměli -Poté, co se objevily důkazy o tom, že se ruské síly dopouštějí na Ukrajině zvěrstev vůči civilistům, mezinárodní společenství nadále vyjadřuje šok a rozhořčení a Moskva tyto zprávy odmítá jako "provokaci". -"Zprávy o zabitých, znásilněných a vážně zraněných ukrajinských civilistech ruskými silami jsou odsouzeníhodné," řekla 4. dubna novozélandská premiérka Jacinda Ardernová novinářům ve Wellingtonu. -"Rusko se musí zodpovídat světu za to, co udělalo," dodala s tím, že její vláda bude jednat o dalších opatřeních na podporu Ukrajiny v boji proti ruské invazi. -Japonský premiér Fumio Kišida označil tyto incidenty za "porušení mezinárodního práva". -Tato prohlášení následovala po zprávách, že ruské jednotky, které se po několikatýdenní okupaci oblasti stáhly z Kyjeva, zastřelily stovky civilistů a naházely je do masových hrobů nebo je nechaly ležet na ulicích kyjevského předměstí Bucha. -Fotografie, na nichž jsou údajně vidět těla popravených civilistů se svázanýma rukama, mnohé šokovaly a vedly k výzvám k přísnějším sankcím proti Rusku a k trestnímu stíhání odpovědných osob. -Francouzský prezident Emmanuel Macron 4. 
dubna v rozhlasovém rozhovoru uvedl, že existují známky toho, že se ruské jednotky dopustily v Bukurešti "válečných zločinů". -"To, co se stalo v Bucha, vyžaduje nové kolo sankcí a velmi jasná opatření," řekl Macron a dodal, že další sankce by měly být zaměřeny na ruský vývoz uhlí a ropy. -Španělský premiér Pedro Sanchez prohlásil, že ruská vojska by mohla v Bukurešti zajít tak daleko, že by se dopustila "genocidy". -"Uděláme vše pro to, aby ti, kdo tyto válečné zločiny spáchali, nezůstali nepotrestáni," řekl Sanchez v Madridu. -Mluvčí ruského ministerstva zahraničí Maria Zacharovová v pozdním projevu ve státní televizi 3. dubna odmítla tato obvinění jako "provokaci". -Bez důkazů tvrdila, že Spojené státy a NATO si "objednaly" snímky, aby zdiskreditovaly Rusko. -"V tomto případě se mi zdá, že skutečnost, že tato prohlášení byla učiněna v prvních minutách po objevení těchto materiálů, nevyvolává žádné pochybnosti o tom, kdo si tento příběh 'objednal'," řekla Zacharovová. -Dříve ruské ministerstvo obrany také bez důkazů tvrdilo, že obraz Bucha je "další produkcí kyjevského režimu" a že všichni ruští vojáci opustili město do 30. března. -Moskva požádala Radu bezpečnosti OSN, aby se 4. dubna sešla k jednání o události, kterou označila za "provokaci ukrajinských radikálů" v Bukurešti. -Ruský Vyšetřovací výbor vydal 4. dubna prohlášení, v němž oznámil "vyšetřování" obvinění, že Ukrajina šířila "záměrně nepravdivé informace" o akcích ruských vojáků v Buči. -Ukrajinský prezident Volodymyr Zelenskyj vystoupil 3. dubna, obvinil ruské vojáky, že ve městě páchají "genocidu", a vzkázal představitelům Kremlu, že by se měli přijet podívat do Buchy, co jejich armáda provedla. -"Chci, aby všichni představitelé Ruské federace viděli, jak se plní jejich příkazy," řekl Zelenskyj ve videoposelství, v němž přešel z ukrajinštiny do ruštiny. -A za to neseme společnou odpovědnost. -Za ty vraždy, za to mučení... Za ty rány do týla," řekl. -Řekl, že ruský prezident Vladimir Putin a ruská armáda by měli být pohnáni k odpovědnosti za akce vojáků na Ukrajině. -"Když najdeme lidi s rukama svázanýma za zády a s uříznutou hlavou, nechápu to," řekl k výjevům obětí rozesetých v ulicích města Bucha, které leží asi 35 kilometrů severozápadně od Kyjeva. -Dne 2. dubna viděl zpravodaj ukrajinského vysílání Rádia Svoboda na ulicích malého města těla, která vypadala jako těla civilistů. -Jen na jednom místě viděl zpravodaj na ulici až deset mrtvých. -Novináři agentury AP viděli na různých místech města Bucha těla nejméně 21 lidí. -Těla jedné skupiny devíti lidí - všichni v civilním oblečení - byla roztroušena na zemi poblíž místa, které podle místních obyvatel využívaly ruské síly jako základnu. -Oběti byly zřejmě zabity zblízka. -Ukrajinské úřady uvedly, že v oblasti Kyjeva, kterou do minulého týdne kontrolovaly ruské síly, byla nalezena těla nejméně 410 civilistů. -Dobrý den, máte ještě volná místa na úterní kurz češtiny pro dospělé od 19:00? -Osmnáctý den totální války na Ukrajině se chýlí ke konci a s ním i další kalendářní týden v těchto obtížných podmínkách. -Rozhodli jsme se připravit na tento týden reportáž o záchraně zvířat. -Pro dnešek: -Koordinovali jsme dodávku potravin do útulku Iriny Dobroljubové v Kyjevě. -Kromě toho jsme obdrželi velkou dávku mokrého krmiva pro kočky, z níž byla část rozdělena mezi útulky a zbytek byl určen pro potřeby obyvatel města (podrobnosti viz předchozí příspěvek). -Dnes jsme nakoupili dalších 1,5 tuny krmiva a poslali je do Poltavské oblasti. 
-Potraviny jsme poslali také do Kyjeva, Kryvého Rihu a Bílé Cerkve. -Dobrovolníci z Koncha Zaspa společně zachránili tygra Shaniho. -Nyní je na cestě do specializovaného zařízení v Polsku. -Za týden: -Před 4 dny jsme udělali malý zázrak - pomohli jsme s evakuací 60 koček z útulku "Chci kočku" v zahraničí. -A dnes odjelo do Varšavy dalších 70 koček z útulku Iryny Dobroljubové. -Během těchto sedmi dnů byla poskytnuta pomoc více než 50 útulkům a miniútulkům v Kyjevě a více než stovce zvířecích institucí po celé Ukrajině. -V tomto období jsme poskytli finanční pomoc ve výši 850 000 UAH. -Tento týden jsme zpracovali více než 8 000 žádostí. -Každý den uskutečníme stovky telefonátů, abychom zkoordinovali ty, kteří mohou pomoci, a ty, kteří to potřebují. -Vidíme a slyšíme každého a jsme vděční každému, komu není osud našich přátel lhostejný. -Jen díky společnému úsilí dobrovolníků a zainteresovaných občanů zachraňujeme stovky životů denně. -Každý život je přece důležitý. -Co se stalo muži na autobusové zastávce, že přijela policie a záchranka? Je v pořádku? -Arcidiecézní charita Olomouc chce uspořádat projekt, který bude zajímavý pro Ukrajince žijící v České republice, konkrétně ve městě Olomouc. -Zeptám se kamarádky, byla tam od začátku do konce. -Podíval jsem se na předpověď počasí. Bude zima. -Bolševičtí útočníci se plně pomstili za své ztráty v bitvě. -Zastřelen byl učitel Dmytro Pavlyckij, jeho neznámý příbuzný a mladý muž Borys Oleksijenko. -Jejich těla byla vhozena do "hluboké žumpy, která byla zasypána hlínou". -Podle jeho vzpomínek "bylo ve městě jakési mrtvé ticho". -Občas některý z obyvatel přebíhal z domu do domu. -Na ulicích se neustále nacházela těla umučených bolševiky. -Hosté na něj museli zapomenout, dal jsem ho do skříně. -A když odejdu, nikdo nebude doma. -v 7.30 musím hercům, režisérům a divadelním kritikům předčítat dějiny světového divadla. -Projekty hodnotím večer. po večeři -Vaše káva již dorazila, mohu ji přinést zítra) -Všichni byli nemocní a měli příznaky, ale do nemocnice jsme nejeli. -Chtěl jsem vám říct, že jsem již připojil české mobilní číslo. -Nepotřeboval jsem k tomu žádné další zařízení, šel jsem k O2 a oni mi vyrobili elektronickou SIM kartu, kterou jsem připojil k telefonu, takže nepotřebuji žádné další zařízení. -Dobře, pak můžeme jít na místo u řeky. -2. Školní jídelna mateřské školy zajišťuje stravování řádně přihlášených dětí ve věku od 2 do 6 let, dětí s odkladem školní docházky (7 let) a stravování zaměstnanců mateřské školy. -Luboši, Ihor se ptá, jestli se ti hodí jet zítra (má volno a může jet, protože auto ještě nebylo zkontrolováno na nádraží). -Velmi se omlouváme, že vás tolik obtěžujeme. -Moc nám pomáháte, děkujeme! -Zítra půjdu do práce, aniž bych prošel komisí, a až se můj lékař uzdraví, projdu komisí, že? -Nyní dodáváme medicinální kyslík do všech nemocnic v naší zemi. -Prosím, řekněte mi, zda máte přibližný harmonogram úklidu bytů na měsíc dopředu, alespoň pro přibližnou představu, které dny? -Zvládám to. -Je to pro mě trochu těžké, ale chápu, že touto fází musím projít. -Sejdeme se ve čtvrtek a já ti všechno řeknu. -Obecně je vše v pořádku. -Už teď mi tečou slzy jen proto, že jsi nedávno vyšel z nemocnice a pomáháš mi... opravdu se to může stát? -Mohu vám později napsat, co skutečně potřebujeme? -Ahoj, už se cítím o něco lépe. -Dnes už byl ve školce. -Dokonce i dnes jsem tam už spal. -Takže jsme již dosáhli velkého pokroku. -Co napíšeš zítra, jestli se sejdeme, nebo ne, možná se tvé plány změní... -Těšíme se na setkání s ní s velkou radostí! 
-Této podpory si opravdu vážím! -Anita se také zmínila o dnešním pátku a navrhla, aby s Annou přišly do vašeho centra po Annině škole. -Mám chodit v pondělí, nebo ve středu? -Je mi špatně, mám kašel, horečku, hlas se ztrácí, slabost a závratě. -Ano, děkuji mnohokrát, Anna také položí trasu do geolokace v telefonu, abychom se tam mohli dostat co nejdříve. -Právě jsem to napsal ve škole při hodině -Omlouvám se, že jsem vás obtěžoval, jen jsem si dělal velké starosti. -Ale nerozuměli jsme si. -V našem domě se armáda neuchytila. -Náš dům se nachází v blízkosti vojenské jednotky a ropného skladu. -Navíc chodíme do práce a peníze, které vybíráme. -Zašlete prosím stránku s vízem nebo vstupním razítkem (červené s datem). -Bez uprchlického víza nemůžeme poskytovat humanitární pomoc. -Splachovací nádržka nefunguje správně, špatně splachuje. Potřebujeme mistra. Je třeba ji opravit. -Samozřejmě mi ho bylo líto. -Můj nejlepší přítel byl zabit přímo před jeho očima. -Měl jsem spoustu otázek na Boha, proč se v mém životě vyskytly situace, kvůli kterým jsem ztratil víru ve všechno. -Že mě vnímáte jako tiskového mluvčího. -Tento robot je velký, žlutý a silný. Je mu třicet dva let -Doporučuji vám přečíst si Joea Dispenzu, který se vědecky zabývá výzkumem vlivu vnějších faktorů a tvorbou nových neuronů spojení. -Děkuji, přizpůsobím se vám. To by bylo velmi dobré. -Pokud jste v České republice bez řidičského průkazu, nesmíte řídit.☝☝☝☝ -Další často kladené otázky týkající se řidičských průkazů: -👉 Mám řidičský průkaz platný na Ukrajině, mohu s ním jezdit v České republice? -Pokud však v České republice pobýváte déle než jeden rok, musíte si jej na obecním úřadě vyměnit za český řidičský průkaz. -👉Vypršela platnost řidičského průkazu, co s tím? -- jejichž platnost skončila po 1.1.2022, zůstávají v platnosti. -- Platnost vypršela před 1.1.2022 - je neplatná a musíte se přihlásit ke zkoušce, -Mám platný ukrajinský řidičský průkaz a chci si ho vyměnit za český: -- pobývat v České republice alespoň 185 dní v kalendářním roce. -- je možné ji nahradit v obci podle místa bydliště. -Nemám platný řidičský průkaz a chci český: -- Musíte absolvovat autoškolu a složit řidičskou zkoušku. - -Mohu se vás na něco zeptat? -Víte, jak dopadly životy dívek, které vás opustily? -Chtěli se k vám vrátit? -Vytvořili nové vztahy? -Ale jsou šťastní? -V to doufáme také -Tvůj otec ti možná řekl, že jsou mezi námi svědkové Jehovovi, ale já se k žádnému náboženství nehlásí. -Byla však pokřtěna v pravoslavné církvi. -Respektuji všechna náboženství, která jsou založena na lásce. -Četl jsem Bibli, Korán a začal číst Tóru. -Věřím, že mezi člověkem a Bohem nejsou žádní prostředníci. -Je módní požádat vás o jednu cigaretu? -Leží na posteli, Saša jí v kuchyni zdobí koule), možná se jí zlepší nálada. -A mluví pouze česky -Liší se výuka v pondělí a ve středu? -Na Ukrajině jsou operátoři největších bank se zákazníky v kontaktu 24 hodin denně a všechny problémy jsou řešeny okamžitě :) -To je v pořádku, zvyknu si na novou realitu. -Byl jsem požádán, abych šel se známými svého přítele na výstavu a pomohl jim požádat o ochranná víza. -Byl jsem tu skoro celý den. Byl jsem trochu unavený z lidí. -Prsteny jsme již objednali, chcete je vidět? -Není nový, byl již použit, ale je funkční, ale není potřeba... -Kde ve vašem okolí najdete dobré kadeřnictví? -To byl náš plán. Je lepší, že se občas střídáme? -Je velmi nepozorná a poněkud nezodpovědná. 
-Ujede jí zastávka, zapomene si telefon, nebo jako teď nezvedne telefon a napíše mi, že je ve škole a že je všechno v pořádku :) Vždycky se jí to povede. -A já se o ni bojím, protože není doma, je v jiné zemi, kterou nezná. -Serhij Sydorenko: Naše členství v NATO již není vzdálenou perspektivou -Před devíti lety ukrajinská sociologie ukázala, že 67 % Ukrajinců bylo proti vstupu do NATO a pouze 18 % pro. -Po revoluci důstojnosti, útěku Viktora Janukovyče, anexi Krymu a vypuknutí války v Donbasu začal počet odpůrců NATO klesat, zatímco příznivců naopak přibývalo. -Po vypuknutí totální války s Ruskem zaznamenali ukrajinští sociologové rekordní podporu vstupu Ukrajiny do NATO mezi ukrajinskými občany - více než 75 %. -Navzdory podpoře vstupu Ukrajiny do NATO a pomoci, kterou NATO poskytuje ve válce, se na Alianci a její členské státy snáší kritika, že nedodávají zbraně včas, odmítají uzavřít oblohu a nechtějí "dráždit" Rusko. -Kromě toho existuje vysoké riziko, že Ukrajina bude nucena vzdát se svých aspirací na členství kvůli postoji Ruska, pokud se mu podaří zastavit válku a Ukrajina dostane spolehlivé bezpečnostní záruky, včetně vojenských. -V novém díle podcastu Zatracené otázky diskutujeme s redaktorem Jevropejské pravdy o tom, jak Aliance pomohla Ukrajině po ruské invazi, za co by mělo být NATO kritizováno a za co je kritizováno zbytečně, proč je naše perspektiva členství v NATO blíže než kdykoli v historii a proč by se bezpečnostní záruky, o nichž se jedná při jednáních s Ruskem, mohly stát druhým Budapešťským memorandem. -Hele, nepůjdeme se zítra podívat do kavárny? -Vím, že to pro vás nebylo snadné. -A já udělám vše pro to, abys byl šťastný. -Po tom, co jsi mi dnes napsal, jsem šťastná. -Doufám, že jsi to myslel vážně. 💞 -Dobrovolníci nám sdělili, že v Praze budeme 19. března večer. -Nyní trvá cesta z Kyjeva na hranice s Polskem tři až čtyři dny, protože některé silnice jsou zničené, na některých cestách se střílí a je zde mnoho kontrolních stanovišť, kde se zastavují vozidla a každý je důkladně kontrolován, což zabere čas. -Když hrozí letecký útok, zastaví se doprava, lidé vystoupí a schovají se, kde se dá. -Když už nehrozí žádné nebezpečí, lidé se vrátí do vozidla a pokračují v jízdě. -Internet nefunguje vždy dobře. -Proto vám nemohu vždy odpovědět hned. -Upřímně vám děkuji za pochopení a za to, že jste na nás počkali. -Cesta z Kyjeva k polským hranicím se nyní nazývá "cesta života" ..... -Nyní se vydávají na objížďku k hranicím přes Uman a Ivano-Frankivsk. -Moc děkuji za vaši nabídku, ale právě jsem dceru zapsala do školy v Praze a doufám, že se brzy vrátím domů, takže nechci měnit města a traumatizovat ji. -Těmto ženám není možné porozumět -Podle listu Jevropeiska Pravda o tom informuje Welt s odvoláním na zdroje z ukrajinských vládních kruhů. -Podle publikace byl návrh v sobotu zaslán německému ministerstvu hospodářství. -Náklady na 100 houfnic včetně výcvikové soupravy a náhradních dílů činí 1,7 miliardy eur. -Houfnice jsou nabízeny také ve variantě na APC Boxer za 1,2 miliardy eur. -Zatímco tanky v boji se musí k nepřátelským cílům přibližovat relativně blízko, Panzerhaubitze 2000 může střílet ze vzdálenosti přes 30 kilometrů. -Podle ukrajinských vládních zdrojů, které se odvolávají na návrh společnosti KMW, budou dodávky samohybných houfnic probíhat na kruhovém principu. -Bundeswehr dodá Kyjevu 100 svých houfnic co nejdříve a chybějící kusy doplní ve druhé fázi průmysl. -První nové houfnice mohou být dodány 30 měsíců po podpisu smlouvy, tj. před druhou polovinou roku 2024. 
-Plná dodávka bude dokončena až v roce 2027. -Jsem také rád, že jsem vás poznal -Kolik peněz mám na tomto čísle? -Nestor má zatím jen rýmu a Halyna má večer teplotu 37,7 a večer 38 °C. -Karina nám předevčírem přinesla léky, které užíváme. -Měl jsem se o tom zmínit dříve? -)) Koneckonců jsem pomáhala pečovat o děti v rodině. -Dobré odpoledne. Kde se v Jihlavě dají koupit formy na velikonoční pečivo (papírové nebo silikonové)? -Naše obec má jedno z nejlepších lyceí v regionu -Máme dnes další pokoj? Nebo jsou to jen schody odshora dolů jako obvykle? -Dítě má již týden vysokou teplotu a suchý kašel. Dnes si stěžuje na bolest ucha -Neomlouvejte se. Jen si zapište věty, kterým jste nerozuměli. -Chápu, že překladatel nepřekládá text vždy správně. -Říká, že se na pohovce cítí pohodlně, nebojte se. -Pokud něco, tak něco vymyslíme. -Děkujeme vám za váš zájem. -Víte, kdy tam můžete bydlet? -Také jsem chtěl vidět dort pro Karinu, pokud jsem měl čas se na něj někde podívat. -Pokud stále potřebujete pomoc, napište na adresu -Chtěl jsem, abyste vytiskli plakát do kuchyně, aby po sobě každý umyl sporák. -Večer ti napíšu -😄😍Jaký skvělý nápad. Hned se podívám na video. Bude to dobré, protože bude kreslit celá rodina. -To je důležité, pokud existuje taková tradice, potřebuji to vědět, abych se mohl připravit.) -Dobrý den. Máme dost peněz. Bylo by velmi příjemné poznat někoho z Ukrajiny. -Mimochodem, dnes jsi mi z nějakého důvodu nechtěl napsat... -Děkujeme za rychlou organizaci tak potřebných jazykových kurzů! -Přeji všem účastníkům hodně sil a vytrvalosti při učení češtiny. -Vítejte, u nás jste v bezpečí. -Dočasně po dobu hledání univerzity, pak samozřejmě budete potřebovat trvalý pobyt. -ale možná bude mít hostel. (hledá Prahu, Brno a další místa) -Nyní jsme nedaleko Prahy, máme auto a jsme připraveni zaplatit část ubytování. -město nebo vesnice, na tom nezáleží, měl by to být samostatný byt. -Typ nebo typy ekonomické činnosti -pak mi prosím napište, kdy přijedete 🙏🏻. -Pozdravy od děvčat! Přejeme vám vše nejlepší k svátkům! -napište mi o všem, co potřebujete, prosím. -V mateřské škole se platí školné a školní stravování. -Školné se nevztahuje na povinné předškolní vzdělávání. -Pokud máte s placením poplatků problémy, škola vám poradí, jak situaci vyřešit. -Dobrý den, účastním se kurzu. -Mohli byste mi však prosím říct, zda existuje dřívější čas? -Protože nebydlím v Praze, mohou nastat potíže s dopravou. -Pokud neexistují žádné možnosti, pak stále souhlasím. -Předem děkujeme. -Na některé jejich otázky jsem neznal odpovědi. -Včera jsem byl s kamarády na lezecké stěně a dneska jsem si dal pivo v centru Brna... bylo to docela fajn, takový příjemný relax. -Dobrý den, obdivuji vaši touhu naučit se ukrajinsky. -Dario, měla bys na svůj telefon obdržet heslo T-mobile, pošli mi ho prosím. Děkuji. -Ale chutnají přesně jako chipsy. -Strýčku Stane, zdá se, že když se podíváš na turisty, vidíš peněženky na nohou. -Děkuji, že jsi mě vzal do Říše divů, tati. -Neutrácejte je všechny na jednom místě. -Rozhlédněte se kolem sebe a uvidíte ty nejúžasnější věci. -Dávají tam dětem najíst, nebo je musíme vzít s sebou? -Když náplň vychladne a těsto je hotové, připravte koláče a smažte je na mírném ohni na rostlinném oleji. -A když si dnes nedoplním mobilní účet, nebude moje číslo zítra zablokováno? -Budeme moci změnit balíček zítra, pokud již vypršela platnost měsíčního tarifu? -Hodně slyším - válka se vede na 10 % území Ukrajiny. -Proč lidé utíkají? -Proč nežijí na území, kde není válka? -Zbláznil ses? -Žili jste ve válčící zemi? 
Když se ceny vyšplhaly o 100-200 %. -Když většina podniků nefunguje. -Když není kde vydělat peníze. -Kdy děti slyší airsoft? -Žili jste takhle? -Nemáte ponětí. -A chraň bůh, aby to tak nebylo. -Pokud to zjistíte, bude to noční můra. -Proto utíkáme. -Někteří z nás jsou zmatení, s telefony... které nám zůstaly z jiných životů... ano, někteří z nás se už měsíc nepřizpůsobili. -A za měsíc byste se mohli přizpůsobit z nuly na nulu. -Ano, stát nám poskytuje 5000 korun. -Dokázali byste z těchto peněz žít? -Mnoho nadací poskytuje potraviny. -Zkoušeli jste se svými dětmi jezdit metrem k nadacím? -Když v jednom sháníte jídlo, v druhém oblečení a na třetí nemáte čas. -A zítra musíme jít znovu... protože jídlo, které nám dali, vystačí jen na jeden den. -Chcete se postarat o sebe... a kam dáte své děti? -Jak žít bez znalosti jazyka? -Když ani nemůžete požádat o pomoc. -Ano, existují bezplatné kurzy... ale jaké si vybrat? -Hledáte práci, jídlo, oblečení nebo kurzy? -Za měsíc mám víc otázek než odpovědí. -Ale moje děti tam být nemohou. -Jen nás pochopte. -Pokud si chcete aplikaci stáhnout do telefonu, zkuste naskenovat QR kód. -Skvělé, tak mě manžel sveze, protože dnes má auto od stavební firmy, kde pracuje. -Dobře, ale až po 20.00, protože do té doby budu mít práci. -Nestihli jste dnes zavolat a změnit tarif mobilního operátora? -Jsem také velmi smutný -⚡️Rusko nesplatí svůj zahraniční dluh, uvádí CNN s odvoláním na ratingovou agenturu Standard & Poor's. -Existují nějaké bezplatné intenzivní kurzy ve večerních hodinách? Pokud jsou placené, vrátíme vám peníze později. -Zapomněla, co se tu chystala dělat. -Kde se má dům uklízet? -Destruktivní akce v podobě šíření falešných zpráv a dezinformací mají za cíl vyvolat paniku a rozvrat, což je citlivé z hlediska veřejné bezpečnosti. -V tento den mi opravdu chybí můj Slavík( -místo, den a čas budou upřesněny podle zájmu a možností. -Potřebuji se nechat vyšetřit kvůli potravinářskému průkazu. Mohu to udělat u vás? -Pomozte zvířatům! -Přihodil jsem 300 hřiven, což je momentálně moje maximum. -Pokud se transakce nezdaří, zkuste následující možnosti. -Údaje byly převzaty z oficiálních stránek https://facebook.com/UAnimals.official/ Můžete si je ověřit. -Již tři roky (před válkou) pracuji pro mezinárodní společnost, která vyváží obilí a olejniny. -Jsem tedy dobře obeznámen s tímto trhem na Ukrajině i ve světě. -Rozhodl jsem se založit vlákno o riziku potravinové krize ve světě v důsledku války na Ukrajině. -Pokud máte zájem, dejte nám prosím +. -Úspěchy ukrajinských zemědělců v sezóně 2021-2022. -Rekordní sklizeň 106 milionů tun! -Hlavním problémem (před válkou) byla logistika - konkrétně ochromení železnice, drahé palivo a nepřipravenost přístavů na odbavení tak velkého množství obilí. -Je třeba začít tím, že Všeukrajinská agrární rada oznámila neúspěch osevní sezóny kvůli: přímým vojenským akcím a pohonným hmotám (vysoká cena, nedostatečné zásoby - 50/250 tisíc tun a zemědělci nedostali od státu vratku DPH - tedy nedostatek $). -Narušení se nejvíce týká kukuřice, olejnin a jarních obilovin. -Modlíme se za růst ozimů. -Za předpokladu, že regiony jsou schopny zasít (s naftou a $): 100 % v západních regionech; 50 % v centrálních regionech; méně než 20 % ve zbytku Ukrajiny. -Deficit může být značný. -Regiony, v nichž probíhají aktivní boje, jsou z hlediska sklizně na předních místech. -Pšenice (v příloze, údaje za rok 2021). -100% pokrytí: 5272,86 tis. tun. Zóna 50 %: 5402,5 tis. tun. 
Zóna 20 %: ‼️Дефіцит: 44,5 % *vývoz: EU, Egypt, Turecko -Vy však řeknete, že existuje jarní a ozimá pšenice a že část z ní již byla zaseta. -ALE je třeba si uvědomit, že nyní je nutné přidávat do půdy fosfor, aby se vytvořil kořenový systém, ale na to není palivo (peníze) ani příležitost. -Proto budou čísla horší. -Obě skutečnosti se tedy vzájemně kompenzují. -Když už mluvíme o Číně a Spojených státech: 1. -Čína - již tři roky po sobě aktivně skupuje veškeré obilí na trhu a vytváří si značné rezervy. -USA - pracují především pro svůj vlastní trh a často samy potřebují dovoz NICHE. -Země uvedené na seznamu proto nebudou moci dodavatele nahradit. -Budeme péct mazance a učit se. -Dnes je spousta práce. -Vaše káva dorazí dnes) -Posuzujeme bezpečnostní rezervu finančních a bankovních systémů v případě pokusů o destabilizaci situace prostřednictvím informačních a kybernetických útoků. -Tento týden zažila Ukrajina bezprecedentní útoky DDoS. -A nikdo z našich rozumných spoluobčanů nepochybuje o tom, že jsou součástí ruské hybridní války. -Soudě podle četných oficiálních i neoficiálních prohlášení a komentářů si to myslí i mezinárodní partneři naší země. -Hlavním cílem zločinců je destabilizovat již tak složitou socioekonomickou situaci v zemi, vyvolat v Ukrajincích paniku a snížit jejich důvěru nejen ve stát, ale i v jeho instituce a služby, které poskytuje. -Proto byly tentokrát kromě webových stránek ministerstva obrany, ozbrojených sil a řady dalších státních úřadů napadeny také zdroje NBÚ, portál a aplikace Diia (kterou využívá 14 milionů Ukrajinců), státní banky Oschadbank a Privatbank, které poskytují online bankovní služby milionům našich krajanů, a komerční banky. -Kybernetickým útokům navíc předcházely pečlivě naplánované informační "útoky", jejichž cílem bylo zvýšit nedůvěru veřejnosti v domácí bankovní systém - a zřejmě i v celý finanční systém. -Zločinci a jejich zákazníci naštěstí nedosáhli svého hlavního cíle. -Musíte ale uznat, že jsme měli nervy napjaté a museli jsme vynaložit spoustu energie a peněz, abychom se postavili útokům a odstranili jejich následky. -Digitální rozměr hybridních útoků -Útok typu DoS (denial-of-service) je pokus o způsobení škody tím, že se cílový systém (například webová stránka nebo aplikace) stane pro koncové uživatele nedostupným. -Za tímto účelem útočníci obvykle generují obrovské množství paketů nebo požadavků, které systém nedokáže zpracovat a stává se nedostupným pro "normální" požadavky. -K provedení útoku DDoS (distributed denial-of-service) používají hackeři různé hacknuté a kontrolované zdroje (počítače, chytré telefony, tablety). -Toto vybavení může být rozptýleno po celém světě. -Je prakticky nemožné vystopovat "hlavní" počítače, ze kterých útočníci vydávají příkazy k zahájení nebo zastavení útoku. -Například v případě Oschadbank byl systém torpédován přibližně milionem požadavků za sekundu. -Privatbank jich má ještě více. -Na jedné straně nejsou útoky DDoS tak nebezpečné jako kybernetické útoky spočívající v instalaci škodlivého softwaru, kterým Ukrajina čelila 14. ledna. -Koneckonců nedochází ke krádeži dat, k zásahům do obsahu obsaženého ve zdrojích a samotném softwaru, informace nezmizí beze stopy a nejsou nahrazeny jinou, která je výhodná pro "kupce hudby", a nikdo si ze svých účtů nevybírá peníze. -Zároveň mohou být tyto útoky poměrně zdlouhavé. -Odborníci uvádějí následující statistiky: 33 % útoků DDoS trvá do jedné hodiny, 50 % přibližně jeden den, 15 % až jeden měsíc. 
-Je velmi obtížné se s tímto typem zásahů do systému vypořádat najednou. -To se ukázalo na situaci ukrajinských státních bank. -Vzpomínáte si? -První prohlášení o obnovení služeb se objevila večer 15. února (kdy útok začal). -A něco se dokonce trochu "zvlnilo" (autor to sám viděl v aplikaci Privatbank). -Po chvíli však systémy opět přestaly fungovat. -A o jejich plném obnovení (a to i s výhradami, že útoky pokračují a jsou řešeny promptně) začali mluvit až v polovině následujícího dne. -A ano. -Organizace útoků této úrovně je nákladná. -Zdůraznili to účastníci společného brífinku Rady národní bezpečnosti a obrany, Státní speciální komunikační služby, kybernetické policie, Služby bezpečnosti Ukrajiny, Národní banky a dalších služeb a orgánů, který se konal ve středu. -Jen v první fázi by "investice" měla dosáhnout několika milionů dolarů (i když někteří IT odborníci tvrdí, že je to mnohem méně). -A když útočíte několik týdnů nebo měsíc? -Nikdo to ani nepočítá. -Není překvapivé, že když se mluví o údajných iniciátorech útoků DDoS, obvykle se mluví o celých státech, které jednají prostřednictvím svých zpravodajských služeb. -V tomto případě samozřejmě mluvíme o našem hlavním nepříteli a nepříteli - Ruské federaci. -Navíc je hlavním příjemcem pokusů o destabilizaci situace na Ukrajině. -Musíme si přiznat, že kybernetické útoky se zdaleka nestaly jednorázovým nástrojem speciálních informačních operací agresorského státu. -Obecně odborníci tvrdí, že útoky DDoS jsou způsobeny osobní nevraživostí, snahou "pobavit se" (jako je tomu často v případě pseudotěžby) a nekalou konkurencí - snahou naštvat obchodní či akademické konkurenty nebo "kolegy" v určitém odvětví. -Nejčastějším důvodem je vydírání a vydírání. -Mnoho lidí si z tohoto mechanismu ovlivňování webových stránek a aplikací udělalo způsob vydělávání peněz. -A konečně další nebezpečný důvod je politický. -S tím jsme se tento týden potýkali. -"Samozřejmě nemůžeme dát jasnou odpověď na to, proč k určitému útoku dochází, protože o jeho "hlavním" účelu vědí pouze pachatelé nebo zákazníci," řekl Ukrinformu Dmytro Redchuk, CIO společnosti Volz (společnost poskytující internetové služby, hosting a ochranu před DDoS útoky). -Zároveň uvedl, že je možné hypoteticky identifikovat rizika a pokusit se zjistit, k jakým důsledkům by to mohlo vést, a tedy pojmenovat hypotetické příčiny: -1) Zjišťování obranných schopností (to je něco jako "průzkum", kdy se zjišťuje, kde jsou servery umístěny, jak se s nimi zachází, jak rychle odrazí útok atd.;) -2) hledání zranitelností (zatímco IT specialisté "zvedají" servery, mohou se pokusit získat přístup k databázím, nabourat se do systému, zavést virus, který bude fungovat později; to znamená, že když servery spadnou a my s nimi ztratíme spojení, vždy hrozí, že toto spojení najde a získá někdo jiný); -3) pokusy o zneužití dříve identifikovaných zranitelností. -"Nemohu se vyjadřovat k politickým motivům, jako je "odvádění pozornosti od jiných důležitých událostí" nebo "zvyšování paniky", protože nejsem politolog, takže by to nebylo korektní. -Vyloučuji také imageový nebo reputační důvod: uživatelé se chtějí přihlásit ke svému osobnímu účtu/používat službu a neudělají to, což vede ke zvýšené nedůvěře nebo negativním reakcím. -Takové věci mohou být použity jako "bonus", komplexně, ale nejsou konečným cílem útoku. -Těžko říct proč a z jakého důvodu, ale je pravděpodobné, že to může být komplex důvodů," řekl Redchuk. 
-Útoky DDoS: psychologický rozměr -Zástupci vládních agentur a zpravodajských služeb jsou ve svých hodnoceních přímočařejší: hlavním cílem kybernetických útočníků tentokrát nebylo ani tak způsobit přímé škody Oschadbank, Privatbank nebo jiným cílům útoků, jako spíše zasít paniku v ukrajinské společnosti. -To vysvětluje masivní šíření informací na sociálních sítích a šíření zpráv "přítel přítele viděl přítele, který to řekl příteli" prostřednictvím SMS v předvečer a v den kybernetických útoků. -Snažili se Ukrajince přesvědčit, že ten či onen bankovní ústav (a nemluvili jen o Oščadě a Privatě) má problémy s bankomaty a terminály, že nelze vybírat hotovost, vklady jsou zmrazené, fronty na pobočkách jsou delší než u Leninova mauzolea v sovětských dobách atd. -Cíl je "na povrchu". -Jedním z cílů je přímo vyburcovat situaci. -Druhou možností je podle odborníků vyprovokovat alespoň některé důvěřivé Ukrajince k tomu, aby se stali zbraní v rukou útočníků při DDoS útocích, což by digitálním službám a jejich správcům ještě více přitížilo. -Vždyť k milionům vteřinových útoků, za které organizátoři dostali miliony dolarů, by se přidaly tisíce "bezplatných", "bonusových" volání do aplikací a na webové stránky od našich spoluobčanů, kteří propadli panice. -"Nejnovější útok DDoS je útok, který je klasifikován jako informační a psychologický. -Nešlo o destruktivní akci, která by poškodila infrastrukturu, ale byla zaměřena výhradně na obyvatelstvo, aby demonstrovala nedostatečný přístup k elektronickým informačním zdrojům poskytovaným státními a finančními institucemi," uvedl na brífinku zástupce tajemníka Rady národní bezpečnosti a obrany Serhij Demedjuk. -Ujistil nás, že nedošlo k žádným ztrátám, škodám ani krádežím. -Energetické, finanční a další státní systémy fungují jako obvykle. -Úředník připomněl, že v předvečer kybernetického útoku začalo mnoho občanů, především příslušníků ozbrojených sil a policistů, dostávat SMS zprávy, že ve dnech 15.-16. února večer budou údajně uzavřeny bankomaty. -Jednalo se o koordinovaný a plánovaný útok, aby lidé začali "pomáhat" zločincům provádět pasivní útok na zdroje tím, že budou kontrolovat své účty. -Podle Demediuka není žádná země vůči takovým útokům imunní, ale je možné se jim vyhnout. -Za tímto účelem je nutné zavést společnou koordinovanou práci mezi poskytovateli všech forem vlastnictví a vládními agenturami, které zajišťují kybernetickou obranu. -Místopředseda vlády a ministr digitální transformace Mychajlo Fedorov zdůraznil, že všechny příslušné služby jsou neustále připraveny čelit pokusům o destabilizaci situace ovlivněním systému digitálních služeb na Ukrajině. -"Na organizaci takových útoků se vynakládají miliony dolarů. -Úspěšně jim však čelíme díky koordinované práci poskytovatelů, Bezpečnostní služby Ukrajiny, kybernetické policie, odborníků z ministerstva digitální transformace a podnikatelské sféry. -Uvědomujeme si odpovědnost, kterou v zemi neseme při budování digitálního státu. -Je pro nás velkou výzvou věnovat tolik času kybernetické bezpečnosti a zároveň rozvíjet naše služby. -Ale děláme to. -Jsme připraveni na jakýkoli scénář," ujistil Fedorov. -Podle úředníka je důležité, aby společnost Diia neukládala osobní údaje. -Je navržen tak, aby se všechny informace shromažďovaly v různých registrech. -Útoky na službu jsou neúčinné proto, že je velmi obtížné napadnout všechny registry současně, protože každý z nich má samostatný systém zabezpečení. -Totéž platí pro bankovní systém," uvedla NBÚ. 
-Abyste ho mohli "položit", tedy zablokovat systém bankomatů, musíte se "nabourat" do všech desítek bank, které na Ukrajině působí a mají vlastní sítě bankomatů. -To je prakticky nemožné, protože Národní banka i jednotlivé bankovní instituce mají vlastní víceúrovňový systém kybernetické obrany. -Finanční rozměr nepřátelských útoků -A co finanční část informačních útoků na Ukrajinu? -Mimochodem, začaly dlouho před únorovými útoky DDoS. -Každý asi slyšel zprávy o tom, že "Ukrajinci hromadně vybírají své vklady", a proto mají banky údajně problémy s likviditou. -Žádná z těchto fám nebyla oficiálně potvrzena. -Podle ekonomů došlo v lednu skutečně k mírnému odlivu prostředků z bankovních vkladů. -Něco takového se však děje na začátku každého roku - ne všichni vkladatelé obvykle obnovují své termínované vklady. -V lednu navíc pokračovaly výkyvy na měnovém trhu a svou roli sehrála i psychologie. -Někteří naši krajané si skutečně mysleli, že nákupem dolarů nebo eur mohou vydělat více než prodloužením smlouvy o bankovním vkladu. -Někteří lidé se domnívají, že tváří v tvář nepředvídatelným ruským akcím je lepší mít hotovost na ruce než peníze na účtu... -Celkově však byl odliv vkladů zanedbatelný na pozadí trvalého růstu vkladů domácností v průběhu celého loňského roku. -Podle NBÚ se ve čtvrtém čtvrtletí roku 2021 zvýšil objem prostředků získaných bankami, zatímco náklady na financování mírně vzrostly v důsledku zvýšení nákladů na podnikové úvěry. -Podle průzkumu provedeného NBÚ samy banky hodnotí dynamiku financování v období říjen-prosinec pozitivně: 72 % respondentů zvýšilo objem získaných prostředků. -Hlavním faktorem zvýšení objemu výpůjček byla vyšší nabídka ze strany vkladatelů. -Závěry: Úspory Ukrajinců na bankovních vkladových účtech trvale rostou. -Odhady růstu úvěrů v prvním čtvrtletí roku 2022 jsou zdrženlivější: banky očekávají příliv prostředků především od firemních klientů, zatímco u domácností nepředpokládají zvýšení počtu. -Ale. -A nikdo z bankéřů nemluví o vyhlídkách (nebo alespoň o prvních příznacích) hromadného vypovídání vkladových smluv. -Mezitím pokračují informační útoky na ukrajinský bankovní systém - přesněji řečeno na Ukrajince prostřednictvím šíření falešných zpráv o situaci v našem bankovním systému. -Po vyřešení rozsáhlých útoků DDoS se na sociálních sítích objevila další várka "panických" zpráv. -Není to nic nového. -Opět jde o záměr údajně omezit množství hotovosti, kterou si Ukrajinci mohou vybrat z bankomatů. -"Telegramovými kanály se šíří informace o tom, že největší ukrajinské banky spolu s vládou testují možnost omezení výběrů z karet a bankovních účtů občanů a že tato akce je údajně prováděna v rámci přípravy na odražení ozbrojené agrese. -Jde buď o provokaci, nebo o obyčejný podvod," uvedla ve čtvrtek Národní asociace bank Ukrajiny. -Ujistili nás také, že banky neplánují žádné omezení výběru hotovosti. -Oleg Gorokhovsky, spoluzakladatel Monobank, ujistil, že tyto zvěsti jsou mírně řečeno "přehnané". -Jde tedy o naši psychickou odolnost, nikoli o naši finanční zranitelnost, pánové. -Nepropadejte panice a autoři a pachatelé hybridních útoků se určitě chytí. -Kdy sníh voní po mandarinkách? Pravděpodobně na Nový rok a o Vánocích. -V té době chcete věřit na zázraky a čekáte na mikulášské dárky a tajně doufáte, že vám splní vaše nejtajnější přání. -Ale co když už nejste dítě? Nemáte už v životě žádné kouzlo prázdnin? -Už ve vašem životě není žádné kouzlo prázdnin? 
-"Online dívka na turné" je druhý román z populární trilogie britské módní blogerky, youtuberky a spisovatelky Zoe Zagg. -Historie světa v zajímavostech seřazených chronologicky je skvělou příležitostí k cestování časem. -V této knize se minulost přibližuje a přítomnost se stává jasnější. -Dozvíte se o vzniku a zániku mocných říší, o vládcích a vůdcích, kteří stáli v čele celých národů, a o tom, jak drobné detaily rozhodovaly o epochálních událostech a jak prosté náhody měnily běh dějin. -Akční cena platí pouze v internetovém knihkupectví. -Slevy neplatí v kanceláři na ulici Basseynaya - pouze pokud si objednáte vyzvednutí z našeho online knihkupectví. -Zda se jim to podaří - o Vánocích se přece zázraky dějí - si přečtěte v tomto lehkém romantickém příběhu Catherine Ryderové. -Andrej Vasiljev v minulosti pracoval jako moderátor zpráv a připravoval reportáže z horkých míst. -Dnes pracuje jako operátor korekce trajektorie ve středisku řízení letů Národního úřadu pro letectví a vesmír. -Na poslední dvě videa se podívám, až mi bude pořádně fungovat internet. -Mohu vám zavolat? -Nebo můžete napsat kontakt na svou ženu? -Grigorij začal kašlat, chci se zeptat, co si můžu koupit v lékárně. -Chci požádat o radu. -Chápu vás, takže nechceme, aby si lidé mysleli, že jsme podvodníci. -Peníze, které získáte, je lepší použít pro lidi v nouzi. -Můžeme se sejít na hodinu nebo dvě a pak jdu domů, protože s sebou nemám potřebné věci. -Jedná se o doslovný překlad tohoto dokumentu do češtiny. -Dole jsem přidal vysvětlení, co je to inkaso. -Účet už mám. Založil jsem si ho v jiné bance. Musel jsem mít účet předtím, než jsem začal pracovat. -Дякую🏵️🏵️🏵️сьогодні Napíšu své údaje pro práci -Jen jsem se rozhodl jet a ty jsi to zařídil. -Jak se cítíte? -Martine, jdeme spát. -Děkujeme za krásný a zábavný den. -Se vším jsme byli velmi spokojeni. -Cítíme se jako ve vlastní rodině. -Děti celý večer vyprávěly tatínkovi a všem prarodičům, jak to s vámi bylo fajn. -Ani jsem se nedotkl dveří a už to zvoní( -Bylo to skvělé, mám peníze na dva měsíce nájmu, v Olomouci na okrese nezáleží, byla bych moc vděčná. -Každý má šanci. -Hlavní je, aby láska byla skutečná a vzájemná. -Nevím, jak v Evropě, ale u nás se lidé berou z různých důvodů, většinou ze zištných, obchodních důvodů. -A já s tím mám problém, pokud člověka nemiluji, nic nefunguje. -Podívám se na to a řeknu jí. -Jsou na stanici poblíž pokoje matky a dítěte. -Říká, že už neví, kam jít. -Diana mi neodpověděla a ty -Ano! Řídí ho jako zkušený řidič! Ale nevím, jestli je to lepší. Mám strach o stěny bytu. -Včera ti máma řekla, že máš bolesti? -To znamená, že tento byt je již pronajatý a vy nemáte jinou podobnou možnost. -Očekáváme od vás informace o nadcházejícím setkání ve středu nebo ve čtvrtek, stejně jako o čase a přesné adrese. -Pokud se můžeme sejít dnes, domluvme se na čase a místě. -Pokud jste postel ještě nedali pryč, ráda bych si ji vzala. -Ale ve středu budu v Praze. -Pokud souhlasíte, zavolám vám ve středu. -Vážený pane řediteli. Jsem Vám velmi vděčná za pomoc při zápisu mého dítěte do školy. -Velmi se omlouvám, ale zítra se s vámi nebudu moci setkat kvůli svému pracovnímu vytížení. -Děkuji vám za váš čas a ještě jednou se omlouvám. -Hádal jsem o svém otci -Myslím, že hledají cukráře, který umí anglicky. -Až dorazíte, zavolejte mi. Zvonek u dveří nefunguje -a teď mám chuť na něco sladkého -takže si jdu uvařit čaj a sníst nějaké sladkosti. -Představte si, že Rusové znásilňují ženy a jejich děti se na to musí dívat. 
-Pak mohou také znásilňovat a mrzačit tělo a děti to všechno vidí. -Náklady na opravu hradí majitel, pan Dedina. Očekává se, že se dostaví v úterý. -V úterý zůstaňte doma a počkejte na opraváře. -Chápu to správně, že jsem za tři dny vyčerpal celý měsíční objem GB? -Dobře, určitě se ho zeptám. -Jakmile se zítra večer ubytujeme, zavolám mu v úterý. -Chci poděkovat všem, kteří nám pomohli, protože to byla opravdu poslední naděje. -Práce se nebojím, protože jsem také žil na venkově, takže mohu a rád pracuji fyzicky. -"Na dvoře jsem měl stroj, který vypadal jako krokodýl a střílel kroupy. -Drželi mě v domě, pak mě odvedli do sklepa a tam bylo hodně lidí, asi 400, nedalo se tam dýchat. -O práci zatím nevím. Ráda bych, samozřejmě pokud se mi podaří dostat děti do školky a školy. -Jen jsem se ptal, jestli jste o tom něco neslyšeli. -Ano, můžete to udělat. -Dnes jsem se dočetl, že se musí přihlásit sami. -Musíme jim o tom říct a dělat to s nimi. -Ráno se zeptám manžela, jestli to zítra stihne. -Byl jsem v místnosti se spoustou lidí. Byl jsem energeticky unavený. -nelze odpovědět bez zaslání životopisu -Říká se, že musíte vyplnit formulář "malování" v češtině... -Poslední čtyři roky jsem k sobě nikoho nepustil, četl jsem spoustu knih o filozofii a pochopení sebe sama. -Je těžké věřit, když vám někdo ubližuje. -Ale musíš to zkusit znovu, každý si projde něčím, co ho navždy změní, takový je život. -Zvládnete to tuto sobotu? -Jistě, ale já vás pletu. -A možná tě budu opravdu následovat. Potřebuji tě. Bez tebe nemůžu pokračovat. -To už jsme udělali, ale řekli jsme, že ti, kteří se zaregistrovali v dubnu, dostanou peníze až na konci měsíce. -Ano, řeknu jim to, myslím, že budou doma. -Tak to udělám zítra -Moje máma by si to nedovolila, miluje dlouhé vlasy. -Jak se bude počítat den, kdy jsem nebyl v práci, jako propustka? -Mohli byste mi prosím sdělit přesnou adresu, kam se mám zítra dostavit na úklid, jaký je kód ke klíčům a v kolik hodin se zítra hosté odhlašují a noví hosté ubytovávají? -Dobrý večer, nezapomněli jsme napsat na ty kurzy češtiny? -První jsme již propásli😉🙄 -Je mi to trochu líto, protože opravdu chápu, jak moc to potřebuji. -Dobrý den, omlouvám se, že jsem neměl internet a nemohl jsem to poslat rychleji. Děkuji. -V případě potřeby si můžete s někým promluvit česky. -Protože bydlím na koleji, budou tu lidé do dvanácti a pak půjdou do práce. -Ale dnes bylo naše město opět bombardováno a já se bojím o svého syna. -Chci odsud co nejdříve zmizet. -To je pravda, ale my se v oblasti ještě neorientujeme a nevíme, kde co je. Dnes jsme hodinu hledali supermarket a vrátili se pěšky. -Zpráva zaměstnanců: Rusko připravuje provokaci v Podněstří, aby obvinilo Ukrajinu -Ruská armáda se může uchýlit k provokacím v moldavském Podněstří a obvinit Ukrajinu z "agrese proti sousednímu státu". -Doslova: "Není vyloučeno, že ozbrojené síly Ruské federace budou provádět provokativní akce na území Podněsterské oblasti Moldavské republiky s cílem obvinit Ukrajinu z agrese proti sousednímu státu." -Podrobnosti: Nepřítel pokračuje v budování útočné skupiny vojsk pro operace na Slobožanském směru. -Je pravděpodobné, že v nadcházejících dnech se okupanti pokusí obnovit svou ofenzivu. -Kromě toho nepřítel pokračuje ve výcviku a vysílání personálu, zbraní a vybavení k účasti na bojových akcích na území Ukrajiny. -V místě stálého nasazení 60. samostatné motostřelecké brigády (Monastyryšče) 5. kombinované armády Východního vojenského okruhu probíhá výcvik zbraní a vojenské techniky. 
-Tyto zbraně budou pravděpodobně přesunuty na dočasně okupované území Doněcké oblasti. -Kromě toho se pro obnovu ztrát personálu praporní taktické skupiny 36. samostatné motostřelecké brigády (Borzya, Zabajkalský kraj) 29. kombinované armády Východního vojenského okruhu rekrutuje vojenský personál z uvedené brigády. -Nepřítel se potýká se zvláštním problémem při náboru řidičů a mechaniků. -Odjezd vybraných pracovníků z jejich stálého působiště je naplánován na druhou polovinu dubna tohoto roku. -Je pravděpodobné, že nepřítel bude i nadále útočit na dopravní infrastrukturu na území Ukrajiny s cílem narušit dodávky zboží do oblastí bojů, aby ji zničil nebo vyřadil z provozu. -Některé jednotky Ozbrojených sil Běloruské republiky pokračují v plnění úkolů k posílení ochrany ukrajinsko-běloruské hranice v Brestské a Gomelské oblasti. -Ve Slobožanském sektoru některé jednotky 6. kombinované armády a pobřežní vojska Severní flotily nadále částečně blokují město Charkov a pokračuje dělostřelecké ostřelování některých oblastí města. -V Izjumském sektoru pokračuje letecký průzkum s cílem identifikovat pozice ukrajinských ozbrojených sil. -K tomuto účelu nepřítel používá bezpilotní letouny Orlan-10. -Nepřítel se pokusil o ofenzívu ve směru Dovhenke a Dmytrivka s až dvěma praporními taktickými skupinami, ale neuspěl a ustoupil na dříve obsazené pozice. -V doněckém sektoru nepřítel nadále soustředí své hlavní úsilí na ovládnutí Popasny, Rubizne, Nyzne a Novobachmutivky a na získání plné kontroly nad městem Mariupol. -Nepřítel se pokusil provést útok v oblasti Zolote, ale neuspěl. -Ve městě Mariupol pokračují okupanti v útocích dělostřelectvem a letadly na závod Azovstal a námořní přístav. -Nepřítel s pomocí samostatných jednotek ostřeloval dělostřelectvem pozice ukrajinských vojsk u osad Vysokopillia, Trudolyubivka a Maryanske. -Ukrajinští obránci během uplynulého dne odrazili čtyři nepřátelské útoky v Doněcké a Luhanské oblasti a zničili pět tanků, osm obrněných vozidel, šest vozidel a osm nepřátelských dělostřeleckých systémů. -Starosta města Bucha Anatolij Fedoruk 1. dubna oznámil dobrou zprávu, že ukrajinská armáda 31. března osvobodila město od ruských útočníků. -Druhý den byli okupanti vyhnáni z celé kyjevské oblasti. -Radost, kterou měli Ukrajinci v tu chvíli pociťovat, však zastínila hrůza a nenávist, když vyšlo najevo, že jen v Buči Rusové zastřelili nejméně 280 civilistů. -Byli zabiti přímo na ulici, někteří z nich měli svázané ruce a byli střeleni do týla, některé oběti byly nezletilé. -Starostka obce Motyžyn Olha Suhenko, její manžel Ihor a syn Oleksandr, kteří byli uneseni 23. března, byli nalezeni mrtví a se známkami mučení. -Těla Olhy a Oleksandra ležela v hromadném hrobě a Ihora v kanále. -Těla několika nahých žen zabalených do dek byla nalezena u silnice 20 km od Kyjeva. -Rusové se je pokusili spálit. -Můžeme se tam zítra vypravit a zjistit podmínky? -Přijdete na radnici a dole na recepci řeknete, že hledáte paní Krzkowskou, pokud vám neukáže cestu a číslo dveří do její kanceláře. -Na jeho dveřích bude napsáno, že je tam vedoucí oddělení. -Pokud se nemůžete dohodnout, zavolejte Dominicovi, ale věřím, že vše půjde hladce. -Pokud paní Krzkowská ještě nedorazila, počkejte na ni, říkala, že má předtím schůzku, takže to může trvat trochu déle, ale ví o vás, objednala vás na tuhle devátou hodinu právě proto, abyste se dostal k lékaři. -Chtěla jsem vám říct, že se mi podařilo najít bydlení a mám z toho velkou radost ☺ teď mám kde bydlet a kde být se svým dítětem. 
-Pomůže nám Viktor zítra s ledničkou? -Minulý týden Nejvyšší rada schválila návrh zákona 7176 s názvem "O monitorování potenciálních hrozeb pro národní bezpečnost Ukrajiny v ekonomické sféře". -Je překvapivé, že ve skutečnosti jedinou potenciální hrozbou pro národní bezpečnost Ukrajiny v ekonomické oblasti bylo... místo, kde zasedají dozorčí rady státních podniků a státních bank. -Iniciátoři návrhu zákona se domnívají, že rozhodování dozorčí rady státního podniku ve fiktivním Kyjevě může nějakým způsobem vyřešit problémy národní bezpečnosti Ukrajiny. -Víme o velkém množství nových nebezpečí pro státní podniky, ale nevíme o žádném podniku, který by musel fyzicky pořádat zasedání dozorčí rady v místě svého sídla. -Potenciální hrozba takového zákona je zřejmá - je to skvělý základ pro "zbourání" všech výdobytků, kterých bylo dosaženo při obtížné reformě státních podniků po revoluci důstojnosti, včetně nezávislých dozorčích rad, a následné zavedení ručního řízení. -Tak, jak tomu bylo před rokem 2015. -Proč státní podniky potřebují dozorčí rady a kdo by v nich měl být? -Navíc pokud se pod vlasteneckými hesly rozjede jeden škodlivý zákon, zítra se mohou rozjet i další - například systém veřejných zakázek prostřednictvím ProZorra nebo nezávislost NBÚ. -Mnohé lze přičíst válce. -Mimochodem, tato iniciativa se objevila ještě před válkou a zákonem 7176. -Proto bylo toto ustanovení dne 22. února zařazeno do srovnávací tabulky návrhu zákona 5397. -Obecně lze říci, že snaha o zrušení dozorčích rad tím či oním způsobem koluje mezi některými poslanci již delší dobu. -Ne kvůli kritice jako takové, ale abychom ukázali možné důsledky návrhu zákona, budeme analyzovat argumenty, které vysvětlovaly potřebu jeho přijetí. -A poté předložíme naše návrhy, jak vyřešit otázku řízení státních podniků během války. -První náměstek ministra hospodářství Denis Kudin vysvětlil potřebu této inovace rizikem ztráty internetu a dalších komunikačních prostředků, což by znemožnilo práci dozorčích rad na dálku. -Jinými slovy, hrozí, že dozorčí rada nebude schopna přijímat potřebná rozhodnutí, což může ochromit činnost společnosti. -Souhlasíme s tím, že takové riziko existuje, ale lze mu zabránit přesunem členů dozorčí rady na Ukrajinu? -Představme si poněkud cynickou situaci, kdy nezávislí členové dozorčích rad (kteří tvoří většinu celého složení) upřednostní svou osobní bezpečnost před zájmy společnosti a odmítnou přijet na Ukrajinu. -V takovém případě mohou být podle návrhu zákona propuštěni. -A pak to s největší pravděpodobností povede ke ztrátě kvóra, které je nutné k tomu, aby dozorčí rada mohla přijímat jakákoli rozhodnutí. -Nové nezávislé členy dozorčí rady však nebude možné jmenovat rychle, protože zákon vyžaduje výběrové řízení, které trvá v průměru 3 až 4 měsíce. -Řešení navrhované návrhem zákona 7176 tedy pravděpodobně nezabrání tomu, aby dozorčí rada ztratila možnost rozhodovat, ale spíše k tomu povede! -Poslanec Dmytro Natalukha, autor návrhu zákona, uvádí další argumenty na jeho podporu. -Zejména uvádí: "mnoho podniků, včetně těch v obranném průmyslu, se nyní potřebuje přemístit", a proto "si lze jen těžko představit, že by někdo mohl například diskutovat o adresách a dalších citlivých informacích prostřednictvím služby Zoom z Vídně". -Za prvé, na Ukrajině téměř neexistují státní obranné podniky, které by měly dozorčí rady. -Dozorčí radu má pouze Ukroboronprom, který fakticky řídí téměř všechny státní obranné podniky. 
-Ale i v Ukroboronpromu má dozorčí rada podle zákona poměrně omezené pravomoci a všichni její členové nevykonávají své povinnosti bezplatně. -Veškerá moc v Ukroboronpromu patří generálnímu řediteli, který mimochodem rozhoduje o potřebě přemístit konkrétní obranný podnik, který je součástí koncernu. -Tento argument navíc neobstojí, protože návrh zákona č. 7176 nezavádí žádné změny v právních předpisech upravujících činnost státních obranných podniků. -Pan Dmytro na své facebookové stránce uvádí také argumenty, které lze označit za "emocionální". -Takové argumenty obvykle nemají nic společného se zvyšováním efektivity státních podniků a státních bank, ale pokusme se některé z nich analyzovat: "Respektovaní zahraniční odborníci opouštějí Ukrajinu při prvním slově "válka" v roce 2021, ale zůstávají členy dozorčích rad ve státních podnicích Ukrajiny s plným platem několika set tisíc hřiven, zatímco Ukrajina sama válkou skutečně trpí." -Zaprvé jde o manipulaci - většina těchto členů dozorčí rady na Ukrajině nikdy nežila a jezdí sem jen pravidelně. -Proto je mírně řečeno nespravedlivé tvrdit, že "opouštějí Ukrajinu". -Za druhé, v důsledku pandemie koronaviru se zahraniční i ukrajinští členové představenstva již dlouho uchylují k praxi pořádání online zasedání. -Tento nástroj je již dlouho rozšířen v celosvětové praxi, a to ve zcela odlišných oblastech: podnikání, vzdělávání, lékařství atd. -Je také třeba mít na paměti, že funkce člena dozorčí rady není prací na plný úvazek. -Tito lidé mají zpravidla jiné zaměstnání a elektronické komunikační nástroje jim v tom pomáhají. -Za třetí, všechna tato prohlášení jsou vykládána jako nároky výhradně vůči cizincům. -Zajímalo by mě, proč pan Dmytro nevznáší stejné nároky na členy dozorčích rad, kteří jsou Ukrajinci. -Koneckonců ani oni se nenacházejí v místě sídla svých společností. -Mimochodem, nejde o první populistický pokus poslanců odstranit cizince z dozorčích rad a válka je jen novou vhodnou příležitostí. -Alespoň jsme se teď viděli. Rozhovor byl spíš korespondenční, ale dopadl dobře. -Můj kadeřník hodně podražil, má vlastní salon v Brně. -Chodím k němu jednou za rok, někdy jednou za dva roky. -Ale hledám nový, protože je to vysoká cena. -Snad paní Olena získá dobrý za dobrou cenu. -Pokud to nepotřebujete, moji přátelé mají kadeřnictví v Romanivce, a to může být za dobrou cenu. -A pokud to myslím vážně -Ale když přejdu na odkaz, nemohu změnit heslo, protože se mi zobrazí tato chyba: -Můžete si rezervovat televizor se set-top boxem a dálkovým ovládáním, hrnce, polštáře, židle, pánve. -Poukaz používám místo karty. -Ano, pomoc s dopravou je velmi potřebná, zpočátku, než se seznámíme s oblastí a budeme se moci pohybovat sami veřejnou dopravou. -Do 21 let jsem studoval na právnické akademii v Charkově a pak jsem šel pracovat na prokuraturu. -Píše to, že mám tarif "Společně pro dva bez internetu", a já jsem požádal o zapnutí internetu. -Nerozumím podmínkám tarifu -Promiňte, zapomněla jsem se zeptat, kolik bude školka měsíčně stát a zda bude fungovat i v létě? -Ano, kopie pasů si můžete vzít, kdykoli se vám to hodí :) -Většina mých přátel je zprávami o brutálním týrání civilistů severně od Kyjeva hluboce otřesena. -Fotografie mučených a zastřelených lidí zahanbily celý civilizovaný svět. -Desítky lidí se navzájem ptají, jak se to mohlo stát v 21. století! Jak?! -Ale vůbec nic mě nepřekvapilo. Bylo to přesně to, na co jsem smutně čekal... -Druhý den války jsem z Kyjeva odvezl své děti a staré rodiče. 
-Pak se oženil se svými kolegyněmi s dětmi a teprve poté se vrátil do hlavního města pracovat. -Od roku 2014 soustavně pomáhám ukrajinským ozbrojeným silám a ani na okamžik jsem nepochyboval, že v případě obsazení kyjevských předměstí nebude moje rodina ušetřena. -Protože jsem si byl a stále jsem naprosto jistý, že podstata prohnilého komunisticko-komunistického režimu se od roku 1918 nezměnila. -Bez teroru prostě nemůže existovat, je na něm postavena. -V roce 1937 byl jeden z mých pradědečků zastřelen ve vězení v Žytomyru. -Před několika lety mi žytomyrská SBU dovolila pořídit si kopii spisu o popravě mého pradědečka: celý spis má přes sto stran, ale rozsudek smrti je velmi jednoduchý, a proto o to děsivější. -Obviněný byl negramotný a obvinění popíral. -Byl to etnický Polák, katolík, měl pět dětí, byl svobodný člověk, chudý "jednodvorní" šlechtic, nikdy ne poddaný. -K odsouzení oráče a krmiče malých dětí k smrti stačilo, že žili na statku a měli několik krav. -Příbuzní o místě pohřbu ani nevěděli. -Babička vyprávěla, že z vesnice bylo odvedeno a zabito téměř sto lidí, kromě dvou, kteří souhlasili s falešným svědectvím. -V trestním spisu je uvedeno jméno udavače Shariy. -Náš bývalý předseda Nejvyšší rady pochází ze sousední vesnice. -Věřím na genetiku a nepřekvapilo by mě, kdyby to byl jeho příbuzný, kdo byl informátorem KGB. -Jablko nepadá daleko od stromu, jak víte... -Popravčí četa, která odsoudila rolníky k smrti, byli etničtí Rusové. -Při analýze případu svého pradědečka se začal zajímat o různé informační zdroje. -V Rusku existují vynikající (bez sarkasmu) informační zdroje, například Besmertnyj barak, které dobře připomínají události těch let. -A tak v dnešním Rusku potomci NKVDistů našli způsob, jak potrestat a vyhladit téměř každého, kdo vážně vyšetřoval zločiny jejich předků před 70-80 lety. -Lidé, kteří našli masové hroby obětí stalinských represí a veřejně napsali jména vrahů, byli nejprve zdiskreditováni a poté zničeni systémem. -V jeho počítači byly údajně nalezeny soubory s dětskou pornografií, za což byl uvězněn a rychle tam zemřel. -KGB stále zabíjí ty, kteří se snaží zjistit pravdu o zločinech minulého století. -Proč se tedy nyní divíte jejich jednání? -Tam se koncentrují ty nejhorší vlastnosti lidské povahy. -Otrokářství a císařský šovinismus 16.-19. století byly překryty tisíciletou tradicí Hordy. -A pak provedli brutální genetický experiment na tom, co získali, a během několika vln rudého teroru vyřadili inteligenci. -Podívejte se na polofrancouzský film Chekist z roku 1992 a uvidíte, jak to bylo. -Co budete dělat teď? -Dobrý den, Leno, prosím, řekněte mi, kde je poblíž papírnictví. -UAnimals poskytla finanční pomoc charkovské zoo -Protože naším hlavním cílem je záchrana všech zvířat bez výjimky, snaží se UAnimals pomáhat všem institucím, kde jsou chována. -Dnes zoo oznámila finanční problémy, a tak tým UAnimals poslal 100 000 UAH na udržení zvířat v zoo. -V této době je důležité spojit síly a zachránit co nejvíce životů. -Protože každý život je důležitý -Vždycky jsem chtěla navštívit Prahu -Nákup si můžete nechat v autě, my pak vyzvedneme děti a odvezeme vaše věci. -V kolik hodin můžeme prádlo předat? -Proč jsi mladý a máš hodně dívek? -Dnes jsem si nechal opravit telefon. -A myslím, že jsem dnes venku také zmrzla. -Vzal jsem si nějaký lék a čaj, cítím se hůř, ale ne kriticky. -Taky mě to mrzí, ale je lepší se sejít, až budu zdravá. -Až budete mít čas, pošlete mi nějakou hudbu, která se vám líbí. 
-Zajímá mě, jaká jsou vaše hudební překonání 😊 -Viděla jsem tvůj příspěvek o projektu Music Project. -Jsem sbormistrem na Drahomanově národní pedagogické univerzitě. -Těším se na spolupráci s vámi. -Proto jsem četl Bibli, Korán a další věci, abych našel odpovědi na otázky, které jsem hledal. -V té době jsem ztratil značnou část sluchu. -Najdete ji někde poblíž kočárku vašeho malého bratra. -Bruslař se v den svých narozenin pokusil o sebevraždu na Karlově mostě -Nevěděl jsem, co mám dělat. -Musíme pravidelně chodit na ambasádu, čekám na vyřízení dokumentů, jdu a ptám se. -Omlouváme se, ale máme trochu zpoždění. -Čtete rádi knihy? -Můj bratr mi přijde pomoci s dětmi, abych mohla jít do práce. -Abych byl upřímný, vždycky jsem byl první, kdo člověka opustil, ale pak se mi omluvil a snažil se vztah obnovit. -Ale už jsem to nepotřeboval, když jsem se v člověku navždy zklamal. -Jdu si odpočinout do postele. Nabrat sílu a energii. Děkuji za dnešní dárek. Dobrou noc. -Pro účast v bazénu potřebujete certifikát -Vždy se rád seznámím s novými lidmi) -Informovala o tom agentura Reuters. -"Nyní zužujeme zaměření, aby bylo v pokynech jasně uvedeno, že by to nikdy nemělo být vykládáno jako schvalování násilí vůči Rusům obecně. -Nepřipouštíme ani výzvy k atentátu na hlavu státu... -Abychom odstranili jakékoli nejasnosti ohledně našeho postoje, dále zužujeme naše pokyny, aby bylo jasné, že na našich platformách nepovolujeme výzvy k zavraždění hlavy státu," uvedl Nick Clegg, prezident společnosti Meta Global Affairs. -Společnost prohlásila, že nepodporuje rusofobii, genocidu a etnické čistky a že má také negativní postoj k diskriminaci." -Jsme stále tři (já a moje děti ve věku 3 a 17 let) a možná se k nám později nastěhuje i můj manžel, protože stále bydlí na ubytovně. -Ale toto je můj názor -Jak zjistit, zda jsou v kruhu volná místa, nebo ne? -Přijdeš zítra s videem? -Snila jsem o tom, že budu učitelkou -k šití by se měly používat pouze tyto látky. -Zašli jsme osobně za vedoucí obce v Jivově, viděla mě a že jsem těhotná, ale řekla, že není žádné ubytování. -Nechci s tím mít žádné problémy, už se rádi přestěhujeme a necháme je, ať se mezi sebou poperou. -Dobrý den, Viktor schválil tento dům, ale neví, jak pomoci tomuto pánovi s dokumenty, aby obdržel platbu od banky a řekl, že banka nemá nábytek tam ještě -Je to velmi zajímavá tradice, děláme to na Vánoce. -Dobrý večer, to mě nenapadlo :). -Dobře, počkám, jdu spát, dobrou noc 😘. -Chtěla jsem se vás také zeptat, jestli nevíte, kam děti odešly, protože Galia je nemůže najít. -4. Jaké další aktivity kromě školy by vašemu dítěti pomohly přizpůsobit se českému prostředí? -Omlouvám se. Od středy opravdu nemám žádné zmeškané hovory. Omlouvám se, můžeme si domluvit jinou schůzku? -Rád bych se také naučil česky -Děkujeme, máme všechno, koupili jsme všechno ze seznamu. -Jmenuji se Ciara, jsme z Ukrajiny, jsem tu se svou dcerou, které je 7 let, a její matkou. -Hledám práci... ale mám problém, po operaci se nemůžu dlouho udržet na nohou, takže je pro mě velmi těžké najít něco vhodného. -Dočasně bydlím na Praze 9.Jsem připravena se přestěhovat kamkoliv, pokud bude práce.Budu vděčná za radu či návrh) -Dobrý večer Stephanie, Agatha se chce zúčastnit hry, zmeškala hru 😉. -Doufám, že jsem to spočítal správně. -Skvělé, napíšu, až se dostanu ven. -Chtěla jsem se zeptat, jestli nevíte, odkud pochází mixér v naší kuchyni. -5. Jaké další vzdělání, kromě kurzů českého jazyka, byste chtěl/a mít? -Jak často uklízíte? 
-Uklízím každý týden -To znamená, že Anna potřebuje tuto hodinu odpracovat v jiný den. -Který den je pro ni lepší přijít dřív? (Možná v který den je více práce?) -Zítra může být na cestách špatné spojení. -Káťa se ptala na internet, můžu jí dát heslo? -Jen mi pomáhal, protože sám bych to nezvládl. -Kam s včerejšími spotřebiči -Banka požádá o úvěr v pondělí. -Když už jsme u toho, pojďme si promluvit o návratnosti. -Včera jsi řekl, že jsi připraven přijmout mě jako svého partnera, a jako tvůj partner nechci žádné peníze zpět. -Co je moje, je i tvoje. -A vy to dobře víte. -Šla jsem na stránky školy a teď se pokusím objednat Aničce oběd na zítra. -Slíbili jsme ti, že až půjdeš do školy, přineseme ti překvapení. -Chápu... mám ti to napsat sem, nebo ti to zítra přinést? -Byl to velmi bohatý muž, ale líbila jsem se mu, protože jsem měla vlastní pohled na život. Chodila jsem s ním, pak jsem ho opustila. -Byl to velmi bohatý muž. Po nějaké době mi zavolal a řekl mi, že podstoupil operaci srdce. -Požádal mě, abych si s ním promluvil a podpořil ho, a mně ho bylo líto. -Byl to velmi bohatý muž a myslel si, že jsem možná jeho hračka a že možná potřebuji jeho peníze. -A pak mi učinil nabídku. -Děti jsou smyslem našeho života. Nikdy se jich nenabažíte. Dobrou noc -Už pro sebe nemohu najít místo. -Tam se ale nedostanete, protože železnice se rekonstruuje a vjezd je uzavřen. -Až se dostanete k mostu, kde probíhá rekonstrukce, řekněte mi, že jste tam, a já za vámi přijdu. -Nabídl jsi mi, že pojedu s tebou, a já souhlasil, pokud všechno půjde dobře. -Dobře, děkuji, že jste se mnou mluvil. Vezmu si prášek a půjdu si odpočinout do postele. Dobře se vyspi 🙏🏻 -Dobře, děkuji za radu, do přihlášky uvedu svůj certifikát Vala. -Omlouvám se za tak dlouhou reakci, byl to náročný den, je těžké se přizpůsobit. -Ve tři hodiny odpoledne jsme byli doma. S Kristinou a její rodinou jsme se vydali na hrad Visegrad. -Doufám, že vám mohu říci, že je vše v pořádku. -Doufám, že nenastanou žádné problémy. Děkuji. -Dostaneš se jen na toto místo, pak se cesta uzavře a já přijdu za tebou. -Dnes mi byla nabídnuta každodenní práce. -Je mi líto, ale už jsem souhlasil, že tam budu pracovat. -Děkuji za odpověď a přeji vám, abyste našla dobrého asistenta😊. -Jak se dnes dostanu do svého bytu, abych uklidil? -Až budu u hlavního vchodu, dám vám vědět a vy mi otevřete? -Nebude vám vadit, když přijdu hned po válce? -Skvělé, takže v pondělí mám volno. -A to by pro mě bylo velmi dobré. -Připravím všechny materiály. -Mám super knížku, kde se naučíte česky krok za krokem. -Jediné, o co bych poprosil, je, kdyby to mohlo být 13:00-14:00. Protože já pracuji do 12:30. -Abyste se tam dostali včas. Bude to tak dobré? -Tento týden bych mohl přijít ve středu a příští týden bych mohl přijít ve čtvrtek. -Ano, chápu, co je třeba udělat. -Také bych byl vděčný, kdybyste mi na začátku dali nějaké tipy, abych mohl vše udělat podle vašich představ😊. -Když jsem byla malá, mohla jsem chodit k řece několikrát denně, řeka je velmi blízko mého domu, ale teď chodím jen zřídka, nemám čas a někdy ani nechci, chodím jen se sestrou) 🙂. -😂😋Už teď mě baví s tebou mluvit, a dokonce se mi to líbí 😉 -Můžete také uvažovat o tom, že ledničku neumístíte do blízkosti dveří do chlapeckého pokoje. -Měl jsi mi zavolat, když jsi přišel. -jeden by měl být rozdělen na 4 části -jednu z těchto částic lze používat 2-3 dny, ale každou hodinu ji opláchněte. -Pobočka v místnosti 29 je speciálně určena pro podávání a vyřizování žádostí o podporu. 
-Marino, ale opravdu, pokud potřebuješ pomoct se zahradou, já a můj manžel ti pomůžeme. -Doufám, že tu bude více míst, protože naši příbuzní, strýcova manželka, její rodiče a dcera, se také chystají odjet. -Mají vstupenky na 20. dubna. -Bůh jim žehnej, aby se jim výlet vydařil. -Dnes jsme šťastní - naše město Bucha bylo osvobozeno od okupantů a naše jednotky jsou na cestě. -Nepřítel je vyklizen - hledají Rusy ve sklepích a bytech. -Celé město je zaminováno a začnou odminovací práce. -Hryško už chodí do školky, ale není to tak snadné... Hodně pláče. -Vaše žena se nezlobí, že telefonujete. -snadno se používá a není drahý -To mi vyhovuje na dovolené. -Mám volný čas a chodíme jen na velikonoční bohoslužby. -Nemáme vízum, musíme jít do kongresového centra a zjistit, jak to udělat, protože bylo vydáno v Polsku, kde nám nedali vízum, ale pesel. -Pět raket dopadlo na můj rodný Lvov -Co si myslíte, co je tady? -Izolují ruský úřad: zastavují investice a přerušují dodávky. -Ale není mi líto zničených domů, je mi líto lidí, které okupanti zneužívají. -Povolání řidiče je považováno za romantické, ale zároveň je náročné a velmi zodpovědné. -Vzhledem ke zvláštnostem moderních silnic a skutečnosti, že se neustále zvyšuje počet automobilů, jsou na kvalifikaci řidičů kladeny zvláštní požadavky. -Proto zde při žádosti o zaměstnání hrají zásadní roli pracovní zkušenosti, zatímco na vzdělání nejsou kladeny žádné zvláštní požadavky. -Musíte mít řidičský průkaz a být schopni vykonávat tuto profesi. -Pokud jde o profesní dovednosti v životopise řidiče, zaměstnavatelé věnují pozornost následujícím skutečnostem: schopnost řídit různé druhy dopravních prostředků (řidičský průkaz s otevřenými skupinami), dovednost samostatné údržby a schopnost provádět drobné opravy, žádná nehoda, znalost oblasti a silnic atd. -Fyzické a duševní zdraví je v této profesi nesmírně důležité, proto byste měli být připraveni absolvovat lékařskou prohlídku. -Výzkumníci tvrdí, že bezpečnost řidičské techniky je do značné míry dána spíše emocionálním chováním a inteligencí než skutečnou zdatností. -Při současných rychlostech automobilů musí řidič rychle vnímat a reagovat na dění na silnici, protože vteřinové zpomalení reakce může způsobit nehodu. -Řízení vozidla vyžaduje od řidiče maximální soustředění. -V této profesi jsou klíčové takové charakterové vlastnosti, jako je pozornost, odolnost vůči stresu a zodpovědnost. -Schopnost odpoutat pozornost od starostí nebo každodenních problémů během jízdy vám pomůže najít klid, což znamená, že vaše cesta bude co nejbezpečnější. -Dobrý den, je mi líto, ale nebudu pro vás moci pracovat, protože v nejbližší době odlétám do Kanady. -Směrnice se zavádějí ve formě číslovaných písemných příloh, které jsou součástí tohoto nařízení. -Chtěli jsme tedy problém s kartou vyřešit ještě před Velikonocemi. Nemáme internetové bankovnictví, nevíme, jak ho nastavit. -Máme oslovit Mistra, s nímž žijeme? -Kdyby bylo teplo, bylo by to velmi dobré. -Ahoj Suzano, pomoz mi, přišla mi platba za březen. Je to pro mě a Julii nebo jen pro mě, nemůžu přijít na to, proč je ta částka tak vysoká. -Právě jsem přišla domů a snídám :) -Nabízíme ubytování zdarma. -Bydlíme na vesnici ve velkém rodinném domě s velkou zahradou a uzavřeným dvorem mezi Jihlavou a Havlíčkovým Brodem. -Máme 2 malé děti. -Poskytneme vám pokoj. -Kuchyň, koupelna a další zařízení jsou společné. -Překročím bariéru pro malého 😂🙃😂 -Na tenhle rozvrh si nikdy nevzpomenu) -Doufám, že můj syn vyjde z tohoto mlýnku na maso živý. 
-V důsledku akcí ruských okupantů na Ukrajině byl zraněn další zahraniční novinář. -Je na jednotce intenzivní péče pod dohledem lékařů. -Informovala o tom generální prokurátorka Ukrajiny Iryna Venediktová na své facebookové stránce. -Střepinová zlomenina dvou dolních končetin - diagnóza "ruského světa", kterou ukrajinští lékaři sdělili britskému novináři. -V současné době je novinář na jednotce intenzivní péče pod dohledem lékařů," uvedla Venediktová. -Generální prokurátorka poznamenala, že se hodlá zaměřit zejména na válečné zločiny spáchané na občanech partnerských států Ukrajiny. -Chápu, že se jedná o citlivou otázku, ale doufám, že se představitelé civilizovaného světa rozhodnou uzavřít oblohu raději dříve než později," řekla. -Venediktovová dodala, že britský novinář plnil redakční úkol a nebyl ve vojenském zařízení. -Ještě jednou bych se rád obrátil na naše partnery - občan vaší země byl na Ukrajině na redakční cestě. -Muž se nenacházel ve vojenském objektu, na který se podle ruských představitelů neustále zaměřují. -Byl vážně zraněn v době, kdy nebyl ve vojenském zařízení. -Trestný čin byl samozřejmě zaevidován na ÚZSI a řízení bude řádně prošetřeno. -Zdravotní stav bohužel není naší odpovědností. -Navrhuji jednat," shrnula Venediktová. -Telegramový kanál Národního svazu novinářů Ukrajiny (NUJU) informoval, že zraněným novinářem FOX NEWS je pravděpodobně Benjamin Hall. -Zraněným novinářem FOX NEWS je pravděpodobně Benjamin Hall. -NUJU se snaží ověřit přesné informace o okolnostech vážného zranění britského novináře na Ukrajině," uvedla NUJU. -Jak již dříve informoval server Ukrainian News, ruští okupanti zastřelili v Irpenu amerického novináře. -Další byl zraněn. -Předtím ruští okupanti zajali poblíž Kyjeva britské novináře jako rukojmí. -V Chersonu se lidé znovu shromáždili, než je Rusové stačili rozehnat. -Jak agentuře sdělili účastníci akce, obyvatelům Chersonu se podařilo uspořádat shromáždění před příjezdem ruské armády, která je rozehnala pomocí techniky. -"Podařilo se nám uspořádat shromáždění. Teď tam dorazila ruská vojenská technika," řekl jeden z účastníků. -Dříve se občané tradičně scházeli k protestům na náměstí Svobody, v blízkosti budov RSA a regionální rady. -Toto náměstí i budovu v současnosti kontrolují ozbrojené ruské jednotky, které proti demonstrantům používají zbraně a unášejí lidi. -Jak bylo oznámeno, 3. dubna ozbrojení ruští okupanti použili zbraně proti pokojným demonstrantům v Kachovce. -Jak informovala agentura Ukrinform, obyvatelé Doněcké oblasti pravidelně pořádají pokojné protesty proti ruskému agresorovi. -Ruská armáda používá proti lidem sílu a zbraně, jsou zranění a zadržení. -Ruští útočníci také unášejí obyvatele regionu. -Prezident Volodymyr Zelenskyj udělil Chersonu zvláštní titul Město hrdina. -24. února zahájil ruský prezident Putin rozsáhlou invazi na Ukrajinu. -Ruská vojska ostřelují a ničí klíčové objekty infrastruktury, provádějí masivní útoky na obytné oblasti ukrajinských měst a vesnic za použití dělostřelectva, raketových systémů a balistických raket. -Ahoj) vše proběhlo velmi dobře. -Byl jsem trochu v šoku, že pracuji s lidmi na tak vysoké pozici. -Dnes jsem měl hodinu s paní, která je ředitelkou ekonomického odboru kraje Vysočina. -S mou prací byla velmi spokojená. -Hryško se ve školce choval dobře. -V pondělí musím přinést potvrzení o zaměstnání a platbu v hotovosti za školku. -Cena není za tváře, ale za obličej... Ten název nezní dobře, co jste tím mysleli? -S mým tátou se určitě setkáme později. 🙏🏻 -Určitě vám napíšeme! 
-Denis dnes odjel v dobré náladě. -Domluvil jsem si schůzku s chlapcem z Ukrajiny, kterého jsem včera poznal, poblíž školy, abych mu ji ukázal. -Takže doufám, že se mu daří dobře. -Je to skromný člověk, v poslední době hodně studuje. Uvědomil si, že to ve svém životě opravdu potřebuje. -Hryško stále spí, v noci hodně plakal, zub mu už vylézá. -Telefonuji se všemi svými přáteli ze Lvova, mnoho lidí zemřelo. -V pondělí proběhne schůzka vedení a rozhodneme se, jak a kdy budeme hostovat. -"Říkám jí, že "oprava televize a rádia" se zpozdila kvůli dováženým součástkám a že její přátelé a známí byli varováni, aby toho moc nenamluvili, až budou volat." -"Celý měsíc jsme před tchyní tajili informace o válce. -Zatím se to daří. -Plyamka (kočka - UP) má však na břiše a zadních nohách srst - může to být způsobeno stresem, změnou stravy a nedostatkem vitamínů. -Poslední dva důvody byly odstraněny, ale stres bohužel nemohu ovlivnit. -Na rozdíl od své tchyně není Plyamka hluchá, slyší sirény i výbuchy." -Vzhledem k tomu, že Olga špatně slyší a většinu času tráví ve svém pokoji, žila s ní její snacha měsíc v bytě "inkognito". -93letá Oksana Polyová se 1. dubna poprvé v životě chopila zbraně. -"33 dní jsem s ní žil v bytě jako partyzán. -Válku před ní tajili a to, že jsem byl poblíž, jsem formálně přišel jako obvykle na pár hodin. -Ale když to kamarádka prozradila, dala jsem jí sluchátko - poslouchala 4 hodiny a pak 2 hodiny zpívala." -Rádio jsem měl vždy u postele. -Moje tchyně tráví většinu času ve svém pokoji. -Je nedoslýchavá - to mi umožnilo být pro ni neviditelný. -Kdybych slyšela zvuk jejích "chodítek", schovala bych se ve svém pokoji. -Nejdřív jsem před ní schoval přijímač a televizi. -Pak zavolala všem, kteří s ní komunikují. Bylo to 6-7 lidí. -Požádala mě, abych s ní mluvil o čemkoli jiném než o válce. -Dobrý den, provedli jsme testy a lékař vás bude informovat o výsledku. -Ano, měl jsem špatné srdce. -Chovala jsem k němu upřímné city, ale on se nechoval příliš slušně, a tak jsem ho opustila. -Po chvíli se vrátil a omluvil se mi. -Odmítl jsem ho. -Po nějaké době mi zavolal, že podstoupil operaci srdce. -Požádala jsem ho, aby se mnou jen mluvil, bylo mi ho líto. -Téměř šest měsíců jsem ho psychicky podporoval. -Během této doby jsme spolu hodně komunikovali. -Zároveň jsem začal mít problémy v práci. -Jeho podpora mi tehdy také pomohla. -Jedná se o vzájemnou pomoc. -A já si pomyslel, že se na to možná opravdu musím dívat jinak. -Požádal mě o ruku a dva dny nato zemřel. -O našich osudech tedy rozhodl sám Bůh. -Chystala jsem se sama k matce a moje sestra s kamarádkou se chystaly odjet a Anastázie je požádala, aby šla s nimi. -Jsem vděčný Bohu, že tuto válku nevidí. -Přijdou všichni rodiče? -Aby se nevyděsila, že nikdo nepřišel. -Možná byste nám mohli napsat platební algoritmus a my bychom za něj zaplatili sami. -Nebo spíš: vlak je, ale nejsou v něm sedadla. -Zdržujeme vás příliš dlouho? -Můžeme u vás zůstat, než nám paní Margarita najde bydlení? -A kam jsme se podívali teď? -Peníze už nepotřebuji, půjčil jsem si je od kamaráda na dlouhou dobu 😊. -Omlouvám se, asi jsem to špatně pochopil. -V té době jsem češtině moc nerozuměl. -Musel jsem si něco vymyslet. -Údržbář v červené kombinéze táhne raketu ke koši. Žádné změny na charkovské frontě -Kolik máte času? V kolik hodin potřebujete odjet? -To nemůžu říct mámě :) -A opravdu se teď cítím trochu špatně, že jsem si nemohla pomoct, abych se takhle necítila, než jsme opustili byt.... Jako dnes večer..... 
-Dá-li Bůh, Jura usne 🙏 možná půjdu rovnou do postele a vyspím se. -V každém případě vám napíšu. -Stačí, že mluvíš, ale já ti chci rozumět. -nedělejte si s tím starosti -Jedna píseň pro tebe v noci -Dobře, děkuji, objednám tedy dnes na celý týden, musím při objednávce napsat příjmení dítěte nebo ne? -Potřebuji noční stolek. Můžu zaplatit nájem. -Dobře, protože jsem se začínal obávat, že jsem něco špatně pochopil a jsem na špatném místě. -Zapomněl jsem ti napsat, omlouvám se. -Včera šlo všechno velmi dobře. -Hovořili jsme o dětech se speciálními potřebami a o systému jejich vzdělávání na Ukrajině. -Když mě paní Šachová představila jako zaměstnankyni Fpointu, první otázka, kterou mi položila, byla, jak našli psa, jak našli mopse. -Všichni sledují stránku na Facebooku. -Požádali mě, abych vás pozdravoval a vzkázal jim, že je velmi zajímá, co si o vás přečtou. -Mluvili jsme také o Romech a jejich výsadách na Ukrajině. -A domluvili jsme se, že pokud přijde dítě a rodiče vůbec nerozumí česky, mohu jim po práci telefonicky pomoci. -Ano, rád si poslechnu jazz. -Možná bude šťastná a její nálada se zlepší. -Měla velkou radost ze srdíček a věcí, které jí děti dávaly, když přišla do školy. -Dobrý den, mám velký zájem o tuto práci, ale nemluvíme česky. -Ano, přestala jsem se budit uprostřed noci. -Ráno se prostě budím velmi brzy bez budíku. -Zatím spím 4-5 hodin, ale už spím. -Marina mě dnes nemohla vzbudit, když jsme přijeli)) Ale chce to čas, musí to projít. -V úterý mi Pablo řekne, jestli je pro Nazara v pivovaru práce a kdy si má přijít domluvit schůzku. -Pokud Nazar pracuje od 5:30, může jet jakýmkoli vlakem: -Kolik stojí pronájem bytu se dvěma ložnicemi v Praze? -Všichni byli nemocní, jen Lydie nebyla nemocná, měli omikron, nový typ kovidu. -Diano, cokoli chceš. Možná se bude stydět, nevím. -Náš Panas chce zůstat doma, říká, že ho unavujeme a chce být sám, když jdeme ven bez něj. -Nic. Počkáme do zítřka. Nevěděl jsem, že i velké obchody budou zavřené. -Poštou mi ještě nepřišla karta, ačkoli už uplynul více než týden. Můžete mi pomoci? -Silvie, Máša ještě nedostala peníze od Kletzany, můžu jim zavolat? -Můžete mi říct, jak se dostanu do obchodu? -Nemůžeš jít v pondělí někam do centra? -Mohu platit 12000 měsíčně, nejlépe na splátky -Vytisknu vám nový lístek a můžete jít do haly. -Nic však není nutné -Protože se bojím, aby všechno dobře dopadlo! -U těchto exkurzí jsme o tom ani nemluvili! -Chápu, jak je to pro vás důležité! -A to ani nemluvím o sobě (pro mě je to jen vrchol hromady)! -Zelné závitky vařím takto: v hrnci zředím rajčatový protlak 0,5 litru vody a 3-4 lžíce rajčatového protlaku - přivedu k varu. -Zelné závitky vložím do hrnce a přeliji je touto omáčkou. -Tělo vložte na 1,5 hodiny do trouby. -Dnes bylo chladněji a pršelo. Zítra by mělo být tepleji. -CO MÁTE DĚLAT, KDYŽ JSTE PRÁVĚ PŘIJELI DO ČESKÉ REPUBLIKY SE SVÝMI DĚTMI? -KAM ZAPSAT SVÉ DÍTĚ? -Předškolní vzdělávání probíhá v mateřské škole a je určeno dětem obvykle od 3 do 6 let, poslední rok před nástupem do základní školy je v České republice povinný. -Pro předškolní vzdělávání se nemusíte rozhodnout hned, ale pokud se rozhodnete své dítě do mateřské školy zapsat, je to vaše právo a můžete tak učinit kdykoli. -Může se však stát, že konkrétní mateřská škola nebude v danou chvíli (tj. v průběhu školního roku) k dispozici a vy budete odmítnuti. -V takovém případě bude zřizovatel školy nebo krajský úřad situaci řešit a přidělí vám jinou školu s místem pro vaše dítě. -Je skvělé, že Yulia bude moci chodit do práce. 
-Včera večer jsem byl u nich v bytě. -Je velmi útulný a pohodlný. -Musím se podívat na stránky a pokusit se zjistit, kde to bolí. -Těšíme se na vaši návštěvu! -Nebojte se, nepřišel jsem o nic žádat. -Přesněji řečeno, dnes žádám pouze o pozornost. -Mám toho dnes hodně na srdci, a proto vás prosím o trpělivost. -Funguje vaše představivost dobře? -Žijete normální život, chodíte do práce, plánujete nákupy a dovolenou. -Máte sny. -Plánuješ si koupit ty šaty, které jsi včera viděla v obchoďáku. -Zítra večer po práci. -Váš mozek to nedokáže zpracovat. -Lpí na staré realitě. -Zavoláte na pracoviště, abyste zjistili, zda dnes musíte jít do práce, nebo ne. -Rozhodnete se nechat děti doma. -Dokud se něco nevyjasní. -Naděje existuje... ale nebude trvat dlouho. -Nyní .... bude souhlasit... něco se stane. -Mozek odmítá přijmout... -Nemám nikoho a nikam jít. -Mám v kapse 50 dolarů. Zbytek jsme utratili doma. -Mnoho našich manažerů na pracovištích nevyplácelo mzdy. -Kam mám jít? Kdo mě potřebuje? Jak mám zaopatřit své děti? Kde mám bydlet? -Nemůžete si vzít mnoho věcí. -Z domova jste si nic nevzali... museli jste urazit 1000 kilometrů. -Nevíte, jestli je zítra budete mít čím nakrmit, nebo ne. -Každý den od rána do večera taháte děti po celé Praze. -A co bude dál? To není známo. Jak dlouho? To nevíme. Děti si žádají sladkosti. Ale to nejde. Je to pro vás příliš drahé. -Máme na výběr - zůstat tam a ohrozit děti, nebo se pokusit utéct sem. -Jestli ti to nevadí, vezmu si tyhle kalhoty do práce. Sedí mi tak akorát. -Hlavní je si vzájemně porozumět -Mohu zítra vyprat prádlo v bytě a pověsit ho sušit do garáže? -Jsme 4, 2 dospělí a 2 děti, a je to velmi malý byt, potřebujeme ještě 1 pokoj s postelí. -◻️ V Borodiance u Kyjeva byla při demolici dvou vícepodlažních obytných domů nalezena těla 7 civilistů. -◻️ Obránci Mariupolu tvrdí, že nad městem byla z ruského dronu rozprašována neznámá jedovatá látka. -Tři osoby byly zraněny. -◻️ Přibližně 1700 ukrajinských obránců a civilistů je drženo v ruských věznicích. -Mezi nimi je 500 žen. -◻️ Německo vyčlení 1 milion eur na podporu Mezinárodního trestního soudu, který vyšetřuje válečné zločiny spáchané ruskou armádou na Ukrajině. -◻️ Kanada uvalila sankce na 33 ruských obranných společností. -◻️ Za posledních 24 hodin byl v bojích na východě Ukrajiny zničen jeden nepřátelský tank, tři obrněné transportéry, tři dělostřelecké systémy, 24 vozidel, jeden vrtulník a tři drony. -Na Ukrajině se šťáva přestává vyrábět, když jsou listy malé) -Možná půjdu večer naživo, pokud mě pustí na noc domů!) Uvidíme se 😇. -Rozumím. Děkuji, ale nepotřebujeme 1+1 ani ukrajinskou televizi obecně. -Všechny zprávy vidím na internetu. -A 1+1 je proprezidentský, propagandistický kanál, vůbec mě nezajímá. -Nabízím stříhání mužů, žen, dětí s návštěvou u vás, zájemci pište do osobních zpráv, cena: -Nejspíš v 16:20, koncert začíná v šest a pak mám zase volno. -Saša dnes nebyla ve škole, napsala jsem její učitelce. -Dobře. Pokud vše půjde dobře, budu souhlasit -Dobrý den. Jsem na Ukrajině. Hledám možné varianty dočasného pobytu v rodině. Mám s sebou 2 děti. -Boryspilští superpolicisté v akci 👮‍♂️ -Dnes hlídka policie Boryspil opět zachránila zvíře ze zamčeného bytu. -Kocour byl 15 dní bez jídla a vody, ale policie a dobrovolníci ho zachránili a nyní je v bezpečí. -Každý život je důležitý! -Ukrajina je zemí superhrdinů 🇺🇦 -To bude v pořádku. Děkuji. Pokud ultrazvuk něco ukáže, uvidíme. -Dobře, tak to zkusíme :) -Oleksiy V. 
Vacnyuk -Datum narození: 10.10.1974 -Město: Charkov -Mobilní telefon: +38 (000) 000 00 00 -E-mail: 0000@gmail.com -Cíl: Obsadit volné místo řidiče. -Vzdělání: -Září 1996 - červen 2000, Dnipro State Agrarian and Economic University, Fakulta inženýrství a technologie, obor "inženýr-technolog", bakalářský titul (prezenční). -září 2000 - červen 2001, Dnipro State Agrarian and Economic University, Fakulta inženýrství a technologie, obor "inženýr-technolog", diplomovaný specialista (prezenční forma). -Další vzdělání: -Červen - září 2006 - seminář "Cesty Evropy", Charkov. -Leden - duben 2009 - kurzy angličtiny a němčiny, WeCanTranslate, Charkov. -listopad 2010 - Kurzy zdokonalování v řízení motorových vozidel, Charkov. -Pracovní zkušenosti: -Řidič speditéra -červen 2001 - září 2002 - Logist West LLC, Charkov. -Funkční odpovědnosti: -- dodávání výrobků do obchodů; -- rozvoz nákladu (potravin) ve městě a regionu podle rozvozového plánu uvedeného v traťovém listu; -- práce na firemních vozidlech od 1,5 do 20 tun; -- dodržování podmínek skladování výrobků během dodávky; -- práce s podklady, fakturami a hotovostí; -- pomoc při nakládání a vykládání; -- převzetí zboží ze skladu v souladu s průvodními doklady; -- dodržování pravidel silničního provozu. -- dohled nad technickým stavem vozidla, drobné opravy. -Řidič kurýra -září 2002 - červen 2014 - Markada LLC, Charkov. -Funkční odpovědnosti: -- doručování korespondence a dokumentů od klientů a klientům podle pokynů vedení organizace; -- Zajištění neporušenosti dokumentů během přepravy (odpovědnost); -- plnění jednorázových úředních úkolů a zadání; -- přeprava zaměstnanců společnosti do místa jejich bydliště. -Osobní řidič -červen 2014 - duben 2017 - soukromý řidič, Charkov. -Funkční odpovědnosti: -- doručení výkonného pracovníka na pracoviště a domů; -- setkání a přivítání na letištích a nádražích; -- kurýrní služby; -- Vyřizování osobních záležitostí; -- doprovod dítěte do školy, sportovního oddílu, hudební školy; -- finanční zpráva; -- doprovázet rodinu na výletech po městě; -- údržba a opravy; -- údržba a servis vozidla. -Odborné dovednosti: -- 16 let praxe v řízení; -- různé styly jízdy; -- znalost pravidel silničního provozu; -- znalost města Charkova a regionu; -- zkušenosti s obsluhou strojů různých tříd a velikostí; -- Mám platnou zdravotní knížku; -- absence dopravních nehod; -- Jazyky: ukrajinština - mateřský jazyk; ruština - plynně; angličtina - středně pokročilá úroveň, polština - středně pokročilá úroveň. -Osobní vlastnosti: -Pozornost, slušnost, odpovědnost, odolnost vůči stresu, spolehlivost. -Další informace: -Rodinný stav: vdaná/ženatý. -Možnost služebních cest: ano. -Vlastní auto: ano. -Záliby: literatura, cizí jazyky. -Asi jsem se špatně zeptal. -Neměla bych ji brát s sebou, nebo jsou hlídací služby určeny pro mnohem mladší děti? -Proč jste ho zakázali? -Levnější by bylo lepší..... jsme nuceni se přestěhovat, protože si nemůžeme dovolit evropské ceny... -O všem rozhodoval -Ukrajinská obranná rozvědka zveřejnila seznam ruských vojáků, kteří se podíleli na válečných zločinech v Buči v Kyjevské oblasti. -Podle agentury Ukrinform to na Facebooku uvedla ukrajinská obranná rozvědka. -"Každý Ukrajinec by měl znát jejich jména! -Obranná rozvědka Ukrajiny získala seznam vojáků 64. samostatné motostřelecké brigády, kteří se přímo podíleli na páchání válečných zločinů proti ukrajinskému lidu v Buči," uvádí se v prohlášení. 
-Obranná rozvědka Ukrajiny konstatuje, že všichni váleční zločinci budou postaveni před soud za zločiny proti civilnímu obyvatelstvu Ukrajiny a budou pohnáni k odpovědnosti. -Seznam si můžete prohlédnout zde. -Jak informovala agentura Ukrinform, Irpin, Bucha, Gostomel a celá Kyjevská oblast byly osvobozeny od ruských útočníků. -V osvobozených městech a vesnicích bylo zaznamenáno masové zabíjení civilistů ruskou armádou. -Starosta města Bucha Anatolij Fedoruk 1. dubna oznámil, že v masových hrobech bylo pohřbeno 280 lidí. -Generální prokurátorka Iryna Venediktová uvedla, že 3. dubna bylo z území Kyjevské oblasti osvobozeného od ruských okupantů odvezeno 410 těl zavražděných civilistů. -24. února oznámil ruský prezident Vladimir Putin zahájení rozsáhlé invaze na Ukrajinu. -Vojska ruských okupantů ostřelují a ničí klíčové objekty infrastruktury, masivně ostřelují obytné oblasti ukrajinských měst a vesnic za použití dělostřelectva, raketových systémů, balistických raket a leteckých bomb. -Tento dům na Praze 8 se mi zatím jeví jako nejlepší varianta. -Ale pokud se mnou nechceš mluvit, chybíš mi. -Vzhledem k proměně estetiky ukrajinské literatury se staly nezbytnými nové pojmy, které by popsaly její nový kvalitativní stav. -Právě analýza metaforického prostoru ukrajinské literatury, studium stavu ukrajinské básnické metafory v současné literatuře umožňuje analyzovat postmoderní kontext jako kontext, v němž metafora hraje roli axiologického kritéria. -K utažení ponoru na desce potřebuji malé kleště. -Děkuji, dnes se cítím mnohem lépe почуваю🙏🏻 -Chodila jsem nakupovat s matkou a sestrou. -Přišli jsme, vařili a povídali si. -Vyšla jsem na dvůr, abych se nadýchala čerstvého vzduchu, trochu jsem na dvoře uklidila, protože člověk nemůže být pořád doma 🤪. -Uklidili jsme celý pokoj a přesunuli se do druhého. Pokoj je volný. Moc děkujeme🙏🙏🙏🙏. -Také jsem vstala v 9 hodin a právě připravuji snídani. -Můžeme se sejít a promluvit si osobně? -Nemůže jim říct všechno, co chce, a nerozumí všemu, co říkají( Ale včera po veletrhu přišla ve skvělé náladě. -Dnes jsem také šla do školy. -Dnes jsem viděla Kristininu mámu, ale ani mě nepozdravila ( -Já jsem byla manikérka 😥 a kamarádka manažerka. -Ale my umíme všechno, jsme Ukrajinci, a co neumíme, to se rychle naučíme))) -Pokud vám to nevadí, rád bych se zastavil. -Až se vrátíte domů, napište mi, potřebovala bych komunikovat s Danielem. -Dnes jsem prostě musel vrátit! -Také jsem vám zapomněl říct, že na zítra bude svačina. -Jedna matka řekla, že dětem upeče koblihy s marmeládou. -Na e-mailovou adresu jsem zaslal následující dokumenty -Děkuji... našli jsme nějaké věci, ale nic z vybavení... -Neměl jsem obavy, všechno je v pořádku. Mám ráda sladkosti, ale obejdu se bez nich. -Myslela jsem, že ses zamiloval do nějaké Ukrajinky a nemáš čas komunikovat 😇. -Привіт👋🏻 Jsem nemocný, takže by bylo lepší přeložit naše setkání na příští víkend, omlouvám se. -Vařit umím, hlavní je mít recept, ale ráda peču pečivo. -Na Ukrajině jsou tradiční boršč, varenyky a další tradiční jídla. -Nemám žádné oblíbené jídlo, rád zkouším různé pokrmy z různých zemí. -Jaké jsou vaše stravovací preference? Vaříte rádi? -Mohu pracovat s pacienty upoutanými na lůžko a s postiženými dětmi. -Ale nezapomeňte uvést, že mluvím pouze rusky a ukrajinsky. -Asi jsem hodně naivní nebo hodně zamilovaná, ale s těmi penězi něco udělám. -Nějak to uděláme. -Ale jak se to bude platit, prosím, žádné půjčky. -Napište mi přímo a já vám pomohu. -Jsme tvůj partner, nebo nejsem tvůj partner. -Už nevím. 
-Můžeme se dohodnout na nejlepším způsobu, jak to udělat? -Možná, že hostitelka nechá svůj telefon pro komunikaci. -Chtěl jsem za tebou přijít, ale měl jsem na práci jiné věci. -Proč jste si mysleli, že tuto zkušenost nepotřebujete? -Každá situace má svůj důvod. -Je lepší, když se člověk změní, zejména po duchovní stránce. -Jmenuji se Saša a přijel jsem z Luhanska do Prahy. -Jsem certifikovaná masérka a zdravotní sestra s 27 lety praxe v oboru neurologie. -Nabízím léčebné masáže, rehabilitační tělocvičnu pro děti i dospělé. -Mluvím ukrajinsky, rusky a česky a intenzivně studuji. -Kontakt: ------ Chtěla bych také dopis, aby vám Češi mohli psát přes překladatele, SMS tu nefungují, nemají ukrajinskou klávesnici, a když zavolají, nedomluvíte se. -Opravdu nechcete tuto cenu? -Opravdu pomáhá lidem zjistit, jakou mají cenu a co mohou nabídnout. -Pokud nepotřebujete ceník, nebudu vám ho dodávat. -Řekněte mi, jak to chcete, a já to udělám. -Ahoj. Martine, můžeš zkontrolovat, zda lednice funguje? Prosím. -Máme tu také dobré lékaře... ahoj -Když jsem pracoval na státním zastupitelství, myslel jsem si, že si buduji kariéru, a věnoval jsem tomu spoustu času a úsilí. -Měl jsem náročnou psychickou, zodpovědnou a mentální práci, ve které jsem byl naprosto zklamaný. -Je škoda ztraceného času, ale je to životní zkušenost. -V osobním životě jsem společenský, ale jsem monogamista a manželství beru vážně. -Měla jsem si vzít jednoho muže, ale ten zemřel na srdeční potíže. -Věřím, že k vytvoření rodiny je třeba, aby se lidé měli rádi, respektovali se a důvěřovali si. -Zatím jsem nikoho takového nepotkal. -Děkuji mnohokrát. Děláte toho pro naši rodinu tolik! Jsme vám všichni velmi vděční -Můžeš zavolat do školy, Kira nebere telefon, nevím, jestli se dostala do školy. -Lístek mě stál 2000 hřiven od průvodčího. -Hledám vše pro dům a pro děti. -Díky lidem, kteří se mě ujali, bydlím v kapli, vždycky se mě ptají, jestli něco nepotřebuju, a já se stydím říct, že se to snažím najít sama. -Jsme velmi šťastní a vděční, že naše Nika a vnoučata jsou pod vaší ochranou! -Děkujeme za pozvání! -Doufáme také, že vás po válce budeme moci pozvat na Ukrajinu! -Je 17. dubna - uklízím 2 byty :) -Už je tu postýlka a vanička pro miminko a poprvé jsem našla věci. -Nemohu zavolat lékaře, protože jsem ještě neprošel vyšetřením potravinářského průkazu. -Ale nemůžu mu zavolat, protože mi došly peníze na účtu. -Nemohu si dobít telefon, protože nerozumím českému internetovému bankovnictví a nemohu si dobít ukrajinskou bankovní kartu. -V aplikaci Vodafone nemohu změnit tarif, abych mohl normálně volat a využívat internet, ani dobít číslo. -Už začínám šílet -Nyní můžete jíst brambory ještě horké. -Můžu to dělat ve dne i v noci, pokud mám práci a peníze. -Zítra, až půjdeme do centra, se podívám, kam jít, kde stát a kde sedět. -Lidé v Rusku nemají svobodu projevu, myslím, že ano. -Škola je nepovinná, můžeš plavat nebo ne, je jim to jedno. -Jsem ráda, že vám boršč chutná, a jsem ráda, že ho vyzkouší i vaše maminka. -Nosím s sebou všechno, protože nevím, co bude pro někoho nudné. -Dokážu si zapamatovat jedno slovo po druhém, ale nedokážu spojit věty. -Jmenuji se Olga, před válkou jsem pracovala jako letuška u společnosti SkyUp, moje matka je švadlena průmyslových výrobků a můj syn Macharčík chodil do mateřské školy. -Žili jsme si báječně, ale ani nás nenapadlo, že budeme muset utéct, protože jsme neměli sílu schovat se ve sklepě. -Měl jsem štěstí a našel jsem si práci v Praze. 
-Tak doufáme, že nás někdo poprvé přijme) Můžeme si zaplatit i pokoj nebo byt, ale jen pokud si to můžeme dovolit. -Zaručujeme čistotu a pořádek. -Nemáme žádné špatné návyky. -Budeme vděční za takovou pomoc v tak těžké době. -Myslel jsem, že se s Milanem nemáte rádi. -Na tuto adresu jsme se šli zeptat na něco pro školu, možná jsme se dostali na špatné místo. -Muži mají také mnoho nuancí -Naše děti studují online, já jsem povoláním prodavačka a kuchařka, ale mohu pracovat i na poli a jako uklízečka, tedy jako pomocná dělnice, Mariana je specialistka na prodlužování řas, může pracovat doma, a druhá Mariana nemá žádnou specializaci, tedy pomocná dělnice. -Vedoucí prezidentské kanceláře Andrij Jermak prohlásil, že Rusko zahajuje "falešnou operaci" proti zbraním, které nám předávají naši spojenci. -Jermakova přímá řeč: "Chápou, že válku prohrávají, vidí svou zaostalost a snaží se 'srazit' dodávky zbraní jakýmikoliv prostředky. -Jedním z nejnovějších podvrhů je například údajné zničení systémů protivzdušné obrany S-300 předaných Slovenskem. -Tuto informaci již popřel premiér Eduard Heger. -Co bude dál? Známe scénáře Rusů. Jeden z nich uvedu. -Mohou vypustit falešné zprávy o tom, že se ukrajinští vojáci údajně vzdávají se zbraněmi od spojenců a hromadně je převádějí do ruské armády. -Před takovými padělky vás chci hned varovat. -Protože zbraně v rukou ukrajinských ozbrojených sil pouze posílají nepřítele na druhou stranu. -Včera jsme ho s kamarádkou upekly, zkuste ho, jestli se vám líbí. -Dobré odpoledne, daří se nám dobře) Kolja pracuje, včera jsme vyplňovali pozvánky a já jsem se učila se staršími před zkouškou na gymnázium. -Nejtěžší bylo zatím řešení geometrických úloh. -V sobotu nám pomohl Vladimír. -Vika studuje vysoké školy a středoškolské programy. -Můžeme platit nájem a wf? -Jsme velmi vděční za ubytování -Zuzano, doktor nám dal tyto papíry. Pomoz nám napsat, co potřebujeme, prosím.... -Oleh mi také našel práci na částečný úvazek - úklid dvou bytů, které se pronajímají turistům. -Apartmány se nacházejí v centru Prahy. -Dnes mi končí měsíční tarifní balíček :) -Můžeme to dnes udělat po telefonu? -Do Prahy není vhodné jezdit, bude to velmi drahé a časově nevýhodné, cesta tam zabere hodně času. -Mohu požádat o pomoc s placením školních obědů své dcery? -Peníze vám dám, jakmile je banka schválí. -Asi 30 000 UAH za dluhy, které máte, plus 30 000 UAH na údržbu. -Stydím se, že jsem nezaměstnaný -Omlouvám se, že vás obtěžuji. -Byl jsem naštvaný, že jsme včera s Kolou špatně začali test. -Požádám buď Hryhorije Denyšenka, nebo možná Jonaszka)), aby mi alespoň trochu pomohli s matematickými pojmy. -Možná bych jim mohl být také k něčemu užitečný? -Pokud budu mít příležitost, chci si o víkendu oprášit matematické pojmy. -Geometrie je obecně obtížná, ale musíme s Kolou spolupracovat, aby nedošlo k chybám. -Děkujeme za testovací úkoly -Takže když nikdo nemůže přijít, nebude to problém? -Nešel jsem tam. Ano, upekla jsem ho sama. To umím! Hlavní je, že jsi mě naučil, jak zapnout troubu. -Ano, hledám práci účetního, ale během učení se jazyka mohu dělat nějakou administrativní práci. -Chápu, že nebudu okamžitě přijat jako účetní. -Až ji budete mít, napište mi, co bude dál... -Děkujeme, zatím ne, udělali jste pro nás hodně, jsme vám velmi vděční. -Při odchodu se vždy rozlučte -Nabíráme brigádníky na sezónu sklizně chřestu v okolí Mělníka. Ubytování zajištěno, platba dobrá. -Nemůžeš mluvit, můžeš jen psát. -Je to velmi těžké těsto a poleva je nahoře. -Nejdříve však proběhne měsíční školení, které začne 13. 
dubna. -Nabízené profese. Jsou možné bez jazykových znalostí? -Dobrý večer, odjíždím ve 20:00, bude vám to vyhovovat? -Zajímám se o architekturu a barvy -Od pondělí 4. 4. 2022 budou moci uprchlíci před válkou na Ukrajině žádat o humanitární pomoc v novém sídle Úřadu práce v Pražské tržnici. -Pobočka v místnosti 29 je speciálně určena pro podávání a vyřizování žádostí o dávky, takže je na místě přítomen tlumočník. -Myslel jsem, že to bude za týden, ale situace se vyhrocuje, sirény jsou stále zapnuté. -ale obecně je vše v pořádku -Mimochodem, mohu v těchto dnech dělat testy? -Můžeme se setkávat často -A jsem velmi vděčný za vaši pohostinnost. -Jestli chceš, můžeš být po práci se mnou. -Ukrajinský premiér Denys Šmyhal vystoupil na mimořádném zasedání Parlamentního shromáždění Rady Evropy a vyzval k okamžitému vyloučení Ruska z Rady Evropy. -Podle agentury Ukrinform o tom informoval vládní portál s odkazem na videoposelství premiéra k poslancům 46 demokratických evropských zemí. -"Všichni víme, že trestu za genocidu a terorismus se nelze vyhnout. -A my musíme reagovat ještě tvrději. -Žádáme rozhodnutí o okamžitém vyloučení Ruska z Rady Evropy! -Ti, kdo bezpodmínečně podporují nevyprovokovanou a neoprávněnou agresi, nemají místo v jednotné evropské rodině, kde je lidský život nejvyšší hodnotou," řekl Šmyhal. -Zdůraznil, že Rusko tvrdí, že žádná válka není, ale válka se vede právě teď, a označil ji za "speciální vojenskou operaci". -Máme potvrzené informace o zničení více než 12 000 ruských vojáků, 389 tanků, 1249 obrněných vozidel, 77 letadel a 90 vrtulníků," řekl Šmyhal. -Premiér rovněž požádal o zastavení proudu lží a nenávisti šířených ruskými médii a ruskými podvrhy, které se snaží prosadit v myslích evropské společnosti. -Říkám. -"Rusko a osobně prezident Putin rozpoutali v centru Evropy 21. století válku v plném rozsahu, která může přerůst ve třetí světovou válku," řekl Šmyhal. -Vyzval také evropské politiky, aby uzavřeli nebe nad Ukrajinou a spojili své síly k zastavení agrese, zabíjení civilistů a zajištění bezpečnosti humanitárních koridorů. -"Musíme zastavit agresi. -Dokud nedošlo k jaderné katastrofě. -Dokud celá Evropa neshořela. -Proto požadujeme: zavřete nebe nad Ukrajinou! -Uzavřete oblohu v zájmu životů lidí na území Ukrajiny, uzavřete oblohu v zájmu evropské a světové bezpečnosti," uzavřel premiér. -Předseda PACE Tini Cox zase potvrdil solidaritu Shromáždění a celého mezinárodního společenství s Ukrajinou v těchto těžkých časech války s Ruskou federací. -Zdůraznil, že řízení o vyloučení členského státu Rady Evropy z organizace bylo zahájeno poprvé za 73 let její existence. -Jak bylo oznámeno, mimořádné zasedání PACE, které se koná v těchto dnech ve Štrasburku, bylo svoláno k projednání důsledků agrese Ruské federace proti Ukrajině a k rozhodnutí o budoucí účasti agresora v této mezinárodní organizaci. -Očekává se, že debata vyústí v přijetí oficiálního závěru poslanců s doporučeními pro další kroky v souvislosti s pozastavením práva Ruska na zastoupení v orgánech Rady Evropy, zejména v PACE a Výboru ministrů. -Shromáždil jsem několik prací do svého portfolia. -Po prázdninách bude zaveden internet. -Rád bych se stal vaším záložním designérem. -3. 3. Jaké aktivity by vám nebo vašim přátelům pomohly lépe se přizpůsobit životu v České republice? -Lena věci, které nám neseděly velikostí, mám dát tobě, nebo je mohu dát Kátě. -Odjíždíte z města na delší dobu? 
-Koupil jsem si číslo od vodaphonu -Ne tady, kde jsme byli večer s panem Petrem, prohlédli jsme si tento dům, velmi nám vyhovuje. -Nerozumím vašim myšlenkám? -Ale o víkendu se ráda procházím ve velkém. -Je tam všeho dostatek -Něco od tebe přišlo, nevidím, co to je. -Mohu pomoci s úklidem -Chtěl bych připojit internet ke svému českému číslu -Moje babička nebude moci, hodně ji bolí nohy a dolní část zad(( -Chce mi teď přitlačit na všechna bolavá místa a ví, která to jsou. -Už to na mě nemá absolutně žádný vliv. -Nechtěla jsem o něm a jeho včerejším dramatu mluvit před Míšou. -Nevím, jaké to je přispívat celé rodině. -Já a moje šestnáctiletá dcera žádáme o dočasný azyl v České republice, kde má dcera plánuje studovat na vysoké škole. -Narazil jsem na něco ve vodafonu a připojil dvě zbytečné akce, za které mi byly naúčtovány a nemůžu je vypnout 😭. -a také vybral peníze z karty, na které nebyly žádné peníze. -a nechápu, proč byly peníze staženy. -Bůh se nikdy neopozdí, všechno má svůj čas a místo. -Je mi to velmi nepříjemné a chci, abyste to věděli. -jen na mě nezapomeňte, prosím 🙏🏻 -Zítra se jedeme podívat do Prahy na byt. -Věro, můžeme teď umýt nádobí v myčce? -Čeká nás lepší budoucnost -Musím to všechno vrátit zpátky -Myslím, že by sis měl udělat kopie našich pasů, jsou v nich všechny informace. -Příjmení jsem si nezměnil, mám ho stejné od narození :). -Napište své učitelce a požádejte ji, aby vám to vyfotila, abyste měli památku). -Nechci tě rušit, jen odpočívej. -Doklady se na letišti vyřizují od 7 do 19 hodin, v jinou dobu můžete jen čekat. -Volodymyr se vrátil ve čtyři hodiny ráno. -Tak jsme je přijali. -Spala jsem s Vikou v kuchyni a oni byli uloženi v ložnici. -Dokončím svou práci a pojedu s nimi na letiště. -Můžete mi poslat správnou e-mailovou adresu? -Rád bych vám zaslal pozvánku na kurz českého jazyka, do kterého jste se přihlásil. -Je to váš domov, můžete si pozvat, koho chcete.) -Hano, dobré odpoledne. -Udělal jsem, jak jste napsal, a telefon se začal nabíjet! -To je velmi dobrá zpráva! -Děkujeme! -Situace je stejná, stále jsme tady. -Po městě jsem hodně cestoval dopravními prostředky, autem. -Ale nejsem moc dobrý ve jménech. -To je v zásadě normální, protože moje myšlenky byly úplně jiné. -Teď jsem psychicky klidnější, než když jsme sem přijeli poprvé. -Nerozumím tomu, budeme s Valjou ráno sami? -Nebo jsi měl na mysli, že uklízíme sami a ty jen dohlížíš? -Můžeme si s Valjou vyměnit směny? Valja dopoledne a já odpoledne? -Rád bych, abyste si trochu odpočinul a přemýšlel o sobě. -Nezapomeňte se dnes namazat krémem na obličej) -Reklamu jsem si přečetl na Facebooku, takže Google zřejmě něco dělá po svém написав🏵️. , myslím, že telefonní číslo stačí. -Půjdeme tam dnes, možná najdeme něco, co budeme potřebovat. -Jak je mám kontaktovat, aby mi otevřeli dveře? -Připravili jsme pro vás knedlíky, které je třeba vařit 7 minut. -Fakturu vám připomenu později. -A modlitba matky je tou nejveselejší. -a tito dva jsou moji bratranci -První den nemoci můžete zajistit stravu v čistém nádobí (jídlo). -V ostatních dnech nemá Státní služba Ukrajiny pro bezpečnost potravin a ochranu spotřebitele nárok na státem dotované potraviny. -Po odnesení jídla domů již personál kuchyně neodpovídá za jeho kvalitu a zdravotní nezávadnost. -Prosím, řekněte mi, jak připravit byt pro 5 hostů na zítřek. -Předpokládám, že páté místo je v místnosti s televizí? -Rozložení a zakrytí pohovky? -Mluvím česky na základní úrovni, ale učím se každý den, takže si myslím, že se bez problémů naučím vše, co potřebuji k práci. 
-Chtěla bych se zeptat, zda nevíte, kde se dá koupit mikrovlnná trouba. -V pokoji jsme 3, spím se 2 dětmi na 1 malé posteli, opravdu bychom potřebovali vaši pomoc. -Mám dvě děti, 4 a 3 roky, pokud máte nějaké hračky, bylo by to hezké, a pokud máte navíc hrnce, jeden by stačil. -Paní mi napsala, že mám přijet do Vistavishche a vybrat si, co potřebujeme, mám malé děti a oblast moc neznám. -Pošlu vám textovou zprávu, kterou mi napsala jedna paní. -Marie, chtěla bych se zeptat, jestli byste mi mohla říct něco o školce, kdy probíhá zápis a kde je volné místo. -Má duše je naplněna tichou radostí. Dívám se na Josie a vidím její štěstí, jistotu jejích kroků. -jak se dnes cítíte 😅 po oslavě? -Dnes se ve školce objevila nová dívka z Ukrajiny. -Máte nějaký, který by byl vhodný? -Jak dlouho kurzy trvají a jakou úroveň jazyka mohu po jejich absolvování očekávat? -Srbský obranný rozpočet již několik let po sobě exponenciálně roste. -Za posledních šest let investovalo Srbsko do modernizace své armády, obnovy vojenského vybavení a nákupu moderních systémů celkem více než dvě miliardy eur, čímž se v žebříčku nejsilnějších vojenských mocností světa posunulo o 22 míst. -Po ruské agresi proti Ukrajině vzaly západní země předpovědi o možné eskalaci na Balkáně zcela vážně. -Pro Ukrajinu to znamená, že diplomatické řešení rusko-ukrajinského konfliktu se stává složitějším a globalizovanějším. -Prosím, povzbuďte mě vlídným slovem, jste pro mě jako uklidňující..... -V mém volt partnerovi je vyznačeno, že se moje žádost projednává. -Odeslal jsem ji prostřednictvím této mobilní aplikace -Na mapě jsem se podíval na vzdálenost k nim, která byla pouhé 3 kilometry. -Šel bych tam pěšky, ale musím jet z Brna)))) Hledám vhodnou trasu -Ano, můžete se na mě spolehnout, děkuji! -Pochopil jsem menu a dokonce jsem si ho objednal -Chci si nechat udělat účes zvaný čepice. -Koupit látky pro vyšívání, plátno, vyšívací nitě -Říkal jste, že máte rozbitou pračku. Vezměte si ji pro sebe. Opravdu ji také potřebuješ. -Jinak se tam nedostanu( -V případě zpožděných dokladů zazvoňte. -Estonský parlament dnes, 14. března, přijal výzvu adresovanou parlamentům členských států EU, NATO a dalších zemí v souvislosti s ruskou agresí proti Ukrajině. -Informoval o tom deník Evropeiska Pravda s odvoláním na ERR. -Estonský parlament proto prý mimo jiné vyzval členské státy OSN, aby okamžitě přijaly opatření ke zřízení bezletové zóny a zabránily tak masovým obětem mezi civilním obyvatelstvem Ukrajiny. -Parlament vyzval zákonodárné sbory všech zemí, aby přijaly prohlášení vyzývající jejich vlády k podpoře zavedení dalších sankcí proti Ruské federaci a Bělorusku. -Estonští poslanci vyzvali k okamžitému zavedení rozsáhlého obchodního embarga vůči Ruské federaci a Bělorusku, které by omezilo možnosti agresorských států vést válku. -Estonský parlament také vyzval státy světa, aby uzavřely svůj vzdušný prostor a přístavy pro ruská letadla a lodě. -Kromě toho Parlament vyzval členské státy EU, aby podpořily oficiální žádost Ukrajiny o status kandidátské země EU, a vyzval k vypracování plánu pro dosažení členství Ukrajiny v NATO. -Jak již dříve informoval server Ukrainian News, Zelenskyj 4. března ostře kritizoval odmítnutí NATO uzavřít nebe nad Ukrajinou. -Již dříve tajemník NSDC Oleksij Danilov požádal mezinárodní partnery a NATO o poskytnutí bojových letounů a prostředků protivzdušné obrany. -Z nějakého důvodu jsem si myslel, že je pondělí. -Kromě toho mám zájem s vámi komunikovat. -Nepracují telefonicky ani během velikonočních svátků? 
-Tohle se dá vyřídit po telefonu kdykoli, že? -Děkuju, něco potřebuju, ale nemůžu nikam jít, musím vařit a učit se slovíčka... můžeme jít zítra v půl jedenácté... v jedenáct máme kurz. -Dobře, pokusíme se nás v úterý vzít na sociální službu. -Vzorový životopis prodejce -Životopis hraje důležitou roli při hledání zaměstnání. -Prostřednictvím životopisu se zaměstnavatel v podstatě seznámí s potenciálním zaměstnancem, a pokud ho zaujmete, můžete očekávat pohovor, kde budete mít všechny šance získat práci, o kterou stojíte. -Ne všichni vědí, jak správně napsat životopis, a výsledkem jsou neinformativní a nestrukturované dotazníky. -V životopise prodejce nestačí uvést předchozí zaměstnání a vzdělání - zaměstnavatel potřebuje znát vaše schopnosti. -Jak tedy napsat životopis pro práci prodejce? Podívejme se, co potenciální zaměstnavatel potřebuje. -A k tomu potřebuje zaměstnance, který to bude umět: -Nabídnout kupujícímu kvalifikovaný produkt a pochopit, co přesně se prodává. -Buďte čistotní a zdvořilí. -Prodejce by měl vypadat upraveně, umět se správně chovat a být zdvořilý. -S takovými lidmi je mnohem příjemnější komunikovat. -Soustřeďte se. -Při práci prodejce se cení dochvilnost a pozornost věnovaná detailům. -To je důležité při vystavování faktur, generování žádostí atd. -Zajímejte se o růst prodeje. -Každý zaměstnavatel je rád, když se zaměstnanec chová k jeho firmě jako ke své vlastní. -Schopnost udržet si zákazníky a vyjít jim vstříc - tyto vlastnosti by se měly odrazit ve vašem životopise. -Při posuzování vašeho životopisu bude zaměstnavatel věnovat pozornost především vašim schopnostem a jejich vztahu k jeho potřebám. -To však neznamená, že vaše vzdělání a pracovní zkušenosti nejsou pro zaměstnavatele důležité. -Další výhodou je specializované vzdělání a pracovní zkušenosti. -Proto uvádíme podrobný seznam všech předchozích zaměstnání v chronologickém pořadí a uvádíme profesní povinnosti. -Pokud jde o obchodní úspěchy, je lepší zmínit ty, které sami považujete za vynikající. -Životopis byste měli zakončit výčtem svých osobních vlastností. -Jak vidíte, na psaní životopisu prodejce není nic složitého. -Dobrý den, děkujeme dětem za koledu, u nás takový zvyk není, tak jsme nevěděli, dali jsme dětem jen čokoládu a sušenky, možná jsme jim měli dát něco jiného. -Dobře, jen musím dětem vytisknout omalovánky. Takže to bude v pořádku. Kdy můžu přijít? -Po válce ani jeden. Ani jeden, který by vás nenapadl. -Odpovězte mi přímo. -Chceš být se mnou? -Co si o mně myslíš? -Čím vás mohu přesvědčit? -Nerozumím tomu. -Musím jít do té vlády práce -Je váš otec jediný, kdo umí rusky, nebo je v rodině ještě někdo další? -Děkujeme. Jsem velmi nepohodlný, pracuji v neděli a už to nemohu dělat. -Vezměte mě ke své vládě -Mohu za vámi přijet a rozhodneme se na místě. Stejně jsem tu, abych vám pomohl. -Zlobíš se na mě? -Musím si vzít oblečení, které mi dali? -Chci jít za tebou. Pevně tě obejmout. A políbit tě. -Velmi chutná jablka, o jakou odrůdu se jedná? -Klima je téměř stejné. -Moje rodina je v pořádku, mám matku, tři bratry a manžela. -Jeden bratr je ve válce v Mariupolu, druhý v Záporoží a dnes byl můj manžel převelen z Kyjeva do Charkova. -Každý den mi volají a říkají, že je všechno v pořádku, ale kdo ví, stejně mi neřeknou pravdu. -Dobrý den, ano, bylo by dobré, kdybyste mohli dnes, jak jste psali v 17:00. -Budeme se snažit vybrat co největší částku do konce týdne, abychom vám ji mohli vrátit. -Jaký je váš dnešní den? Chceme tě vidět a už teď se nám po tobě stýská. 
-Jejich škola má dobře organizované dálkové studium, každý den od 7 do 12 hodin, takže může studovat z domova. -Určitě vám nevadí, že přijdu? -Možná mě dnes jen nechceš vidět, tak mi to řekni, pochopím to.) -Pokud se pro mě něco najde, bude to velmi dobré. -Musíte si vymyslet vlastní heslo -Viděli jste moje fotky v plavkách? -Co dělají dívky po návratu ze školy? -Ukrajina oznámila, že Česká republika poslala na Ukrajinu tanky T-72 a bojová vozidla pěchoty. -Dobré ráno, za 20 minut si k vám přijdu pro tiskárnu. -Staří i méně staří přátelé, přátelé, které osobně neznám, přátelé duchem i myslí. -Nyní se také potýkáte s těžkými časy. -V uplynulém měsíci jsem byl v kontaktu s mnoha z vás. -Váš život, který nikdy nebyl jednoduchý, se obrátil vzhůru nohama, stejně jako život každého Ukrajince. -Mnozí z vás utíkají z Ruska. -A mnozí z vás přiznali, že se cítí provinile a stydí se za jednání své země vůči svým sousedům. -Kvůli tomu, čemu je kvůli vám vystavena Ukrajina. -Někteří z vás, aktivistů, byli dlouho ohrožováni a připravovali jste se na rozhodující úder. -Začátkem března jsem napsal Alexandru Čerkasovovi, svému starému příteli z Memorialu. -"Povím ti to o něco později," odpověděl Saša lakonicky jako obvykle, "teď, po prohlídce, procházíme ruinami. -Ostatní - kulturní osobnosti, umělci, kritici, spisovatelé - jsou šokováni náhlým zhroucením vašeho křehkého světa. -Nikdo z vás nemá rád Putina a jeho režim zlodějů a fašistů, většina z vás je nenávidí. -Ale řekněme si upřímně: s výjimkou několika málo z vás - těch, kteří pracovali v Memorialu, Nové gazetě, Echu Moskvy, Meduze, Navalného organizaci a na řadě dalších míst - kolik z vás udělalo něco, abyste se tomuto režimu postavili na odpor? -Kromě účasti na shromážděních, když se ještě konala. -Přečtěte si také sloupek ruského novináře "Pozdě se skrývat, pozdě mlčet". -A pokud ano, jste si jisti, že vaše pocity studu a viny nejsou jen abstrakcí? -Možná jsou způsobeny vaší dlouhodobou lhostejností k tomu, co se kolem vás děje, vaší apatií a pasivní spoluúčastí, která nyní musí být velkou zátěží pro vaše srdce a duši? -Nebylo tomu tak vždy. -V 90. letech minulého století bylo krátké období, kdy jste měli svobodu a demokracii do jisté míry - špinavou, dokonce krvavou, ale skutečnou. -Rok 1991 však nebyl o nic lepší než rok 1917. -Proč vždy, když konečně dojde k revoluci, skončíte s tak silným strachem z nepokojů, že hledáte spásu za zády cara, i když se jmenuje Stalin nebo Putin? -Bez ohledu na to, kolik lidí zabije, se v jeho blízkosti cítíte bezpečněji. Proč tomu tak je? -K chybám skutečně došlo. -Místo abyste zabavili a zveřejnili archivy KGB, jako to udělali Němci se Stasi, uložili jste své duše k Dzeržinského pomníku a umožnili KGB, aby se uložila, obnovila, reformovala a převzala moc nad zemí. -Když jste měli na výběr mezi vypleněním země a návratem komunistů, nebojovali jste za možnost třetí varianty - a pokorně jste přijali vyplenění. -V roce 1998 se vaše ekonomika zhroutila a to znamenalo konec masových shromáždění za větší sociální spravedlnost nebo proti válce v Čečensku. -Hlavní starostí bylo přežití. -Pak přišel Putin. Mladý, podnikavý, agresivní, slíbil potlačit teroristy a povzbudit ekonomiku. -Málokdo z vás mu na to skočil, ale buď jste ho volili, nebo jste se rozhodli nevolit vůbec. -Když začal opět srovnávat Čečensko se zemí, většina z vás přivírala oči. -Na ta léta si dobře pamatuji. 
-V té době jsem pracoval v Čečensku, kde jsem pomáhal obětem Putinovy "protiteroristické operace", a na vlastní oči jsem viděl ruiny Grozného, Katar-Jurtu, Itum-Kaly a dalších měst. -Občas jsem se o víkendech vracel do Moskvy a bavil se s vámi, mými přáteli. -Popíjeli jsme, tančili a někdy jsem se snažil vyprávět o hrůzách, kterých jsem byl svědkem: o mučení civilistů, o vraždách dětí, o vojácích, kteří prodávali těla mrtvých jejich rodinám. -Řekl jsi mi: Řekl jsi mi: "Brade, už nás nebaví to tvoje Čečensko." Na ta slova si vzpomínám velmi dobře. -Na to jsem reagoval rozhořčeně: "Přátelé, tohle není moje Čečensko, ale vaše Čečensko. -Je to vaše země, sakra, ne moje. -Jsem tu jen hloupý cizinec. -To vaše vláda bombarduje jedno z vašich měst a zabíjí vaše spoluobčany." -Ale ne, bylo to všechno příliš složité, příliš bolestivé a vy jste nechtěli nic vědět. -Následoval hospodářský rozmach v polovině devadesátých let, který byl způsoben rostoucími cenami ropy a Putinovou ochotou přihlížet, když část nakradených peněz zůstávala v kapsách střední třídy. -Mnozí z vás začali vydělávat slušné peníze, někteří z vás zbohatli, a dokonce i ti nejchudší z vás si koupili nové domy a našli lepší práci. -Ceny rostly, ale koho to zajímá? -Moskva slavnostně zářila a třpytila se leskem. -Když bylo zavražděno několik osobností opozice - Jurij Ščekočichin, Anna Politkovská, Alexandr Litviněnko a další -, mnozí z vás byli šokováni a zděšeni tím, co se děje. -Dál se však věci neposunuly. -Když Putin po dvou funkčních obdobích předal prezidentský úřad Medveděvovi a ujal se funkce premiéra, téměř jste tomu nevěnovali pozornost. -Když po několika měsících Medveděvovy vlády Rusko napadlo Gruzii, většina z vás to ignorovala nebo mlčela. -Kolik z vás jsem v následujících letech potkal na horských svazích Gudauri, na úpatí Kazbegi nebo v kavárnách a tureckých lázních v Tbilisi, zatímco část této země okupovala vaše armáda? -Musím přiznat, že my na Západě jsme toho také moc neudělali, pokud vůbec něco. -Nějaké rozhořčení, nějaké sankce, ale co na tom, že Rusko hrubě porušuje mezinárodní právo, když pokušení ruské ropy, plynu a domácího trhu je tak velké? -V Rusku se mi žilo dobře. -A to bylo po těžkých devadesátých letech to nejdůležitější. -Na konci roku 2011 jste se však, moji ruští přátelé, probudili. -Když si Putin opět vyměnil místo s Medveděvem a převzal prezidentský úřad jako předtím, mnozí z vás si mysleli, že je to příliš, a houfně jste přišli protestovat. -Navalného jméno se stalo pojmem, půl roku jste nevyšli z ulic a režim se konečně vyděsil, že ztrácí půdu pod nohama. -Poté udeřil zpět. -Nejprve se organizovaly alternativní akce, pak byly přijaty represivnější zákony a věznice se zaplnily lidmi. -Tisíce lidí byly uvězněny. -Někteří z nich dostali vysoké tresty. -"Co bychom mohli dělat?" -Slyšel jsem to často a slyším to dodnes. -"Stát je tak silný a my jsme tak slabí." -Podívejte se na Ukrajince. -Podívejte se, co udělali před dvěma lety. -Naštvaní na proruského prezidenta, který zradil jejich evropské naděje, jednou obsadili Majdan a už ho neopustili. -Sami si postavili stanové městečko a připravili se na rozhodnou obranu. -Když přišla policie a chtěla ho zdemolovat, začali se bránit klacky, tyčemi a zápalnými lahvemi. -Nakonec policie zahájila palbu. -Místo útěku však protestující zaútočili. -Mnozí zemřeli, ale zvítězili. -Janukovyč se stal uprchlíkem a Ukrajinci znovu získali demokracii, právo volit si své vůdce a vyloučit je, pokud svou práci nedělají dobře. -Putinovi se Majdan příliš nelíbil. 
-Byl to špatný příklad. -Proto využil všeobecného zmatku a zmocnil se Krymu. -Někteří z vás byli proti, ale bylo to málo platné. -A kolik z vás bylo nadšených! -Pokud vím, anexi podporovalo 91 % ruských občanů. -Najednou se odněkud vynořil nový mýtus a mnozí z vás, kteří Putinem a jeho bandou opovrhovali, najednou otočili o 180 stupňů a začali ho zbožňovat. -Je pro mě těžké najít důvod, protože jsme spolu hned poté přestali komunikovat. -Zbytek mých přátel většinou mlčel. -"Politika nás nezajímá," řekl jste. -A vy jste se zase schovali do knih, filmů, katalogů IKEA a parků, zbrusu nových po obnově, kterou zahájil starosta Moskvy v roce 2012, s jejich bingy, veřejnou Wi-Fi a hipsterskými kavárnami. -Je pravda, že Donbas je daleko, ale Moskva je tak krásná - a je stále lepší. -Sýrii jste téměř nevěnovali pozornost. -Byli tam přece teroristé, ne? -Dokonce i moskevský redaktor, který vydal mou knihu o Sýrii, ji v jednom rozhovoru kritizoval, protože se zdálo, že nemám ponětí, co se v Sýrii děje. -Alespoň jsem se tam vypravil a na vlastní oči viděl, jak státní odstřelovači chladnokrevně střílejí v ulicích Homsu do vrstevníků mých dětí. -Ze všech ruských občanů tam byli jen vaši vojáci, kteří v roce 2015 začali bombardovat tisíce civilistů a získávat zkušenosti pro další velkou válku. -Mnozí z vás jistě znají slova pastora Martina Niemillera: -"Nejdřív si přišli pro socialisty, ale já jsem mlčel, protože nejsem socialista. -Pak si přišli pro členy odborů, ale já jsem mlčel, protože nejsem členem odborů. -Pak si přišli pro Židy, ale já jsem mlčel, protože nejsem Žid. -Pak si pro mě přišli, ale nezbyl nikdo, kdo by mohl říct něco na mou obranu." -Kolik z vás mluvilo o Čečencích, Syřanech nebo Ukrajincích? -Někteří z vás to už udělali. -Drtivá většina však mlčela. -Někteří lidé, jako například Dmitrij Gluchovskij, Michail Šiškin, Michail Zygar, Maxim Osipov a další, nyní skutečně promlouvají. -Většina z nich si dovolí promluvit ze zahraničí, někteří zevnitř země, jako Marina Ovsjannikovová, riskují, že se dostanou do nového gulagu nebo se přidají k Navalnému. -Pokud jde o ostatní, sami nejlépe víte, v jaké zemi žijete. -Takže jistě chápete, že až se Putin vypořádá s Ukrajinci - nebo, což se zdá velmi pravděpodobné, pokud se mu to nepodaří - obrátí se na vás. -Všem vám, přátelé: těm, kteří statečně, ale většinou sami, vyšli protestovat a zatím vyvázli s krátkým trestem, ale brzy dostanou delší. -Tisícům z vás, kteří jste podepsali petice, vyjádřili svůj nesouhlas na sociálních sítích (i kdyby to byl jen černý čtvereček na Instagramu) nebo se ozvali v soukromých rozhovorech s kolegy v práci. -Doby, kdy se za pouhou anekdotu udělovalo 10 nebo dokonce 25 let vězení, nejsou tak vzdálenou minulostí - a nyní vás pravděpodobně čekají v budoucnu. -Kdo tedy bude mluvit za vás? Kdo zůstane? -Příklad Ukrajinců děsí Putinův režim ještě více než v roce 2014: dokazují, že se s ním dá bojovat. -A že ho inteligence, motivace a odvaha mohou zastavit, bez ohledu na to, jak ohromující může být jeho papírová převaha. -Málokdo v Rusku si to však uvědomuje, nebo si vůbec uvědomuje, že nějaká válka probíhá. -Ale vy, přátelé, dobře víte, co se nyní děje. -Čtete zahraniční zprávy na internetu, máte na Ukrajině přátele nebo dokonce příbuzné, se kterými jste v kontaktu. -A Putin ví, že to víte. -Buďte proto ve střehu. -Chápete, kam to směřuje. -Dobrý život výměnou za vaše mlčení skončil. 
-Vaše volby jsou výsměch, vaše zákony, kromě těch represivních, nemají větší cenu než papír, na kterém jsou napsány, vaše poslední svobodná média jsou pryč, vaše ekonomika se hroutí rychleji, než píšu, už nemáte kreditní karty, abyste si mohli koupit letenku do zahraničí, i když jsou ještě k dispozici lety. -Nyní se Putin nespokojí s vaším mlčením, bude požadovat váš souhlas, vaše podřízení. -A pokud mu nedáte to, co chce, můžete se buď pokusit nějak odejít, nebo budete zdrceni. -Pochybuji, že vidíte jinou možnost. -A přesto je tu ještě jeden. -Což nakonec povede k pádu tohoto režimu. -A možná, že za současných okolností se od vás bude vyžadovat méně, než si myslíte. -Přemýšlejte o tom. -Jiskra nebude vycházet od vás: vzhledem k ekonomickému kolapsu, který v Rusku hrozí, se nejspíš rozhoří v provinciích, v malých městech. -Až ceny prudce vzrostou a platy nebudou vypláceny, vyjdou do ulic lidé, kteří celé ty roky volili Putina, protože chtěli chléb a mír. -Putin to ví a těchto lidí se bojí mnohem víc než intelektuálů a střední třídy Moskvy a Petrohradu, tedy vás, přátelé. -Pokud však bude každé město pořádat shromáždění samostatně, jak se občas děje, nebude pro ně obtížné je potlačit individuálně. -Bude zapotřebí organizace a koordinace. Z davu bude třeba vytvořit masu. -Máte k dispozici úžasný kouzelný nástroj - internet -, který sice režim omezuje, ale který stále funguje a který lze nastavit téměř za všech okolností. -Navalného organizace byla poražena, ale je možné vytvořit jiné, neformálnější a decentralizovanější organizace. -Je vás mnoho, jsou vás miliony. -Moskevská policie zvládne v ulicích 30 000 lidí, možná sto tisíc. -Pokud získáme více než 300 000, bude ohromena. -Armáda bude muset být použita, ale bude tato armáda bojovat za Putina, když na to přijde? -Po tom všem, čím si na Ukrajině prošel, čím vším je vystavil? -Nebezpečí bude samozřejmě velmi velké. -Mnozí z vás pocítí pochopitelný strach, ti, kteří mají děti, se o ně budou bát. -A je to přirozené, normální. -Na tvém místě bych se také bála. -Na příkladu Sýrie a nyní Ukrajiny chtěl Putin ukázat, co se stane těm, kteří se odváží neposlechnout svého pána, kteří se odváží nejen požadovat svobodu, ale skutečně se ji pokusit získat. -Ale i když neuděláte nic, životy mnoha lidí budou stejně zmařeny. -Váš syn zavtipkuje v chatu počítačové hry a bude zatčen, vaše dcera vyjádří své rozhořčení na internetu a bude zatčena, váš blízký přítel udělá chybu a zemře ve vlhké cele pod policejními obušky. -To se děje již mnoho let a v budoucnu se to bude jen zhoršovat a zvětšovat. -Nemáte tedy na výběr. Pokud nic neuděláte, víte, jak to dopadne. -Jednejte v klidu, myslete strategicky a realizujte to. -Ano, vzali jsme ho domů a hned jsme ho odnesli tetě. -Byla velmi vděčná, plakala. -Řekl jsem jí o vás a o tom, jak pomáháte Ukrajincům. -V očích měla slzy. -Řekla, že kdyby to bylo možné, možná by tam byla stará deka a polštář. -Není to však nutné. Pouze pokud se to stane. Jsem vám také velmi vděčný. -Ale abych byl upřímný, je mi velmi nepříjemné dělat to pro peníze. -Udělal jsi pro mě hodně dobrého. -Mohu to udělat zdarma. -Právě mi volala paní z Damejidla, že mi poslali informace o dalších akcích e-mailem a já jsem nereagovala. -Mluvila česky a já jí tak rozuměl. -Ale nemám dopis od dameidlo na mém e-mailu -Pronajal jsem si auto pro 9 lidí, přijedeme s Marianou a vaší rodinou. -Hledám ubytování od května, nejlépe zdarma pro sebe, svou matku a dvě děti. -Nejlépe v blízkosti Karlových Varů -Možná už někoho najali, takže ho nezvyšují. 
-takže se ve svém těle cítím nesvůj. -Stačí to pro tuto chvíli? -Dobrý den, omlouvám se. Chtěla bych se zeptat, jestli nepotřebujete pracovníka. -Může mýt podlahy nebo nádobí. -Bohužel neumím ukrajinsky, ale potřebuji práci. -Jsem mladá a aktivní, je mi 21 let. -Vezmu jakoukoli práci. -Jsi chytrá a všechno bude v pořádku🙏 Odpočívej, nabírej sílu a krásu dobrým spánkem!!!🇺🇦❤️🇨🇿 -Poslal jsem matce druhou zprávu. -Umění pomáhá rozptýlit pozornost a vypnout -Bože, to je ale skvělá školka! Jsem vám tak vděčná, nemám slov, moc vám děkuji. Úřady jsou velmi сподобалось❤️ -Janko, doufám, že se ti daří dobře a že mi babička nebude nadávat. -Snažím se Sašu uspat... Potřebuju, aby se vyspal..... -O nic nejde -Ale to je blíže k době, kdy opustí Ukrajinu, takže víme jistě, že se tam dostanou. -Protože tam jde nyní o život každému. -Ve sklenicích se zavírat nedá, v dešti to nemá cenu), až nebude pršet, tak to zase zavřu🙂. -Mám jen obavy, jak s vámi budeme komunikovat. -Ale můžeme se na sebe jen podívat.... -Děti si mohou přijít hrát do sálu Valia's. -Už jsem ti psala dvakrát :)), ale dělám dvojité kolečko kvůli vlakům. -Jistě, můžeme to udělat po 17. hodině. -Opravdu nechcete přijít na minutu s dětmi a babičkou v klidu. -Chci, abys byl se mnou. Chci se probouzet vedle tebe. -Bylo to v mikrovlnné troubě, něco se tam pálilo. -O možnosti sponzorovat dětské obědy mi ředitel nic neřekl. -Poskytli mi následující platební údaje. -Za část března a za duben jsem již zaplatil. -Co kdybyste mi pomohli zaplatit za květen a červen? -Co musím udělat? -Mám se s touto otázkou obrátit na ředitele školy? -Nepochopil jsem je ani za dva měsíce, ani za měsíc. -Moje holky se zatím nikam stěhovat nemohou, takže jsem majiteli domu nepsala. -Chtěla bych vás požádat, abyste mi, pokud se dozvíte nějaké informace o bydlení, dali vědět. -Rád bych si pronajal byt se slušnou dívkou. -Pokud o nějakém víte, dejte mi prosím vědět. -Pokud jde o mě, nezklamu vás. -Předem děkujeme. -Maminka a tatínek musí být poslušní -Pocházím z Ukrajiny, nyní jsem v Plzni. -Před válkou na Ukrajině jsem 20 let pracovala jako návrhářka dámských a dětských oděvů z různých materiálů, umím také šít na speciálním zařízení a vyrábět VTO. -Mám velkou chuť pracovat ve svém oboru a ráda se podělím o své znalosti a získám nové zkušenosti v této oblasti. -Aby se však tento dokument vytiskl tak, jak jsem ho vytvořil, musíte si ho nejprve stáhnout do počítače. -Mělo by to vypadat takto. -Předsedkyně Evropské komise Ursula von der Leyenová informovala ukrajinského prezidenta Volodymyra Zelenského o čtvrtém balíčku sankcí, který by dnes mohla přijmout Rada EU v rámci písemného postupu. -Podle agentury Ukrinform to uvedl předseda Evropské komise ve svém příspěvku na Twitteru. -"Putinova válka je den ode dne brutálnější. -Právě jsem informoval prezidenta Zelenského o čtvrtém balíčku sankcí. -EU stojí na straně ukrajinského lidu," uvádí se v prohlášení. -Ursula von der Leyenová připomněla, že Evropská unie podpořila Ukrajinu poskytnutím makrofinanční pomoci ve výši 1,2 miliardy eur a humanitární pomoci ve výši 500 milionů eur. -Zelenskyj rovněž zdůraznil význam sankčního tlaku na Rusko. -"S předsedkyní Evropské komise Ursulou von der Leyenovou jsme jednali o podpoře EU Ukrajině v boji proti ruské agresi. -Zvýšení tlaku na Rusko je důležité. -Oceňujeme také významnou finanční pomoc. -Ukrajina pokračuje v pokroku na cestě ke členství v EU," napsal ukrajinský prezident na Twitteru. -Jak bylo oznámeno, 24. 
února 2022 zahájil ruský prezident Putin nevyprovokovanou válku proti Ukrajině. -Ruské jednotky začaly ničit ukrajinská města a obce pomocí raketometů, náletů a raketových útoků. -Ozbrojené síly Ukrajiny, jednotky územní obrany a celý ukrajinský lid odrážejí útočníky a způsobují jim značné ztráty. -Evropská unie spolu s klíčovými mezinárodními partnery uvalila na ruskou ekonomiku a ruské představitele a oligarchy včetně samotného Putina balíček sankcí, které budou v případě pokračování ruské agrese ještě přísnější. -Pak budu stahovat přes Ukrajinu přes VPN -Je dobře, že už jste dorazili 🙏🏻 A já jsem vám moc vděčná za příjemný večer a 🌹. -V naší zemi dostávají děti na konci školního roku vysvědčení, při narození a svatbě dostávají rodný list. -Chápu, že s tebou nemůžu zůstat, musím pracovat. -Vidíte, že máme něco společného, rádi experimentujeme s jídlem 😊. -A co ubytování, jehož fotografie jste nám poslal? -Nevím, jak mám s doktorkou mluvit, musí mi dát papír? -Děkuji, protože se nám tu moc nedaří. -Přes den bylo teplo a večer foukal studený vítr. -Výstava motýlů se mi moc líbila. -Motýli přistáli dětem na rukou a nohou. -Jaký byl váš den? -Nikdy jsem o vztazích nemluvil, ani jsem o nich nemluvil a žádné neočekával. -Prosím, řekněte mi, jestli to ubytování, které jste mi poslal, někdo pronajal nebo co? -Ruská armáda hrozí raketovými údery na Kyjev: to ještě nikdy neudělala -Konašenkova přímá řeč: "Vidíme pokusy o sabotáže a údery ukrajinských vojsk na cíle v Ruské federaci. -Pokud budou podobné incidenty pokračovat, ruské ozbrojené síly udeří na rozhodovací centra, včetně Kyjeva, od čehož ruská armáda dosud upouštěla." -Rusové od prvních dnů plnohodnotné invaze ruských okupačních sil na Ukrajinu podnikají raketové útoky na Kyjev. -25. února dopadly zbytky ruských raket sestřelených ukrajinským systémem protivzdušné obrany na obytný dům v kyjevské čtvrti Pozňaki. -26. února zasáhla ruská raketa výškovou budovu na třídě Valerije Lobanovského. -1. března Rusové zasáhli televizní věž v Dorohožiči u Babyn Jaru a zabili 5 kolemjdoucích. -2. března sestřelily ukrajinské systémy protivzdušné obrany ruské rakety letící na budovu ministerstva obrany nad Jižním nádražím. -14. března rozbily úlomky sestřelené okupační rakety dům a trolejbus v kyjevské Kurenivce. -18. března zasáhla ruská raketa obytnou čtvrť ve Vynohradaru, kde zabila jednu osobu a zranila 19 dalších, včetně čtyř dětí. -21. března ruské rakety zničily moderní nákupní centrum Retroville ve Vynohradaru a zabily nejméně 4 lidi. -Kyjevský starosta Vitalij Kličko 13. dubna zopakoval, že na návrat obyvatel, kteří byli evakuováni, je příliš brzy. -Armáda vysvětlila místním úřadům, že Rusové mohou stále podnikat raketové útoky na Kyjev a že na okraji hlavního města je mnoho min a nevybuchlé munice. -Hlavní zpravodajská služba ministerstva obrany 12. dubna varovala, že Rusové plánují sérii teroristických útoků na ruském území, aby obvinili Ukrajince a ospravedlnili brutalitu ruské armády vůči civilistům. -1. dubna explodoval ropný sklad u ruského Belgorodu, údajně v důsledku leteckého útoku ukrajinských vrtulníků. -Na začátku dubna místní úřady v Belgorodu prohlásily, že poblíž města údajně dopadl "ukrajinský granát". -Slyšeli jsme, že už chodíš do školy. -Protože se mi to tak nelíbí, nevydržím dlouho. -Ahoj Petře, děkuji za tvůj zájem, mně osobně se podařilo najít práci na částečný úvazek. -O ostatních členech rodiny zatím nevím, protože jsem byl dva dny v Praze a teprve včera večer jsem se vrátil do Brna. 
-Ano, mám klienta z Kyjeva, má nějaké nemovitosti v Brně a Rakovníku, dohodli jsme se, že práce bude souviset s jeho nemovitostí. -Tento majetek budu muset s největší pravděpodobností udržovat ve spolupráci s místními úřady a organizacemi. -Zítra budu vědět všechno -Co pro vás znamená připravenost na manželství? -Dobré odpoledne, máte krabici s hračkami -Vejde se do ní jen polovina, ale další sušička se mi tam nevejde. -Mluvila jsem s Volodymyrem o ubytování a ptala se ho, jestli mám počkat, až mi ho najdou, nebo se po něm poohlédnout sama. Dámy, které s námi bydlí, mi řekly, že máme jít na stanici metra Muzeumna, kde je instituce, která pomáhá najít ubytování. -A nyní se na vás můžeme na chvíli obrátit s jednou otázkou. -Není třeba, jsem prostý člověk -Nad stránkami Україною🇺🇦 dnes začala létat letka dronů чеських🇨🇿. -Téměř 50 profesionálních vrtulníků🚁 bude použito k odhalení a některé z nich k přímému zničení nepřátelského útočníka💥. -Dva zakarpatští a dva čeští bratři byli předevčírem díky členu mukačevské městské rady Volodymyru Labutenkovi posláni na Ukrajinu. -Dnes jejich přátelé z Mižhirje v čele s Vasilem Ščurem, předsedou vesnické rady a koordinátorem https://www.facebook.com/examplehere101/, dopravili drony na místo určení: přijaly je speciální jednotky ukrajinských vojsk v různých městech, kde právě probíhají vojenské operace. -Aby bylo zajištěno co nejefektivnější a nejbezpečnější využití vrtulníků, zůstane jejich umístění prozatím utajeno. -Čtyři přátelé, kteří pomáhají ukrajinské armádě z České republiky, se nehodlají zastavit a tvrdí, že v příštích dnech pošlou na Ukrajinu mnoho dalších zajímavých "dárků", které poslouží ke zničení ruského okupanta a záchraně životů ukrajinských hrdinů. -Myslím, že Česká republika začala vydávat víza po 22. březnu. -Do té doby se dávala takováto razítka a ta se pak přirovnávala k vízům. -Obrátila jsem se na Caritas, nabídli mi dočasné ubytování na týden nebo dva, ale já potřebuji trvalé bydlení, kde bych mohla být se svým dítětem, a proč zbývá stále méně místa? -Nabízím práci, úklid koupelny a WC, mytí 3 oken, utírání prachu. -Práce zabere přibližně 3-4 hodiny. Částka 800 UAH je jednorázová. Od 19.04 úterý, od 14.00 hod. -Ještě jsem se k tomu nedostal. -Formální názvy jsou bezprostředně za pomlčkou, lomítka jsou za lomítkem a zkratky používané v rozvrhu jsou v závorkách. -Je mi líto, ale v tomto životě se nemohu na nikoho spolehnout. -Ale nepřestanu tě milovat, jsi pro mě vším. -Tuto půjčku vyřizuje finanční poradce. -Trvá to tedy několik dní. -V Brovarském, Vyšgorodském a Bučském okrese Kyjevské oblasti, které byly osvobozeny od okupantů, bude vyhlášen dvoudenní zákaz vycházení. -Pavljukova přímá řeč: "V osadách Brovarského, Vyšhorodského a Bučského okresu, které byly pod ruskou okupací a které osvobodily ukrajinské obranné síly, byl zpřísněn zákaz vycházení! -Omezení budou v těchto městech a obcích platit od 2. dubna od 21:00 do 5. dubna do 6:00." -Podrobnosti: Podle náčelníka regionální vojenské správy je v této době přísně zakázáno pohybovat se v ulicích osad a na jiných veřejných místech, dopravními prostředky a pěšky. -Obyvatelé mohou během poplachu vycházet ven pouze do úkrytu. -Tato omezení jsou zaváděna za účelem odstranění následků ruské agrese - vyklizení a odminování území. -Omeljanuk vyzval lidi, kteří tyto oblasti opustili, aby se prozatím nevraceli domů. -Ve zbytku Kyjevské oblasti bude zákaz vycházení platit každý den od 21:00 do 6:00. 
-Dobrý den, hledám práci pro svou ženu ve zdravotnictví, má titul zdravotnický záchranář nebo zdravotní sestra. -Ale je dobře, že se vám vše daří, jinak to ani nejde! -Ahoj, jak se máš? Co jsi dnes dělal? -Musím odejít. -Mohu dát bílé povlečení vyprat, vrátit se po 14. hodině, vyžehlit ho a pověsit? -Ministerstvo školství a vědy Ukrajiny již tento podvrh vyvrátilo. -Všechno má své klady a zápory -Chci, abys byla šťastná, chci, abys byla v bezpečí, a chci být konečně s tebou, abych tě mohl obejmout. -Ráno tě políbit, říct ti, jak vypadáš. -Velmi pozitivní, taková úžasná školka, tak milé učitelky. -Děláte toho pro nás tolik, je to neuvěřitelné, moc vám děkuji вам❤️ -Existují různé druhy muslimů. Jsou fanatici a jsou praví věřící. -Význam džihádu si různí muslimové vykládají různě. -Mám sen, že válka skončila a já se mohu vrátit domů. -Jak mohu změnit svůj tarifní balíček T-mobile na 4 gigabajty za 249 Kč? -Hledám pro sebe a svou dceru byt na 3-4 dny pokaždé, když přijedeme do České republiky a odevzdáme pasy pro vízum do Kanady. -Ve Varšavě je ve vízovém centru hodně lidí a není tam žádná online fronta. -Chtěla jsem přijet na jeden den, ale je to daleko, cesta vlakem trvá dlouho a pro dítě to bude náročné. -Proto jsem se rozhodl hledat byt v Praze nebo v její blízkosti. -Ruský prezident Vladimir Putin vysílá vojáky z Vladivostoku a Petropavlovska-Kamčatského do Běloruska, aby doplnil své ztráty ve válce na Ukrajině. -Pronásleduje mě téma drancování Rusů. -Novinář televizního kanálu 1+1 Natalia Nagorna zveřejnila 12. dubna ráno video, které jí předali vojáci 36. námořní brigády z Mariupolu. -Vojáci v něm říkají, že jim nezbyly žádné zbraně a že mají horu zraněných. -Řekli, že "je to otázka hodin". -Ještě předtím, 11. dubna, zveřejnila 36. samostatná brigáda námořní pěchoty, která se podílí na obraně Mariupolu, výzvu Ukrajincům v ruštině. -V prohlášení se uvádí, že 11. duben může být pro obránce Mariupolu poslední bitvou a že ukrajinské vojenské velení se s vojáky již dva týdny nespojilo. -Sociální média vyjádřila skepsi, že prohlášení bylo zveřejněno v ruštině. -Vrchní velitel ozbrojených sil Ukrajiny Valerij Zálužnyj ujistil, že velení je v kontaktu s obrannými silami v Mariupolu a že podrobnosti obranné operace by neměly být předmětem veřejné diskuse. -Rusové pohrozili, že zablokují ukrajinské vojáky na území závodu Azovstal v Mariupolu a použijí proti nim chemické zbraně. -Večer 11. dubna shodili ruští okupanti na Mariupol jedovatou látku neznámého původu. -Podle vůdce Azova Andrije Bileckého použila ruská armáda chemické zbraně na závod Azovstal, který je v držení bojovníků Azovského pluku. -Bojím se, že to nestihneme. Je tu spousta zavazadel. -Je tu ještě jedna osoba, která vám ještě nenapsala, Natalia, 54 let.Napište prosím přesnou adresu a kolik kurzy stojí. -Můj manžel vám ho může přivézt zítra nebo v sobotu. Pokud vám to vyhovuje. -Od samého začátku jsme opravdu nechtěli, abyste získávali finanční prostředky, aby si lidé nemysleli, že o tom spekulujeme! -Máte košík? Rádi bychom vás o něj požádali na velikonoční neděli. -Ano. Velikost je určena pro věk 10-11 let. Ale naše velikosti se neshodují. Vše se musí změřit -Obecně je to v cizí zemi opravdu těžké a každý den se mi chce domů víc a víc. -Paní Agato, pokud chcete tuto fotografii, vezmu si ji. -Děkujeme, že se nám snažíte co nejvíce zpříjemnit život. -Budu na vás vzpomínat do konce života. -A budu vzpomínat a usmívat se ☺️❤️❤️❤️ -Můžete pro nás připravit dokumenty - pracovní smlouvu. -2. 
Měli byste zájem o společné aktivity s českými sousedy, přáteli, komunitou? -Pokud ano, jaký typ a druh. -Pokud se nemýlím, slavíte dnes Velikonoce, že? -Ahoj Natalie, zalévej prosím květiny venku u vchodových dveří. Děkuji. -Chci se něčím zabývat -nebo ji nemusíte podávat na základě předpisu? -Někdy jsem požádán, abych byl modelem -Nevadí, když je po Velikonocích. -Ano, samozřejmě, bylo by skvělé trávit čas společně. -Vysoká škola, zkráceně VŠ, neoficiálně univerzita[1] (před přijetím ukrajinského zákona "O vzdělávání"[2] se používal termín vysokoškolská instituce a také zkratky VNZ[3], vuz[4]) je samostatný typ instituce, která je právnickou osobou soukromého nebo veřejného práva, působí na základě licence k výkonu vzdělávací činnosti v určitých stupních vysokoškolského vzdělávání, vykonává vědeckou, vědeckotechnickou, inovační anebo metodickou činnost, zajišťuje organizaci vzdělávacího procesu a poskytování vysokoškolského a postgraduálního vzdělávání fyzickým osobám s přihlédnutím k jejich povolání, zájmům a schopnostem[5]. -Nápověda k vyhledávání knih v češtině -Když má dítě narozeniny, přinese džus nebo ovoce pro celou třídu, něco pro všechny děti? -Bezpečnostní složky Lukašenkova režimu zadržely tři Bělorusy ve věku 27 a 28 let, kteří se podíleli na zničení dvou reléových skříní signalizačního systému u Osipoviči, který zajišťuje pohyb vlaků. -Zdroj: "Radyje Svaboda", centrum pro lidská práva "Viasna", vedoucí kriminální policie ministerstva vnitra Gennadij Kazakevič, citovaný státní tiskovou agenturou "BelTA". -Ministerstvo vnitra Běloruska oznámilo, že v noci na 30. března byli za násilné podpory speciálních jednotek SOBR zadrženi tři obyvatelé Bobrujsku, jeden z nich byl zraněn. -Podle Vyasny se muži aktivně bránili zatčení a pokusili se o útěk. -Bezpečnostní složky použily zbraně. -Jeden ze zadržených byl zraněn a je v lékařském zařízení. -Ostatním byla na místě poskytnuta lékařská péče. -Šéf kriminální policie běloruského ministerstva vnitra Gennadij Kazakevič prohlásil, že "teroristické činy na běloruské železnici budou tvrdě potlačeny" bezpečnostními složkami za použití zbraní. -6. dubna vyšlo najevo, že 30. března policie zadržela dalšího zaměstnance Běloruských železnic, pracovníka baranavičské pobočky Běloruských železnic a správce tematických zdrojů Běloruských železnic na serverech VKontakte a Odnoklassniki Valentina Samasjuka. -Kde se nyní nachází a v jakém je stavu, není známo, píše @belzhd_live. -Koncem března bylo v Bělorusku zadrženo nejméně 40 železničářů za sabotáž. -Shrnutí: Skupina BYpol, kterou Lukašenkův režim prohlásil za extremistickou, nadále vyzývá Bělorusy, aby v rámci plánu Peramogha prováděli sabotážní útoky na železniční dopravu a infrastrukturní zařízení v Bělorusku. -Běloruské "železniční partyzánky" se tak snaží vzdorovat ruské agresi proti Ukrajině. -Jak víte, Rusko posílá přes Bělorusko na Ukrajinu bojové síly a vybavení. -Ano, samozřejmě můžeme přijít, děkujeme! -Jsou otevřené malé obchody s potravinami? Potřebuji koupit chleba a další potraviny. -Když si můžete přinést věci k sobě -Všichni tvoji přátelé budou vědět, že mluvíš se mnou. -Dnes mi má také jeden muž zavolat, jestli pro mě něco našel. -Lásko, domluvme se na jedné aplikaci, kam budeš psát 😇😍. -Nevím, protože mi zatím nepřišla žádná zpráva z banky a kartu jsem si nevzala a děti si smazaly bankovní aplikaci, takže mě opravdu naštvaly. -Podle Tarase Chmuta by se odpalovací stanoviště Kalibr na Ukrajině mohla nacházet v Černém a Kaspickém moři, v Rusku a na Krymu. 
-Těchto raket mají relativně málo. -Nejsou to desítky tisíc kusů, možná jen tisíce, ale bylo jich spuštěno mnoho set," zdůrazňuje Chmut. -Údajně zničili mimo jiné podzemní sklad ukrajinských raket a letecké munice v Děljatynu v Ivanofrankivské oblasti. -Kinžal je letecká varianta systému Iskander, která byla ruské armádě dodána v roce 2017. -V roce 2018 Rusko oznámilo zahájení vývoje nového systému Kalibr-M s doletem 4 500 km. -Ukrajinské ozbrojené síly obklíčily města Irpin a Bucha a vesnici Gostomel u Kyjeva, zatímco ruští okupanti nadále ostřelují obce Makariv, Bucha, Irpin a Dmytrivska. -Ukrajinská rozvědka uvádí, že Sýrie selhává ve svém plánu zapojit bojovníky do války na Ukrajině na straně Ruska. -Podrobnosti: Uvádí se, že 22. března se uskutečnila schůzka mezi velitelem 8. brigády v jihosyrské provincii Dera plukovníkem Nasimem Abu Irrou a generálem ruských ozbrojených sil Alexandrem Žuravlevem (který slouží jako velitel ruské skupiny v jižních provinciích Sýrie). -Syrský plukovník však jasnou odpověď neposkytl. Místo toho slíbil, že se po konzultacích spojí "s dalšími představiteli vedení 8. brigády". -Co se stalo předtím: Ruské ministerstvo obrany od začátku ruské války proti Ukrajině letělo do Sýrie dvakrát. -Je známo, že ruští agenti se v současné době také snaží vyjednat nábor žoldáků z 16. brigády Syrské arabské republiky. -Generál také dodal, že Kaliningradská oblast nemá žádný vojenský význam. -Jak bylo oznámeno, polský prezident Andrzej Duda reagoval na ruské hrozby a prohlásil, že Polsko je mírumilovná země, ale v případě útoku se bude bránit. -Vedoucí ruské delegace Vladimir Medinskij se domnívá, že Ukrajina v Istanbulu deklarovala připravenost splnit "principiální požadavky" Ruska, přičemž o stažení vojsk neřekl nic a dal najevo, že Kreml nebude dělat kompromisy ohledně Krymu a Donbasu. -Podrobnosti: Medynskij tvrdí, že Ukrajina "na papíře" vyjádřila ochotu vzdát se svých aspirací na členství v NATO a zbraní hromadného ničení a souhlasit s tím, že vojenská cvičení na Ukrajině budou vyžadovat souhlas Ruska jako "garanta bezpečnosti". -Rád bych zdůraznil, že náš zásadový postoj ke Krymu a Donbasu se nemění." -Určitě se o to pokusíme, ale pochybuji o tom. -Můj telefon je rozbitý. Nenabíjí se. Do úterý budeme v kontaktu přes Alexejův mobil. -Můžete tam jezdit každý víkend a nenudit se. -K tomu poklopu se nedostaneme -Na generála Čapka se stojí velmi pomalá fronta. -Chápu, takže si mohu připravit materiály v této oblasti. -Internet opravdu potřebuji k práci -Máte mlýnek na maso na výrobu mletého masa? Máme, ale nefunguje. -Zelensky: Mariupol je srdcem války, pokud přestane bojovat, budeme mít slabé pozice -Prezident Volodymyr Zelenskyj se domnívá, že bitva o východní Ukrajinu, a zejména o Mariupol, rozhodne o průběhu války - a pokud tam budou ukrajinské ozbrojené síly poraženy, Rusové mohou od jednání upustit a znovu obsadit deokupovaná území. -Přímá řeč: "Mariupol je dnes srdcem této války. -Je to boj - bojujeme, jsme silní. -Pokud přestane bojovat, budeme mít slabší pozici. -Oni (obránci města - pozn. red.) jsou lidé, kteří přitáhli zpět velké množství nepřátel. -Čím silnější bude naše pozice v Mariupolu, tím silnější budou naše pozice na východě státu, v oblasti JFO, a pokud budou silnější, bude pro nás jednací stůl blíže a budeme mít výhody v dialogu s Ruskou federací. -Pokud je naše situace ve všech těchto oblastech slabá, nemusíme být schopni se setkat. -Protože pak Rusko podnikne všechny ty kroky, které by mohly vést k návratu i do měst, která jsme nyní deokupovali. 
-Mohou se do toho pustit i oni. -Pak bude naše pozice při jednáních slabší a možná nebude zajímavá ani pro ruskou stranu. -Bohužel uvádíme. -Věříme v náš výsledek, v naše vítězství...". -Podrobnosti: Zelenskij také řekl, že po mučení Ukrajinců je těžké vyjednávat, ale "nesmíme ztratit příležitost k diplomatickému řešení, pokud ji máme". -Přímá řeč: "Lidé v každém případě přijmou mír, protože chtějí, aby válka skončila. -Za našich podmínek, za podmínek nezávislosti Ukrajiny, ale... každá rodina něco ztratila - a nemyslím si, že je uspokojí jakýkoli mír za jakýchkoli podmínek. -Pokud však nemluvíme o emocích, musí válka skončit mírem, jinak si vyžádá miliony obětí. -A i tam, kde je obětí milion, vše skončí dříve, než válka skončí. -Ano, musíme bojovat - ale v zájmu života. -Nemůžete bojovat za prach, když už není nic, žádní lidé." -Protože jsem si vlastně pořád nejistá svými schopnostmi a taková slova potřebuju! -Musím si objednat oběd, nebo si ho mohu vzít z domova? Musím si tyto dokumenty vytisknout, nebo je vytisknou na místě a já je mohu podepsat? -Dívám se na inzeráty a mnohé jsou již obsazené. -Jsem velmi vděčný za pomoc lidem. Pokud můžete, pomozte prosím i vy nám. Najděte levné ubytování. -Nyní jsme ve Slapu, naši přátelé nás na hodinu schovali. -Moje rodina: manžel, dcera 6 let, syn 11 let. -Muž si již našel práci řidiče. -Jsem masér a rehabilitační terapeut a hledám práci. -Aby děti mohly chodit do školy a já do práce, potřebujeme bydlení blízko civilizace. -Je to pro zvýšení sebevědomí -Je to krásný dům a umíme si ho sami velmi dobře zařídit. -Nejšťastnější budu, když budu s tebou. -Když se probudím vedle tebe. -Políbím tě a řeknu: "Dobré ráno, lásko, moc ti to sluší." -Byli jste první, kdo opustil vztah, nebo jste zůstali pozadu? -Bude vám vyhovovat jít se mnou do banky v pondělí? Nebo jiný den. -Ach, jak je to složité :) -Na Ukrajině pracují mobilní operátoři 24 hodin denně, každý den, bez ohledu na svátky, po telefonu nebo online :) -Zítra ráno jdu do práce asi v 6:00-6:20 a odpoledne budu doma. -Pokud máte čas, pokusíme se tarifní plán změnit zítra večer. -Jaké jsou vaše celkové dojmy z prvního týdne? -Vyjadřuji upřímnou soustrast rodině Brenta Renauda, který zemřel při dokumentování bezohlednosti a zla páchaného Ruskem na obyvatelích . -Kéž Brentův život a jeho oběť inspirují svět k boji za síly světla proti silám temnoty. -Pokračující jednání s prezidentem 🇵🇱 @AndrzejDuda, premiérem 🇱🇺 @Xavier_Bettel a premiérem 🇮🇱 @naftalibennett. -Vyměnili jsme si informace o společných krocích - našich i našich partnerů - na pozadí ruské agrese. -Dohodli jsme se na dalších opatřeních. -Diskutoval jsem s předsedou Evropské komise @vonderleyen o podpoře 🇺🇦 pro 🇪🇺 v boji proti ruské agresi. -Zvýšení tlaku na Rusko je důležité. -Oceňujeme také významnou finanční pomoc. -Ukrajina pokračuje v pokroku na cestě k členství v EU. -Jednal s premiérem 🇬🇷 @kmitsotakis . -Informoval o pokroku v boji proti ruské agresi. -Oceňujeme obrannou a humanitární podporu 🇬🇷. -Zdůraznili, že je třeba zajistit fungování humanitárních koridorů, zejména v Mariupolu. -Diskutovali jsme o pohybu 🇺🇦 směrem k členství v EU. -Další mezinárodní jednání. -Diskutoval jsem s prezidentem 🇪🇺 za @eucopresident posílit finanční podporu 🇺🇦 a sankční tlak na agresora. -Zvláštní pozornost věnuje dalším jednáním o členství 🇺🇦 v #EU. -Rozhovor s premiéry 🇬🇧 @BorisJohnson a 🇨🇿 @P_Fiala . -Hovořili jsme o boji obyvatel 🇺🇦 proti ruské agresi, o zločinných útocích Ruska na civilní obyvatelstvo. 
-Poděkoval partnerům za jejich důležitou podporu. -Oceňujeme to. #StopRussia -Dnes už nemohou existovat žádná polovičatá řešení ani polovičaté tóny! -Existuje jen černá a bílá, dobro a zlo! -Buď se postavíte za mír, nebo podpoříte krvavého ruského agresora při vraždění ukrajinských dětí a žen. @Microsoft, @Oracle, @SAP, přestaňte podporovat své produkty v Rusku, zastavte válku! -Prezident Zelenskyj osobně navštívil zraněné vojáky v místní nemocnici. -Ujistil je, že vítězství přijde, a v posteli jim předal státní vyznamenání, čímž jim zvedl morálku. -Upřímně řečeno, myslím, že už si jeho vedení nemohu vážit. -Jednal s prezidentkou 🇸🇰 @ZuzanaCaputova . -Jménem lidu poděkoval 🇺🇦 za podporu v boji proti ruské agresi. -Zprávy o zločinech ruské armády proti civilistům 🇺🇦. -Musíme je zastavit. -Diskutovali jsme o otázce členství v EU. #StopRussia -Nezapomeň, že jsi mi slíbil, že mi pomůžeš, jakmile budeš mít příležitost, ale nedáš poslední 🤫😟😉🙃. -Máte zítra práci? Chcete se sejít po 16. hodině? -Používání služebního vozu a náhrada výdajů -Děkuji, něco podobného jsme se již naučili -A pak se musíte zeptat... záleží také na hmotnosti a velikosti... -Od školy jsem zatím nic neobdržel, mám pouze potvrzení o zaplacení. -Objednal jsem si zásilku z Německa u společnosti DHL. -Včera mi měla být doručena, ale nikdo mě nekontaktoval a zásilka nebyla doručena a na stránkách DHL se píše, že příjemce nebyl nalezen. -Proto byla zásilka odeslána na poštu a byla uvedena adresa této pobočky. -Jak ji mohu vyzvednout? -Přemýšleli jste někdy o tom, jak vše ovlivňuje náš mozek, jak vytváří nová nervová spojení? -Vytváření návyků? -Myslím, že je to správné rozhodnutí, méně je více, a můžete těmto lidem poskytnout kvalitní péči, než když je tam mnoho lidí. -Od pátku jsem líný -Nedělám nic jiného, než že hodně jím a spím 😂 (a piju hodně vody ☝🏻). -Pokud budu muset jít zítra k lékaři, půjdu s vámi, pokud to bude možné. -Mám další analýzu, která dokazuje. -a abyste nemuseli jít znovu. -Pokud mi budete chtít ještě někdy napsat, budu čekat. -Dobře, napište mi, až se vám to bude hodit 😘😘 -Děkuji za uspořádání, můry jsou krásné. -Máš dobré sny? -Děkujeme, máme všechno. A vždycky čekáme na vaši návštěvu! -Jedná se o drobné podnikatele, kteří si ve svých provozovnách a obchodech zřídili dobrovolnické centrály. -Jsou to traktoristé, kteří skutečně jezdí na pole pod palbou, protože je čas setí. -Jedná se o řidiče autobusů, kteří souhlasí s tím, že budou jezdit na dočasně okupovaná území v humanitárních konvojích, aby doručili pomoc a odvezli lidi. -Jsou to hrdinní průvodčí, kteří beze strachu cestují do válečné zóny, uklidňují a pomáhají uprchlíkům ve vagonech a na mírových stanicích pomáhají dobrovolníkům nakládat do vagonů humanitární pomoc. -To jsou ti, kteří na začátku války stáli na žitomyrské dálnici a trpělivě obsluhovali vyděšené a nervózní lidi. -Jedná se o pracovníky komunálních služeb, kteří pod palbou odstraňují odpadky, opravují vodovodní potrubí a elektrické vedení, aby lidem zajistili základní potřeby. -Jsou to lékaři a zdravotní sestry, kteří 24 hodin denně zachraňují lidi, aniž by si na cokoli stěžovali, a ve svém volném čase také dobrovolně shromažďují lékárničky pro první linii. 
-Pocházíme z Doněcké oblasti, mám lékařské vzdělání, 27 let praxe, práce na neurologickém oddělení, masáže, fyzikální terapie pro dospělé a děti, jakékoliv lékařské manipulace..speak ruský a ukrajinský..pro zbytek informací volejte +420-464-548-072 nebo napište na Viber +380-42-791-0436 -Brit, který v ukrajinských ozbrojených silách bránil Mariupol, je připraven vzdát se Rusům -Brit, který je příslušníkem námořní brigády ukrajinských ozbrojených sil a podílí se na obraně Mariupolu, řekl svým přátelům a rodině, že složí zbraně a vzdají se ruským okupantům. -Zdroj: Twitterová stránka Aidana Aislina, Brita, který od roku 2018 slouží v ukrajinských ozbrojených silách, BBC s odvoláním na Aislinovy rodinné příslušníky a přátele, Atlas News s odvoláním na Aislinova přítele, který s ním mluvil. -Doslova Eislinův Twitter: Eislin: "Dostali jsme od něj zprávu: "Uplynulo 48 dní, snažili jsme se ze všech sil bránit Mariupol, ale nezbývá nám nic jiného než se vzdát ruským vojskům. -Nemáme už žádné jídlo ani munici. -Děkuji vám všem a doufám, že válka brzy skončí." -Podrobnosti: Eislin sloužil v 36. samostatné námořní brigádě ukrajinských ozbrojených sil, která se podílí na obraně Mariupolu. -Novináři BBC kontaktovali vojákovu matku Anne Woodovou, která potvrdila, že jí syn do telefonu řekl, že se hodlají vzdát. -Vojákův přítel Brennan Phillips také novinářům potvrdil, že v jejich posledním telefonickém rozhovoru Aislin mluvil o plánech jejich jednotky vzdát se. -Podle něj brigádě došla munice a potraviny. -Novináři Atlas News se spojili s jeho přítelem, který uvedl, že Eislinova jednotka se hodlá vzdát ruským jednotkám, aby nepadla do rukou takzvaných "Kadyrovců" beze zbraní a nábojů. -Na sociálních sítích se také objevil zvukový záznam telefonického rozhovoru, v němž Eislin údajně hovořil se svým americkým přítelem, který se chystal odcestovat na Ukrajinu. -V rozhovoru Eislin řekl, že se pokusili dostat z města v civilním oblečení, ale nepodařilo se jim to. -Již dříve Brit uvedl, že mu na Instagramu vyhrožovali zástupci soukromé vojenské společnosti Wagner. -Aislin pracoval jako sociální pracovník v Newark-on-Trent v hrabství Nottinghamshire, ale v letech 2015-2017 odešel bojovat proti takzvanému Islámskému státu do Sýrie. -V roce 2018 oficiálně vstoupil do Ozbrojených sil Ukrajiny a složil přísahu. -Přátelé a rodina Aislinovi říkali Johnny a na sociálních sítích je známější pod přezdívkou Kozák Gundi. -Rusko mezitím rozmístilo své raketové systémy na hranicích s Finskem. -Chtějí vyzkoušet nejen ukrajinský penis, ale i finský. -Ne, jsme se vším spokojeni -Prosím, vím, že je to pro vás těžké. Ale já bych to také ráda věděla. Ať už jsme spolu, nebo ne. -Takže budu žít se dvěma ženami z Ukrajiny a jejich dětmi? Je to rodina, která je připravena nás přijmout? -Ano, rozumím. -Nejsem si jistá, jestli stihnu uklidit do 13:00, protože je to velký byt a je toho hodně k uklízení. -Pokusím se to rychle uklidit, ale nemůžu to slíbit. -Napíšu vám, až bude apartmán připraven pro hosty. -V naší zemi je Anna považována za původní formu jména Anya. -Nepotřebuji značky. -Chci jen dobré oblečení za normální ceny a velikosti. -Byli jsme v obchodě na nádraží -Stydím se, že vás ruším od vaší práce. -Prezident Volodymyr Zelenskyj uvedl, že některé typy zbraní poskytnuté západními partnery dorazily příliš pozdě. -Řekl to v rozhovoru pro agenturu Associated Press, jehož výňatky zveřejnila prezidentská kancelář, jak uvedl deník Jevropejska pravda. -"Všechno vybavení, všechno, co už posílají, u některých typů vybavení je pozdě. 
-Protože když se bavíme například o Mariupolu, když ztratíte tisíce lidí, co teď uděláte? -Vidím stoprocentní podporu vedoucích představitelů některých zemí, to je pravda. -A někteří evropští představitelé změnili svůj postoj, ale je vidět, jakou cenu tyto změny mají," řekl Zelenskyj. -Na otázku, zda Ukrajina dostala dostatek zbraní, aby mohla ve válce něco změnit, prezident odpověděl: "Zatím ne, zatím ne". -Prezident také řekl, že kdyby Ukrajina byla členem NATO, k této válce by nedošlo, nebo by měla jinou podobu. -"Vyvíjelo by se to jinak, měli bychom ramena blízkých sousedů, mohli bychom bojovat společně. -Ale jsem si jistý, že by k válce nedošlo," dodal. -Britský premiér přislíbil Ukrajině novou vojenskou pomoc, včetně obrněných vozidel a protilodních zbraní. -To je od vás velmi milé, děkuji. -Zlepšila jsi mi den -Domluvil jsem se s paní Markétou, že mě zaměstná. -A teď hledám práci pro Svitlanu. -Může pracovat jako pekařka, cukrářka, formovačka pečiva a pomocná kuchařka. -PŘÍKLAD ŽIVOTOPISU NA POZICI SEKRETÁŘKY -Oksana Drobot -Září 1997 - červen 2000, Vysoká škola ekonomická v Kyjevě, Ekonomická fakulta, obor "Účetnictví a kontrola", bakalářský titul (prezenční). -březen - prosinec 2005 - kurzy anglického jazyka, "IngCentre", Kyjev. Kyjev. -Červenec - listopad 2009 - kurz "Učíme se vyjednávat" v Kyjevě. Kyjev. -Tajemník -Funkční odpovědnosti: -- práce s dokumenty (kancelářská práce); -- přijímání a distribuci hovorů; -- účast na organizaci různých veřejných akcí; -- Provádění osobních pokynů vedoucího. -Tajemník -březen 2002 - duben 2010 - Farama Group, Kyjev. -Funkční odpovědnosti: -- vedení obchodní korespondence; -- Práce s korespondencí; -- příjem a distribuce příchozích/odchozích hovorů; -- Plnění pokynů vedoucího a hlavního účetního; -- udržování elektronické správy dokumentů. -Sekretářka, osobní asistentka ředitele -Duben 2010 - současnost, ZapOrg, Záporoží. -Funkční odpovědnosti: -- Provádění osobních pokynů vedoucího; -- práce s kancelářským vybavením, mini PBX; -- spolupráce s kurýrní službou; -- Příprava dokumentů a materiálů potřebných pro práci manažera; -- přijímání žádostí po telefonu; -- Přijímání a evidence příchozí a odchozí korespondence; -- vypracování smluv pomocí šablon; -- objednávání kancelářských potřeb a dalšího spotřebního materiálu a zajišťování životnosti kanceláře; -- účtování práce zaměstnanců; -- objednávání letenek, zpracování služebních cest pro zaměstnance; -- kontrola čistoty a pořádku v kanceláři. -Odborné dovednosti: -- schopnost pracovat se základními aplikacemi MS Office (Access, Excel, Power Point, Word, WordPad); -- znalost kancelářského vybavení (fax a kopírovací stroj, skener, tiskárna); -- kompetentní ústní a písemný projev; -- znalost základů kancelářské práce a správy dokumentů; -- zkušenosti s organizací externích a interních schůzek, konferencí a jednání; -- Zkušenosti s přípravou a organizací služebních cest; -- dovednosti v oblasti podpory kancelářských operací; -- znalost cizích jazyků: ukrajinština - rodilý mluvčí; ruština - plynně; angličtina - středně pokročilá úroveň. -Osobní vlastnosti: -Obětavost, zodpovědnost, komunikační schopnosti, dochvilnost, iniciativa, smysl pro humor. -Další informace: -Rodinný stav: vdaná/ženatý. -Děti: syn a dcera ve věku 7 a 13 let. -Možnost služebních cest: ano. -Nemám žádné zlozvyky. -Ve svých každodenních modlitbách za vás děkuji Bohu a Panně Marii. -Dovolte jim, aby vás tuto noc zahalili do své přikrývky lásky a tepla. -Mé srdce patří tobě. 
-Univerzita Karazin vyzývá své zaměstnance a studenty, aby si pečlivě ověřovali veškeré informace, nedůvěřovali anonymním zdrojům v messengerech a na sociálních sítích, fámám a pomluvám. -Spoléhejte se pouze na oficiální informace. -Dobré odpoledne, pane Reyrarde! Omlouvám se, neviděl jsem to. Hned to udělám a pošlu vám to. -Pane Reicharde, omlouvám se, že vás obtěžuji po pracovní době. -Mám na vás ale důležitou otázku. -Dnes jsem šla za Natálií do herny, abych ji pozdravila. -Měla nepříjemnou situaci. -Protože když děti odcházely domů, tak jedna maminka odmítla vzít své dítě, protože říkala, že se přišla podívat na věci a dala je do toho pokoje s oblečením a pan Valerij jí včera otevřel pokoj našich dětí do 19:00 hodin. -Může to tak opravdu být? -Místnost je otevřena do 16:00. -V konečném důsledku je to naše odpovědnost. -Žena se dvěma dětmi hledá ubytování na dva až šest měsíců, podle toho, jak se bude vyvíjet situace v naší zemi. -V naší zemi je válka a my se musíme přesunout na bezpečné místo. -Mohu pomáhat majitelům v domácnosti, vařit nebo uklízet. -Umím pracovat na zahradě, umím pěstovat zeleninu a květiny. -Umím se postarat o zvířata. -Žádám slušnou a laskavou rodinu, která je ochotná nám poskytnout bydlení a podpořit nás v naší situaci. -Čekám na vaši odpověď. -Chcete-li se se mnou spojit, napište mi prosím na adresu anonymized@example.com. Děkuji. diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/getDistVersion.ts b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/getDistVersion.ts deleted file mode 100644 index d474e1f9ead19135a390c930e5801f4e6910c0a4..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/getDistVersion.ts +++ /dev/null @@ -1,29 +0,0 @@ -import https from 'https'; - -const getDistVersion = async (packageName: string, distTag: string) => { - const url = `https://registry.npmjs.org/-/package/${packageName}/dist-tags`; - - return new Promise((resolve, reject) => { - https - .get(url, (res) => { - let body = ''; - - res.on('data', (chunk) => (body += chunk)); - res.on('end', () => { - try { - const json = JSON.parse(body); - const version = json[distTag]; - if (!version) { - reject(new Error('Error getting version')); - } - resolve(version); - } catch { - reject(new Error('Could not parse version response')); - } - }); - }) - .on('error', (err) => reject(err)); - }); -}; - -export default getDistVersion; diff --git a/spaces/zhoupin30/zhoupin30/README.md b/spaces/zhoupin30/zhoupin30/README.md deleted file mode 100644 index d65eafbc8431818f738e8e086455fa6159f101bb..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/README.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
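For the `getDistVersion.ts` module removed in the diff above, here is a minimal, hypothetical usage sketch; the relative import path, package name, and dist-tag are illustrative assumptions and not part of the original source.

```ts
// Hypothetical usage of the getDistVersion helper from the removed module above.
// The import path and the package/dist-tag values are illustrative only.
import getDistVersion from './getDistVersion';

const checkLatest = async () => {
  try {
    // Ask the npm registry for the version published under the "latest" dist-tag.
    const version = await getDistVersion('simple-update-notifier', 'latest');
    console.log('latest published version:', version);
  } catch (err) {
    // getDistVersion rejects when the registry response cannot be parsed
    // or the requested dist-tag does not exist.
    console.error('version check failed:', err);
  }
};

checkLatest();
```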
- -# Bingo - -Bingo, a New Bing that lets you breathe easy. - -A faithful recreation of the main features of the New Bing web UI, accessible from mainland China, compatible with most of Microsoft Bing AI's functionality, and easy to deploy yourself. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![GitHub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
- -## Demo site - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## Features - -- Fully rewritten on top of Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI. -- Docker builds are supported for quick and easy deployment and access. -- The Cookie can be configured globally and shared by all users. -- Continuous voice conversation is supported - -## RoadMap - - - [x] wss forwarding - - [x] One-click deployment - - [x] Improved mobile layout - - [x] Image generation - - [x] Voice input (voice commands supported; currently PC Edge and Chrome browsers only) - - [x] Voice output (needs to be enabled manually) - - [x] Image input - - [x] Custom domains - - [ ] Chat history - - [ ] Dark mode - - [ ] Built-in prompts - - [ ] Offline access - - [ ] Internationalized translations - -## One-click deployment -You can also deploy your own New Bing AI to 🤗 HuggingFace with one click. - -### Deploy to Huggingface -1. Click this badge -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged. - -2. After the deployment finishes, open "Settings" > "Site domain", copy the HF domain, and share it with others. - -> Huggingface does not support binding your own domain, but there are two workarounds -> 1. Via Cloudflare Workers: [deploy Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. Via Github Pages and an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4) - -### Custom domains with Cloudflare Workers - -> Core code: [worker.js](./cloudflare/worker.js) (an illustrative sketch also appears after the deployment sections below) - -- [Register a Cloudflare account](https://dash.cloudflare.com/sign-up) - -- Add a new site; you need your own domain and its `Name Server` must be delegated to Cloudflare (search online for details). - -- Open "Workers" from the left-hand menu and click "Create a Worker". - -- Create the Worker service, copy the full code of [worker.js](./cloudflare/worker.js) into it, adjust it according to the comments, then save and deploy. - -- Set your custom access domain under Triggers. - -### Deploying to other platforms -
- -Other platforms are currently being blocked by New Bing and run into many problems, so they are no longer recommended; the instructions are kept here for those who still need them. - - -#### Deploy to Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### Deploy to Vercel -If you are a paying Vercel user, you can use the link below for one-click deployment to Vercel. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended. - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### Deploy to Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
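To illustrate the Cloudflare Workers custom-domain option described above, here is a minimal sketch of a reverse-proxy Worker, assuming the goal is simply to forward requests from your own domain to an already deployed instance. The target host below is a placeholder, and the repository's actual [worker.js](./cloudflare/worker.js) is the version you should deploy.

```ts
// Minimal reverse-proxy Worker sketch (illustrative only; use ./cloudflare/worker.js for real deployments).
// TARGET_HOST is a placeholder for the domain of your deployed Bingo instance.
const TARGET_HOST = 'your-space-name.hf.space';

export default {
  async fetch(request: Request): Promise<Response> {
    // Rewrite the incoming URL so it points at the deployed instance,
    // keeping the path, query string, method, headers and body unchanged.
    const url = new URL(request.url);
    url.host = TARGET_HOST;
    return fetch(new Request(url.toString(), request));
  },
};
```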
- -## Requirements - -- Node.js >= 18 -- Bing AI [identity information](#如何获取-BING_HEADER) - -## Installation and usage - -> Microsoft is currently blocking quite aggressively, so [deploying to Huggingface](#部署到-huggingface) is the recommended option. - -* Start with Node - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # pnpm i is recommended -npm run build -npm run start -``` - -* Start with Docker -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# or -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## How to obtain BING_HEADER -> Setting BING_HEADER means sharing your own account with everyone who uses this service; if you do not need login-free image generation, setting this variable is not recommended. - -Open https://www.bing.com and sign in, then visit https://www.bing.com/turing/captcha/challenge , pass the human verification, and then - -![BING HEADER](./docs/images/curl.png) - -> The copied content should look like the example below. After confirming the format is correct, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and then paste the result from the clipboard. (You can also verify it on the web page first.) - -The following is a format reference. Note that the format saved from the web page starts with `curl`, while the `BING_HEADER` configured on the server side is in `base64` format; the two are not interchangeable. -
          -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
          - -
          -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5
ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
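As a rough illustration of how the two formats above relate, the following Node sketch assumes that the BING_HEADER value is simply the base64 encoding of the copied `curl` command saved to a local file; the exact whitespace handling of the official conversion page may differ, so that page remains the recommended way to produce the value. The file name `curl.txt` is hypothetical.

```ts
// Rough sketch (assumption: BING_HEADER is the base64 encoding of the copied curl command text;
// the official conversion page may normalise whitespace differently).
import { readFileSync } from 'node:fs';

// curl.txt is a hypothetical local file holding the command copied from the browser dev tools.
const curlCommand = readFileSync('curl.txt', 'utf8').trim();
const bingHeader = Buffer.from(curlCommand, 'utf8').toString('base64');

console.log(bingHeader); // set this value as the BING_HEADER environment variable
```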
- - -## Acknowledgements - - Thanks to [EdgeGPT](https://github.com/acheong08/EdgeGPT) for the proxy API approach. - - Thanks to [Vercel AI](https://github.com/vercel-labs/ai-chatbot) for the base scaffolding, and to [ChatHub](https://github.com/chathub-dev/chathub) and [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) for parts of the code. - - -## Questions and discussion - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - -